1) The CAP theorem defines consistency, availability, and partition tolerance in ways that are contradictory, counterintuitive, or dependent on unrealistic network assumptions.
2) Kleppmann proposes alternative frameworks like "delay sensitivity" that focus on how long users are willing to wait for responses and realistic network behaviors.
3) Rather than absolute definitions of consistency models, a continuum of options exists depending on the invariants required by each system. The focus should be on selecting the appropriate consistency level.
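The idea of choosing a consistency level rather than an absolute can be made concrete with a quorum-replication sketch (a hypothetical, minimal toy; the replica layout and names are invented for illustration, not Kleppmann's formulation): with N replicas, requiring W write acknowledgements and R read acknowledgements such that R + W > N forces every read quorum to overlap the latest write quorum.

```python
# Minimal quorum read/write sketch: with R + W > N, every read quorum
# intersects the most recent write quorum, so reads see the latest version.

N, W, R = 3, 2, 2  # replicas, write quorum size, read quorum size

replicas = [{"version": 0, "value": None} for _ in range(N)]

def write(value, version):
    # A write succeeds once W replicas acknowledge; here, the first W.
    for rep in replicas[:W]:
        rep["version"] = version
        rep["value"] = value

def read():
    # Read R replicas and keep the highest-versioned value. We deliberately
    # read the "other side" of the cluster; the overlap replica saves us.
    acks = replicas[-R:]
    latest = max(acks, key=lambda rep: rep["version"])
    return latest["value"]

write("cap-critique", version=1)
assert read() == "cap-critique"  # the overlapping replica saw the write
```

Shrinking R or W below the overlap threshold buys lower latency at the cost of possibly stale reads, which is exactly the continuum of options the summary describes.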
The Transformation Priority Premise is a tool invented by Robert C. "Uncle Bob" Martin that helps in discovering the sequence in which to write unit tests and production code in Test-Driven Development. This talk gives an overview of the Transformation Priority Premise - and Aliens.
What if your database wasn't in just one place, but was all around you? What type of framework will you use in a post-server world? Where should your data be stored, and who's planning it? How tolerant is your system to failure? Unbase is an ideology and aspirational framework for distributed data. Unbase uses a physics-first approach to transcending the server, increasing availability, and meeting user expectations. In this presentation, we discuss key design concepts, their scientific basis, and our goals for building Unbase.
Melanie Rieback, Klaus Kursawe - Blockchain Security: Melting the "Silver Bullet" (Codemotion)
Blockchain (and cryptocurrency) is an evolution of 20-year-old research from scientists like Chaum, Lamport, and Castro & Liskov. Due to the current hype, it's hard to distinguish beneficial aspects of the technology from a desire for a "silver bullet" for device security, verifiable logistics, or "saving democracy". The problem: blockchain introduces new security challenges - and blind adoption without understanding reduces overall security. In this talk, Melanie Rieback and Klaus Kursawe explain the pitfalls and limits of blockchain, so you can avoid making your applications LESS secure.
Continuous Automated Testing - CAST Conference Workshop, August 2014 (Noah Sussman)
CAST 2014 New York: The Art and Science of Testing
The Association for Software Testing www.associationforsoftwaretesting.org
COURSE DESCRIPTION
Automated tools provide test professionals with the capability to make relevant observations even in the fastest-paced environments. Automated testing is also a powerful tool for improving communication between software engineers. This is important because good communication is a prerequisite for growing a great software engineering organization.
This workshop will explore the continuous testing of software systems. Special focus will be given to the situation where the engineering team is deploying code to production so frequently that it is not possible to perform deep regression testing before each release.
People who participate in this course will learn pragmatic automated testing strategies like:
* Data analysis on the command line with find, grep and wc.
* Network analysis with Chrome Inspector, Charles and netcat.
* Using code churn to predict hotspots where bugs may occur.
* Putting stack traces in context with automated SCM blame emails.
* Using statsd to instrument a whole application.
* Testing in production.
* Monitoring-as-testing.
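The code-churn bullet above can be sketched in a few lines (a hypothetical toy, not the workshop's material; in practice the commit history would come from parsing `git log --name-only` output): count how often each file changes and rank the most-churned files as likely bug hotspots.

```python
from collections import Counter

# Toy commit history: each commit is the list of files it touched.
commits = [
    ["checkout.py", "cart.py"],
    ["checkout.py"],
    ["cart.py", "checkout.py"],
    ["README.md"],
]

def churn_hotspots(commits, top=3):
    """Rank files by how often they change; frequent churn predicts bugs."""
    churn = Counter(f for commit in commits for f in commit)
    return churn.most_common(top)

print(churn_hotspots(commits))
# checkout.py changed 3 times, cart.py twice, README.md once
```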
Technical level: participants should have some familiarity with the command line and with editing code using a text editor or IDE. Familiarity with Git, SVN or another version control system is helpful but not required. Likewise some knowledge of Web servers is helpful but not required. It is desirable for participants to bring laptops.
BIO
From 2010 to 2012 Noah was a Test Architect at Etsy. He helped build Etsy's continuous integration system, and has helped countless other engineers develop successful automated testing strategies. These days Noah is an independent consultant in New York. He is passionate about helping engineers understand and use automated tools as they work to scale their applications more effectively.
Case Study: "Enabling Data Interoperability through the Healthcare Continuum"
Charles Jaffe, MD
CEO
Health Level 7 (HL7)
iHT2 case studies and presentations illustrate challenges, successes and various factors in the outcomes of numerous types of health IT implementations. They are interactive and dynamic sessions providing opportunity for dialogue, debate and exchanging ideas and best practices. This session will be presented by a thought leader in the provider, payer or government space.
A Time Series of Networks: Is Everything OK? Are There Anomalies? (CSIRO)
Consider how bills get voted in the Parliament/Congress. Members belonging to different parties may vote differently. As time passes the voting patterns can change. These bill voting patterns can be denoted as a network. At each time stamp, a different network emerges. The collection of networks indexed by time is a time series -- of networks. We study these network time series. What are the features of these networks? How do the features change over time? Are there anomalous networks? We investigate these questions using real world networks. We use graph theoretic features to transform the network to a feature space and model their evolution using time series methods. Then we find anomalous networks using time series residuals. Our results coincide with noteworthy, historical events.
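The pipeline the abstract describes - graph features per snapshot, a time-series model, anomalies from residuals - can be sketched as follows (a hypothetical toy, not the authors' method: a single feature, edge count, and a trailing-median predictor stand in for the richer feature set and time-series models the talk uses):

```python
import statistics

def edge_counts(snapshots):
    """Map each network snapshot (a set of edges) to a scalar graph feature."""
    return [len(edges) for edges in snapshots]

def anomalies(series, window=3, threshold=5):
    """Flag time stamps whose residual from a trailing median is large."""
    flagged = []
    for t in range(window, len(series)):
        predicted = statistics.median(series[t - window:t])
        if abs(series[t] - predicted) > threshold:
            flagged.append(t)
    return flagged

# Ten voting-network snapshots; snapshot 7 has an unusual burst of edges,
# standing in for a noteworthy historical event.
sizes = [10, 11, 10, 12, 11, 10, 11, 30, 11, 10]
snapshots = [{(i, i + 1) for i in range(n)} for n in sizes]
print(anomalies(edge_counts(snapshots)))  # snapshot 7 stands out
```

A median predictor is used rather than a mean so the anomaly itself does not contaminate the forecasts immediately after it.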
A Digital Conversation Meetup, June 2014. The closing presentation of the evening was shared by Adam Sefton.
Talking on the subject of complexity of the Next Web, he suggested instead of worrying about trying to organise and control this world, both digital and offline, we should embrace complexity (unicorns and all) and allow solutions to evolve and emerge naturally.
Adam Sefton is Global Executive Creative Director at Reading Room. He's been working in digital on a variety of levels for over 10 years, the last 6 in senior agency positions. He is excitable, energetic and enthusiastic about the internet, how people like to use it and what might happen to it in the future. He is returning to discuss how the emergent principles and technologies underlying the next iteration of the web should influence organisations' digital strategies. What are the challenges and opportunities facing digital decision makers?
Rapid-fire talk going through a number of topics that we'd pre-selected: one slide on the question, one or two slides on an answer.
Much goodness; for reference, here are the subjects:
Planes: Let's go from myth to reality in a couple of slides, including updates since 2015
Transportation in general, cars, trucks, trains and ships….
Why can we still do this?
What’s not changed?
The technology, reactive, static vs. predictive
The humans, why do we ignore them?
Why this needs to change…what does the future hold?
Why DO we stare into the abyss, and why do we continue to deny it?
Hacking humans, molecular
Hacking humans, consciousness
Why DO we need to fix it, and HOW do we fix it?
Fix the human
Fix the basics
Intelligent systems working collaboratively with us
Augmented intelligence, the science of giving us the edge.
Collaborate
In 1971, David Parnas wrote the great paper "On the Criteria To Be Used in Decomposing Systems into Modules," and yet the problem of breaking down big projects into small parts that work well together remains a struggle in the industry. The ability to decompose a problem space and, in turn, compose a solution is essential to our work.
Things have gotten worse since 1971. With microservices, big data, and streaming systems, we're all going to be distributed systems engineers sooner or later. In distributed systems, effective decomposition has an even greater impact on the reliability, performance, and availability of our systems as it determines the frequency and weight of communication in the system.
This talk speaks to the essential considerations for defining and evaluating boundaries and behaviors in large-scale distributed systems. It will touch on topics such as bulkhead design and architectural evolution.
Deja vu Security CEO Adam Cecchetti was invited to present the keynote speech at this year's (sold-out!) Hushcon in Seattle. Rich in humorous anecdotes and practical analysis, Test For Echo explores the relationship between time, ken, and the future of computer security.
Presented at #H2OWorld 2017 in Mountain View, CA.
Enjoy the video: https://youtu.be/TBJqgvXYhfo.
Learn more about H2O.ai: https://www.h2o.ai/.
Follow @h2oai: https://twitter.com/h2oai.
- - -
Abstract:
Machine learning is at the forefront of many recent advances in science and technology, enabled in part by the sophisticated models and algorithms that have been recently introduced. However, as a consequence of this complexity, machine learning essentially acts as a black-box as far as users are concerned, making it incredibly difficult to understand, predict, or "trust" their behavior. In this talk, I will describe our research on approaches that explain the predictions of ANY classifier in an interpretable and faithful manner.
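The core idea behind model-agnostic explanation (of which LIME, from the speaker's group, is the best-known instance) can be illustrated by perturbing one input feature at a time and watching the black-box score move. The sketch below is a deliberately crude, hypothetical toy sensitivity analysis, not the actual algorithm; all names are invented:

```python
def explain(black_box, instance):
    """Attribute a prediction to each binary feature by toggling it off
    and measuring how much the black-box score drops: a crude, local,
    model-agnostic sensitivity analysis."""
    base = black_box(instance)
    contributions = {}
    for name in instance:
        perturbed = dict(instance, **{name: 0})  # zero out one feature
        contributions[name] = base - black_box(perturbed)
    return contributions

# A "black box" we pretend we cannot inspect: here, a linear spam scorer.
def model(x):
    return 3 * x["contains_urgent"] + 1 * x["contains_free"] + 0 * x["length"]

email = {"contains_urgent": 1, "contains_free": 1, "length": 1}
print(explain(model, email))
# "contains_urgent" contributes most to the spam score
```

Because `explain` only ever calls `black_box` as a function, the same procedure works unchanged for a random forest or a neural network, which is the "ANY classifier" property the abstract emphasizes.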
Sameer's Bio:
Dr. Sameer Singh is an Assistant Professor of Computer Science at the University of California, Irvine. He is working on large-scale and interpretable machine learning applied to natural language processing. Sameer was a Postdoctoral Research Associate at the University of Washington and received his PhD from the University of Massachusetts, Amherst, during which he also worked at Microsoft Research, Google Research, and Yahoo! Labs on massive-scale machine learning. He was awarded the Adobe Research Data Science Faculty Award, was selected as a DARPA Riser, won the grand prize in the Yelp dataset challenge, and received the Yahoo! Key Scientific Challenges fellowship. Sameer has published extensively at top-tier machine learning and natural language processing conferences. (http://sameersingh.org)
John Hugg presented at Boston Data Ignite on Jan. 28, 2016. These slides cover the Two Generals Problem, with an interesting analogy between ninjas, Jean-Claude Van Damme and distributed systems!
You Online: Identity, Privacy, and the Self (Abhay Agarwal)
In the current landscape of media and communication, our world is undergoing immense and rapid transformations in the breadth and format of how we interconnect. At the same time, it is difficult for even the most technically adept to fully comprehend the scope of these projects. This talk is a musing on the ideas behind online identity and mass communication in the 21st century. It intends to partially unravel the mystery behind networked social identity, as well as provide the tools for even the technically disinclined to understand the possibilities for control, surveillance, freedom, and liberated identity within this new topology.
Some included topics:
* Online surveillance, and how deleting your Facebook isn’t enough
* Big Data analytics: why your data is worth money, and the (im)possibility of privacy
* Theories and Paradoxes in a hyper-connected future
* Alternative internets (or darknets), and what they represent.
The Next Moore's Law: Netness - describes the growing and changing power of connectivity - and why connectivity is replacing Moore's Law as the most important source of opportunity. It suggests that "everything wants to be connected" because the more things are connected (can communicate) the better things work. It describes connectivity as evolving to become fields rather than networks. Original date of presentation: June, 2009.
Architecting a Post Mortem - Velocity 2018 San Jose Tutorial (Will Gallego)
Engineers are frequently tasked with being front and center in intense, highly demanding situations that require clear lines of communication. Our systems fail not because of a lack of attention or laziness but due to cognitive dissonance between what we believe about our environments and the objective interactions both internal and external to them.
It’s time to revisit your established beliefs surrounding failure scenarios, with an emphasis not on the “who” in decision making but instead on the “why” behind those decisions. With attention to growth mindset, you can encourage your teams to reject shallow explanations of human error for said failures and focus on how to gain greater understanding of these complexities and push the boundaries on what you believe to be static, unchanging context outside your sphere of influence.
Will Gallego walks you through the structure of postmortems used at large tech companies with real-world examples of failure scenarios and debunks myths regularly attributed to failures. You’ll learn how to incorporate open dialogue within and between teams to bridge these gaps in understanding.
You are the ultimate data wrangler. The polyglot master of Python and R. You know all about the differences between linear and logistic regression. You know when to use a dimensionality reduction algorithm and when to use a neural net. You have petabytes of data taking structural form at your command, and you have the R-squared score to prove it!
But all of your data wrangling and number crunching won't matter if the decision makers ignore your data.
The tools to communicate the message in your data are simple, yet they can be hard to learn. So, let's talk about the five critical communication tools you need to master "The Art of Speaking Data."
How Does XfilesPro Ensure Security While Sharing Documents in Salesforce? (XfilesPro)
Worried about the security of documents you share in Salesforce? Fret no more! Here are the top-notch security standards XfilesPro upholds to keep your Salesforce documents secure while sharing them with internal or external people.
To learn more, read the blog: https://www.xfilespro.com/how-does-xfilespro-make-document-sharing-secure-and-seamless-in-salesforce/
Multiply Your Crypto Portfolio with the Innovative Features of Advanced Crypt... (Hivelance Technology)
Cryptocurrency trading bots are computer programs designed to automate buying, selling, and managing cryptocurrency transactions. These bots use advanced algorithms and machine learning techniques to analyze market data, identify trading opportunities, and execute trades on behalf of their users. By automating the decision-making process, crypto trading bots can react to market changes faster than human traders.
Hivelance, a leading provider of cryptocurrency trading bot development services, stands out as a premier choice for crypto traders and developers. Its team of seasoned cryptocurrency experts and software engineers deeply understands the crypto market and the latest trends in automated trading, and it leverages the latest technologies and tools in the industry, including advanced AI and machine learning algorithms, to create highly efficient and adaptable crypto trading bots.
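The kind of rule such bots automate can be illustrated with a classic moving-average crossover (a hypothetical toy signal for illustration only, not any vendor's product and not trading advice): buy when the short-window average of recent prices rises above the long-window average, sell when it falls below.

```python
def sma(prices, window):
    """Simple moving average of the trailing `window` prices."""
    return sum(prices[-window:]) / window

def signal(prices, short=3, long=5):
    """Moving-average crossover: 'buy' when short-term momentum leads."""
    if len(prices) < long:
        return "hold"  # not enough history yet
    if sma(prices, short) > sma(prices, long):
        return "buy"
    if sma(prices, short) < sma(prices, long):
        return "sell"
    return "hold"

print(signal([100, 101, 102, 104, 107]))  # rising prices -> "buy"
print(signal([107, 104, 102, 101, 100]))  # falling prices -> "sell"
```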
Quarkus Hidden and Forbidden Extensions (Max Andersen)
Quarkus has a vast extension ecosystem and is known for its supersonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus, along with some of its lesser-known features, extensions and development techniques.
Globus Compute with IRI Workflows - GlobusWorld 2024 (Globus)
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of this work, the team is investigating ways to speed up the time to solution for many different parts of the DIII-D workflow, including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks, and we describe a brief proof of concept showing how Globus Compute could help schedule jobs and serve as a tool to connect compute at different facilities.
Developing Distributed High-performance Computing Capabilities of an Open Sci... (Globus)
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Strategies for Successful Data Migration Tools (varshanayak241)
Data migration is a complex but essential task for organizations aiming to modernize their IT infrastructure and leverage new technologies. By understanding common challenges and implementing these strategies, businesses can achieve a successful migration with minimal disruption. Data Migration Tool like Ask On Data play a pivotal role in this journey, offering features that streamline the process, ensure data integrity, and maintain security. With the right approach and tools, organizations can turn the challenge of data migration into an opportunity for growth and innovation.
Field Employee Tracking System | MiTrack App | Best Employee Tracking Solution | ... (informapgpstrackings)
Keep tabs on your field staff effortlessly with Informap Technology Centre LLC. Real-time tracking, task assignment, and smart features for efficient management. Request a live demo today!
For more details, visit us : https://informapuae.com/field-staff-tracking/
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
Exploring Innovations in Data Repository Solutions - Insights from the U.S. G...Globus
The U.S. Geological Survey (USGS) has made substantial investments in meeting evolving scientific, technical, and policy driven demands on storing, managing, and delivering data. As these demands continue to grow in complexity and scale, the USGS must continue to explore innovative solutions to improve its management, curation, sharing, delivering, and preservation approaches for large-scale research data. Supporting these needs, the USGS has partnered with the University of Chicago-Globus to research and develop advanced repository components and workflows leveraging its current investment in Globus. The primary outcome of this partnership includes the development of a prototype enterprise repository, driven by USGS Data Release requirements, through exploration and implementation of the entire suite of the Globus platform offerings, including Globus Flow, Globus Auth, Globus Transfer, and Globus Search. This presentation will provide insights into this research partnership, introduce the unique requirements and challenges being addressed and provide relevant project progress.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
Software Engineering, Software Consulting, Tech Lead.
Spring Boot, Spring Cloud, Spring Core, Spring JDBC, Spring Security,
Spring Transaction, Spring MVC,
Log4j, REST/SOAP WEB-SERVICES.
Globus Connect Server Deep Dive - GlobusWorld 2024Globus
We explore the Globus Connect Server (GCS) architecture and experiment with advanced configuration options and use cases. This content is targeted at system administrators who are familiar with GCS and currently operate—or are planning to operate—broader deployments at their institution.
OpenFOAM solver for Helmholtz equation, helmholtzFoam / helmholtzBubbleFoamtakuyayamamoto1800
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Why React Native as a Strategic Advantage for Startup Innovation.pdfayushiqss
Do you know that React Native is being increasingly adopted by startups as well as big companies in the mobile app development industry? Big names like Facebook, Instagram, and Pinterest have already integrated this robust open-source framework.
In fact, according to a report by Statista, the number of React Native developers has been steadily increasing over the years, reaching an estimated 1.9 million by the end of 2024. This means that the demand for this framework in the job market has been growing making it a valuable skill.
But what makes React Native so popular for mobile application development? It offers excellent cross-platform capabilities among other benefits. This way, with React Native, developers can write code once and run it on both iOS and Android devices thus saving time and resources leading to shorter development cycles hence faster time-to-market for your app.
Let’s take the example of a startup, which wanted to release their app on both iOS and Android at once. Through the use of React Native they managed to create an app and bring it into the market within a very short period. This helped them gain an advantage over their competitors because they had access to a large user base who were able to generate revenue quickly for them.
Advanced Flow Concepts Every Developer Should KnowPeter Caitens
Tim Combridge from Sensible Giraffe and Salesforce Ben presents some important tips that all developers should know when dealing with Flows in Salesforce.
How to Position Your Globus Data Portal for Success Ten Good PracticesGlobus
Science gateways allow science and engineering communities to access shared data, software, computing services, and instruments. Science gateways have gained a lot of traction in the last twenty years, as evidenced by projects such as the Science Gateways Community Institute (SGCI) and the Center of Excellence on Science Gateways (SGX3) in the US, The Australian Research Data Commons (ARDC) and its platforms in Australia, and the projects around Virtual Research Environments in Europe. A few mature frameworks have evolved with their different strengths and foci and have been taken up by a larger community such as the Globus Data Portal, Hubzero, Tapis, and Galaxy. However, even when gateways are built on successful frameworks, they continue to face the challenges of ongoing maintenance costs and how to meet the ever-expanding needs of the community they serve with enhanced features. It is not uncommon that gateways with compelling use cases are nonetheless unable to get past the prototype phase and become a full production service, or if they do, they don't survive more than a couple of years. While there is no guaranteed pathway to success, it seems likely that for any gateway there is a need for a strong community and/or solid funding streams to create and sustain its success. With over twenty years of examples to draw from, this presentation goes into detail for ten factors common to successful and enduring gateways that effectively serve as best practices for any new or developing gateway.
7. Chronology of CAP
2000: Brewer publicizes his conjecture in various talks and papers – “The CAP Principle”.
2002: Gilbert and Lynch formally proved Brewer’s conjecture, and CAP Theorem was born.
1970s, 80s, and 90s: Absolutely[1] nothing[2] happened[3]. Nothing[4] to[5] see[6] here[7], folks.
[1] Paul R Johnson and Robert H Thomas. RFC 677: The maintenance of duplicate databases. Network Working Group, January 1975. URL https://tools.ietf.org/html/rfc677.
[2] Jim N Gray, Raymond A Lorie, Gianfranco R Putzolu, and Irving L Traiger. Granularity of locks and degrees of consistency in a shared data base. In G M Nijssen, editor,
Modelling in Data Base Management Systems: Proceedings of the IFIP Working Conference on Modelling in Data Base Management Systems, pages 364–394. Elsevier/North
Holland, 1976.
[3] Leslie Lamport. How to make a multiprocessor computer that correctly executes multiprocess programs. IEEE Transactions on Computers, 28(9):690–691, September 1979.
doi:10.1109/TC.1979.1675439.
[4] Bruce G Lindsay, Patricia Griffiths Selinger, C Galtieri, Jim N Gray, Raymond A Lorie, Thomas G Price, Gianfranco R Putzolu, Irving L Traiger, and Bradford W Wade. Notes on
distributed databases. Technical Report RJ2571(33471), IBM Research, July 1979.
[5] Bowen Alpern and Fred B Schneider. Defining liveness. Information Processing Letters, 21(4):181–185, October 1985. doi:10.1016/0020-0190(85)90056-0.
[6] Susan B Davidson, Hector Garcia-Molina, and Dale Skeen. Consistency in partitioned networks. ACM Computing Surveys, 17(3):341–370, September 1985.
doi:10.1145/5505.5508.
[7] Maurice P Herlihy and Jeannette M Wing. Linearizability: a correctness condition for concurrent objects. ACM Transactions on Programming Languages and Systems, 12(3):463–492, July 1990.
10. “Consistency”
“I’m totally pro-choice.” (Fox News, October 31, 1999)
“I’m pro-life.” (CPAC, February 10, 2011)
“I wanted to do this for myself. I had to do it for myself.”
(Time, August 18, 2015)
“I don’t want it for myself. I don’t need it for myself.” (ABC
News, November 20, 2015)
“I think the institution of marriage should be between a
man and a woman.” (The Advocate, February 15, 2000)
“If two people dig each other, they dig each other.”
(Trump University “Trump Blog,” December 22, 2005)
“I’m against gay marriage.” (Fox News, April 14, 2011)
11. What is “Consistency”?
● Brewer defines consistency as one-copy-serializability (1SR)
● Gilbert and Lynch define consistency as linearizability
14. Put Simply:
Linearizability can be viewed as a special case of strict
serializability where transactions are restricted to consist
of a single operation applied to a single object.[1]
[1] M. P. Herlihy and J. M. Wing. Linearizability: A Correctness Condition for Concurrent Objects. ACM Transactions on Programming Languages and Systems, Vol. 12, No. 3, July 1990.
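To make the "single operation on a single object" framing concrete, here is a minimal sketch (not from the talk; the function and history names are ours) of a brute-force linearizability check on a history of reads and writes to one register. An execution is linearizable if some total order of the operations respects real-time order and register semantics:

```python
from itertools import permutations

# Each op: (name, start, end, kind, value)
# kind is "w" (write value) or "r" (read that observed value).

def is_linearizable(history):
    """Brute-force check: does some total order of the operations
    (a) respect real-time order (if a ends before b starts, a must
        precede b), and
    (b) satisfy register semantics (each read returns the value of
        the most recent preceding write)?"""
    for order in permutations(history):
        pos = {op[0]: i for i, op in enumerate(order)}
        # (a) real-time order: a.end < b.start implies a before b
        if not all(pos[a[0]] < pos[b[0]]
                   for a in history for b in history
                   if a[2] < b[1]):
            continue
        # (b) register semantics
        current, ok = None, True
        for name, start, end, kind, value in order:
            if kind == "w":
                current = value
            elif value != current:
                ok = False
                break
        if ok:
            return True
    return False

# A read concurrent with a write may return either value: linearizable.
h1 = [("w1", 0, 3, "w", 1), ("r1", 1, 2, "r", 1)]
# A stale read after the write completed: not linearizable.
h2 = [("w1", 0, 1, "w", 1), ("r1", 2, 3, "r", None)]
assert is_linearizable(h1)
assert not is_linearizable(h2)
```

Restricting histories to single-operation "transactions" like these is exactly the special case of strict serializability that Herlihy and Wing describe.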
20. “Availability”
Gilbert & Lynch defined availability differently.
Property of algorithm, or observed metric?
Brewer:
“availability is obviously continuous from 0 to 100 percent”
Gilbert & Lynch:
“For a distributed system to be continuously available, every request received by a non-failing node in the
system must result in a response”
21. “Availability”
The Server is DOWN! – What is “UP” anyway?
● Is it a server that fails to respond within 1000ms?
● What if it responds 5 minutes later?
● What if it responds, but with invalid data?
● What if it responds but the response is not received?
● What if your request packet is dropped, but all others are fine?
● What is the sound of one hand clapping?
24. “Availability”
● Gilbert & Lynch definition – Contradictory and counter-intuitive
● Brewer’s definition is ok
● Nonsensical to call an algorithm “Available”
25. “Partition Tolerance”
A Network partition is:
“a communication failure in which the network is split into disjoint
sub-networks, with no communication possible across sub-networks”
26. “Partition Tolerance”
Most real networks are fair-loss links.
A fair-loss link is one where the probability that any given message you send is
delivered is non-zero
(and less than 100%)
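The practical consequence of a fair-loss link is that retransmission converts loss into latency: for any delivery probability above zero, resending eventually gets the message through. A small simulation sketch (our own illustration, not from the talk):

```python
import random

def send_until_delivered(p_deliver, rng):
    """Retransmit over a fair-loss link until one copy is delivered.
    For any p_deliver > 0 this terminates with probability 1; the
    attempt count follows a geometric distribution with mean 1/p."""
    attempts = 1
    while rng.random() >= p_deliver:  # this copy was dropped
        attempts += 1
    return attempts

rng = random.Random(42)
# Even with 90% packet loss, every message eventually gets through;
# unreliability shows up as extra latency, not as lost data.
attempts = [send_until_delivered(0.1, rng) for _ in range(1000)]
print(f"mean attempts: {sum(attempts) / len(attempts):.1f}")  # ≈ 1 / 0.1 = 10
```

This is why "the network is partitioned" and "the network is slow" are hard to tell apart from inside the system, which matters for the availability definitions above.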
28. CA AP CP CCCP
Using Gilbert & Lynch definitions,
essentially only two options:
CA - Avoid the gulag, dear comrade: coordinate carefully with your party official.
AP - You’re on your own, capitalist swine!
CP - Not really any different from CA
CCCP - Purge all your data and start over every few years
29. PROVE IT
From “Brewer’s Conjecture and the Feasibility of Consistent, Available, Partition-Tolerant Web
Services”, Gilbert & Lynch, 2002
30. PROVE IT
G&L offer irrefutable proof that:
The operation of their algorithm is non-linearizable only if the partition lasts
forever.
But:
Partitions don’t last forever!
Sometimes faults aren’t detected!
Maybe you don’t even want linearizability!
31. GREETINGS PROFESSOR BREWER
HELLO
CAP IS A STRANGE GAME.
THE ONLY WINNING MOVE IS NOT
TO PLAY.
HOW ABOUT A NICE GAME OF CHESS?
32. In my humble opinion...
There’s a simpler way to say it:
● Some consistency models require total event ordering
● Any up-to-date list can exist only at a single point in space
● We are not optimistic about FTL information transfer
Be patient, or be self-sufficient. No whining.
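The "total event ordering" point above can be shown in its simplest possible form: a single sequencer at a single place. Every writer must coordinate with that one point, which is exactly where the latency cost of totally ordered consistency models comes from. A toy sketch (the `Sequencer` class is our illustration, not something from the talk):

```python
import itertools
import threading

class Sequencer:
    """One counter in one place. Total order is easy when everything
    funnels through a single point -- and that funneling is the cost."""
    def __init__(self):
        self._counter = itertools.count(1)
        self._lock = threading.Lock()  # serialize concurrent callers

    def next_seq(self):
        with self._lock:
            return next(self._counter)

seq = Sequencer()
events = [(seq.next_seq(), name) for name in ("a", "b", "c")]
# Every observer that sorts by sequence number sees the same order.
assert sorted(events) == events
```

Remove the single sequencer and nodes can only agree on a partial order without further coordination, hence "be patient, or be self-sufficient."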
33. Kleppmann proposes a “Delay sensitivity framework”
In a nutshell: Travel takes time, how patient can you be?
So What’s better?
34. “Delay sensitivity framework”
● Network latency is lumpy / uncertain
● Very few scenarios involve genuinely reliable networks - packet loss is common
● Understand the latency lower bounds for different consistency models
35. Proposed Terminology
● Availability - Empirical measurements only!
● Delay Sensitive - How patient can we be for a given operation?
● Network Faults - All kinds of weird shit happens on networks.
● Fault Tolerance - Under what specific failure modes can we guarantee our invariants?
● Consistency - Not just one, but many different consistency models to choose from
36. Strong vs Weak
Consistency models galore:
Consistency models are a continuum.
“Strong” and “Weak” are conversational terms, not formal.
“Weak” does not mean unsafe.
It’s all about selecting which invariants you want.
Kyle Kingsbury 2014 – https://aphyr.com/posts/313-strong-consistency-models
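The continuum idea can be made concrete with a toy replicated register (our own sketch; the names `read_strong` / `read_weak` are illustrative, not standard API): a "strong" read goes through the leader and is always current, a "weak" read serves a possibly stale local copy. Neither is unsafe; they preserve different invariants at different costs:

```python
class Replica:
    """Toy replicated register holding a single value."""
    def __init__(self):
        self.value = None

leader, follower = Replica(), Replica()

def write(v):
    leader.value = v           # replication to follower is asynchronous

def replicate():
    follower.value = leader.value

def read_strong():
    return leader.value        # current, but pays the coordination cost

def read_weak():
    return follower.value      # fast and local, but may lag

write(1)
assert read_strong() == 1      # always up to date
assert read_weak() is None     # stale until replication catches up
replicate()
assert read_weak() == 1
```

Choosing between these modes is choosing which invariant you need per operation, which is the point of the slide above.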
37. In Conclusion:
● Irrefutable truths may be predicated on misleading definitions
● Consistency is subjective
● CAP has outlived its usefulness
● We can do better!
● Let’s reason about latency
● Let’s fix our terminology
● Rigor is good, CS needs moar of it