Instrumentation of complex systems is necessary, and it addresses the shortcomings of static documentation of those systems. Instrumentation has flaws of its own, but they are resolvable with an intentional kind of documentation.
Given at Write the Docs, Portland OR 2014.
10 Billion a Day, 100 Milliseconds Per: Monitoring Real-Time Bidding at AdRoll, by Brian Troutwine
This is the talk I gave at Erlang Factory SF Bay Area 2014. In it I discuss the instrumentation-by-default approach taken by the AdRoll real-time bidding team, the technical details of the libraries we use, and lessons learned in adapting your organization to deal with the onslaught of data from instrumentation.
Websauna - introduction to the best Python web framework, by Mikko Ohtamaa
Websauna is a Python package and application framework for developing custom consumer and business web services. It emphasises meeting business requirements with reliable delivery times, responsiveness, consistency in quality, security and high data integrity. A low learning curve, novice friendliness and polished documentation help less seasoned developers to get their first release out quickly.
A short history of digital storytelling by Tiana Tasich, digital consultant, ...
This is a cut-down version of a presentation given at a breakfast briefing and a workshop organised by IDEK and Dhyaan Design in Stockholm on 12 May 2016.
Documentation avoidance for developers, by Peter Hilton
However good your code, other people never seem to get it. Instead they ruin your day (and your productivity) by asking questions and expecting documentation. You need to know how to explain code without getting stuck in meetings or spending half your time on the only thing you hate more than meetings: writing documentation. Instead, you aim for constructive laziness: tactics that give you more time to write code.
This talk teaches you how to avoid writing documentation, by making it unnecessary or delegating the work to someone else. You will also learn how to deal with the awkward situation when you can’t get away with avoidance or delegation, and have to write the documentation yourself.
This talk explores what we talk about when we talk about code, how we do it, and the tools we use. You can often find a better tool than documentation, but not always. Not everyone writes detailed specifications these days, but remote working and distributed teams make written explanations more valuable than ever. Talking face to face requires less effort, but you rarely or never meet the authors of most of the code you see. Software craftsmanship has failed to make written documentation unnecessary. Instead we shall turn to README-Driven Development, comments evasion, documentation-avoidance, just-in-time documentation and the art of not writing it in the first place.
Living Documentation (NCrafts Paris 2015, DDDx London 2015, BDX.io 2015, Code...), by Cyrille Martraire
What if documentation was as fun as coding? Always up-to-date? And what if it could even improve your design? Reconsider how you invest in knowledge to accelerate delivery, with a touch of Domain-Driven Design.
For more, get the book on Leanpub: https://leanpub.com/livingdocumentation
We often relate Domain-Driven Design with the content of Eric Evans' book; however even this book suggests looking outside for other patterns and inspirations: analysis patterns (Accounting, Finance), domain-oriented use of design patterns (the Flyweight pattern), established formalisms (e.g. monoids) and XP literature in particular (e.g. the patterns on the c2 wiki and OOPSLA papers).
The world has not stopped since the book either, and new ideas keep on emerging regularly. And you can share your own patterns as well.
In this session, through examples and code we'll go through some particularly important patterns which deserve to be in your tool belt. We'll also provide guidance on how best to use them (or not), at the right time and in the right context, and on how to train your colleagues on them!
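One of the established formalisms the session mentions is the monoid. As a hedged illustration, here is a minimal sketch of a domain value object modelled as a monoid; the `Cash` type and its field names are hypothetical examples, not taken from the session's own code:

```python
from dataclasses import dataclass
from functools import reduce

@dataclass(frozen=True)
class Cash:
    """Hypothetical domain value object; forms a monoid under addition."""
    cents: int

    def combine(self, other: "Cash") -> "Cash":
        # Associative combine: how the amounts are grouped never matters.
        return Cash(self.cents + other.cents)

ZERO = Cash(0)  # the identity element: x.combine(ZERO) == x

def total(amounts):
    # Because Cash is a monoid, folding a list is always well-defined,
    # even for the empty list (it just yields the identity element).
    return reduce(Cash.combine, amounts, ZERO)
```

The practical payoff of the formalism is that aggregation code needs no special cases: summing zero, one, or many amounts is the same fold.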
Experimentation-Driven Approach to Organisational Development, by Sami Paju
My presentation from Spark the Change 2017 conference in Toronto, Canada.
Topics include complexity theory, organisations as complex adaptive systems, ambidextrous organisations, a step-by-step guide to creating experiments, how to use experiments in organisational development work, and lastly what causes an experimentation-driven approach to fail.
Workshop on getting to grips with digital strategy by thinking like a network. Understanding complex adaptive systems, terminology, exponential growth and how technology, behaviour and design all come together. Two exercises included are Stinky Fish and Jobs to be Done. Lots of stuff on Netflix in there too.
Data modelling is an important tool in the toolbox of a developer. By building and communicating a shared understanding of the domain they're working with, their applications and APIs become more usable and maintainable. However, as you scale up your technical teams, how do you keep these benefits whilst avoiding time-consuming meetings every time something new comes along? This talk reminds us of key data modelling techniques and how our use of Kafka changes and informs them. It then examines how these patterns change as more teams join your organisation, and how Kafka comes into its own in this world.
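As a hedged sketch of the kind of shared domain model such a talk alludes to, here is one way a team might express an event published to a Kafka topic. All names (`OrderPlaced`, the implied "order-events" topic) are invented for illustration, and plain JSON stands in for what a real deployment might do with Avro or Protobuf and a schema registry:

```python
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class OrderPlaced:
    """One hypothetical event type for an 'order-events' Kafka topic."""
    order_id: str
    customer_id: str
    amount_cents: int  # integer cents avoids float rounding issues

    def to_message(self) -> bytes:
        # Kafka message values are bytes; JSON keeps this sketch dependency-free.
        return json.dumps(asdict(self)).encode("utf-8")

    @classmethod
    def from_message(cls, payload: bytes) -> "OrderPlaced":
        return cls(**json.loads(payload.decode("utf-8")))
```

The point of writing the model down as a type is that every team producing to or consuming from the topic shares one explicit contract, rather than renegotiating field names in meetings.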
Continuous Automated Testing - CAST conference workshop, August 2014, by Noah Sussman
CAST 2014 New York: The Art and Science of Testing
The Association for Software Testing www.associationforsoftwaretesting.org
COURSE DESCRIPTION
Automated tools provide test professionals with the capability to make relevant observations even in the fastest-paced environments. Automated testing is also a powerful tool for improving communication between software engineers. This is important because good communication is a prerequisite for growing a great software engineering organization.
This workshop will explore the continuous testing of software systems. Special focus will be given to the situation where the engineering team is deploying code to production so frequently that it is not possible to perform deep regression testing before each release.
People who participate in this course will learn pragmatic automated testing strategies like:
* Data analysis on the command line with find, grep and wc.
* Network analysis with Chrome Inspector, Charles and netcat.
* Using code churn to predict hotspots where bugs may occur.
* Putting stack traces in context with automated SCM blame emails.
* Using statsd to instrument a whole application.
* Testing in production.
* Monitoring-as-testing.
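To make the first bullet concrete, here is a small sketch of command-line data analysis with find, grep and wc. The directory layout and log contents are invented for illustration and are not from the workshop materials:

```shell
# Build a tiny sample log tree (invented data, purely for illustration).
mkdir -p /tmp/cast-demo/logs
printf 'ERROR db timeout\nINFO ok\nERROR disk full\n' > /tmp/cast-demo/logs/app1.log
printf 'INFO ok\nERROR db timeout\n' > /tmp/cast-demo/logs/app2.log

# How many ERROR lines are there across every .log file in the tree?
find /tmp/cast-demo/logs -name '*.log' -exec grep -h 'ERROR' {} + | wc -l

# Which error message is the most common? (sort | uniq -c is the classic idiom.)
find /tmp/cast-demo/logs -name '*.log' -exec grep -h 'ERROR' {} + \
  | sort | uniq -c | sort -rn | head -n 1
```

The same pipeline shape scales from three-line samples to production log volumes, which is why it makes a good first automated-testing observation tool.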
Technical level: participants should have some familiarity with the command line and with editing code using a text editor or IDE. Familiarity with Git, SVN or another version control system is helpful but not required. Likewise some knowledge of Web servers is helpful but not required. It is desirable for participants to bring laptops.
BIO
From 2010 to 2012 Noah was a Test Architect at Etsy. He helped build Etsy's continuous integration system, and has helped countless other engineers develop successful automated testing strategies. These days Noah is an independent consultant in New York. He is passionate about helping engineers understand and use automated tools as they work to scale their applications more effectively.
Introducing four different complementary architectural approaches - CQRS, Event Sourcing, CQS and Domain-Driven Design - looking at an architecture that would use all of these, and acknowledging that it's never been truly successful.
The Tragic Flaws of Neural Networks, by Jack Fitzpatrick
Already, neural networks have come into being, utilizing artificial intelligence to eliminate the strain on human workers and optimize certain processes. While these networks are designed to alleviate some of the unnecessary exertion that plagues human workers, there are some potential issues that accompany this innovative technology.
Read the blog: http://jackfitzpatrick.io/the-tragic-flaws-of-neural-networks/
Predicting digital futures a sector at a time is relatively easy, but in a networked world driven by accelerating technologies this is insufficient. Sectors do not operate in isolation, they are connected, and as technology advances the boundaries morph, with whole industries overtaken and pushed aside. At the same time old jobs lose relevance and new skills are required, but in aggregate ever more people are employed. Today there is no country, no matter how big or rich, that has all the raw materials and people required to power its industries, healthcare systems, farming and food production, or indeed educational institutions. Insourcing, outsourcing, and globalisation are the result, and they are about to be augmented by global networking of facilities, skills and abilities.
We have never known or understood so much about our world, nor have we ever enjoyed such capabilities as those bestowed by modern technology. But keeping up to date and acquiring the right knowledge and skills is a growing challenge as ‘the world of the simple’ evaporates and complexity takes over.
“There are plenty of simple solutions to complex problems, but they are all wrong”
Preparing for change whilst coping with the status quo now presents many new challenges way beyond human ability and we have to partner with machines to aid our decisions. For organisations it is essential to find and employ the right people, and for people it is necessary to become ever more flexible and adaptable whilst continually acquiring pertinent capabilities.
“AI and robots are not going to push us aside, but they will change everything”
No man is an island, and neither is any country, company or institution. A digital and connected global interdependency now governs the fortunes of our species as technology empowers us at every level. In this presentation we highlight a small sample of the technologies on the horizon, the jobs they will destroy, enhance and create.
Assignment x Through reviewing the Olympic Messaging Syste.docx, by edmondpburgess27164
Assignment x
Through reviewing the Olympic Messaging System's system design methodology, the authors provide advice on when particular methodologies would be used and how long they would take. The methodologies they focus on are the following: early focus on users and tasks, empirical measurement, and iterative design. There is a fourth principle introduced later on, which they call “Integrated Usability Design”.
The authors drew on a huge number of ideas in their pursuit of the design principles. They printed scenarios of the interfaces, performed early iterative tests of user guides, performed early simulations and early demonstrations, made sure to have a representative for the Olympians, took tours of the Olympic Village sites and held interviews with Olympians themselves, ran overseas tests of the Family/Friends interface, used a hallway and storefront technique, and performed prototype tests. They also used unusual techniques such as a “try to destroy it” test and a win-a-bear contest. Of course, all of these ideas had a purpose.
Following the principles may have required more work in the beginning, but they greatly reduced the work later on. The use of printed scenarios was helpful in showing the first definition of system functions, the user interface, and hard-to-imagine deep system organizations. The scenarios also identified conflicts that a list of functions could not, allowed people to criticize where their comments had most impact, and let changes be made before code was written. Basically, they helped the team make decisions that were still being debated.
The early user guides were helpful in identifying issues and problems in system organization. When the developers were performing early simulations, they utilized a Voice Toolkit that allowed them to debug the user interface, conduct informal user experiments on the interfaces for both major user groups, and provide demonstrations to draw comments from people. These early simulations also helped to develop help messages and revealed how much a user should know to use the system.
The hallway methodology was an easy way to get participants for informal experiments: it was enjoyable, it accelerated the rate of progress, and other group members got a better feel for where their work fit in.
The prototype test performed in Yorktown was useful in debugging the system and user interfaces. It also helped them fine-tune what had been implemented in the OMS so far. The contest was useful in demonstrating usability for everyone and caught bugs as well. In the “try to destroy it” test, they were able to figure out how reliable the system was. The final prototype test they performed was useful in learning how to interface the OMS with the Los Angeles telephone network. All in all, the OMS was very exportable.
The principles are worth following, but there are some consequences. It was sometimes psychologically difficult.
The aspirational visions of Society 5.0 coined by many nations around 2015/16 have now been eclipsed by technological progress and world events including another European war, global warming, climate change and resource shortages. In this new context, the published 5.0 documents now seem naive and simplistic, high on aspiration, and very short on ‘the how’. The stark reality is that the present situation has been induced by our species and our inability to understand and cope with complexity.
“There are no simple solutions to complex problems”
What is now clear is that our route to survival and Society 5.0 will be born of Industry 4.0/5.0 and a symbiosis between Mother Nature, Machines, and Mankind. Today we consume and destroy nearly 50% more resources than the planet might reasonably support, and merely improving the efficiency of all our processes and what we do will only delay the end point. And so I4.0 is founded on new materials and new processes that are far less damaging, inherently sustainable, and most importantly, readily deployable across the planet.
“Reversing global warming will not see a climatic reversal to some previously stable state”
In this presentation, we start with the nature of climate change, move on to the technology changes that might save the day, the impact of Industry 4.0/5.0, and then postulate what Society 5.0 might actually look like.
For millennia we have crafted artifacts from bulk materials that we have progressively refined to produce ever more precision tools and products. Latterly, we have crossed a critical threshold where our abilities now eclipse Mother Nature. For example, the smallest transistors in production today have feature sizes down to 2nm, which is smaller than a biological virus (~20-200nm). The implications for ICT, AI, Robotics, and Production are ever more profound as we approach, and most likely undercut, the scale of the atom (~0.1-0.4nm). Not only does this open the door to new technologies, it also brings new and remarkable capabilities. So, in this presentation we look at this new Tech Horizon spanning robotics to quantum computing and sensory technologies, and how they will help us realise sustainable futures germane to Industry 4.0, 5.0, and beyond.
In a world of accelerating innovation and increasingly complex digital services, applications, appliances, and devices, it seems unreasonable to expect customers to understand and maintain their own cyber security. We are way past the point where even the well educated can cope with the compounded complexity of an ‘on-line-life’. The reality is, today's products and services are incomplete and sport wholly inadequate cyber defence applications.
Perhaps the single biggest problem is that defenders have never been professional attackers - and they don’t share the same level of thinking and deviousness, or indeed, the inventiveness of their enemies. Apart from an education embracing the attack techniques, and in some cases, engaging in war games, the defenders remain on the back foot. However, there are a number of new, and potentially significant, approaches yet to be addressed, and here we look at the problem from a new direction.
In the maintenance of high-tech equipment and systems across many industries, identifiable precursors are employed to flag impending outages and failures. This realisation prompted a series of experiments to see if it was possible to presage pending cyber attacks. And indeed it was found to be the case!
In this presentation we give an overview of our early experimental and observational results, along with our current thinking spanning networks through to individual hackers and insider actors.
Throughout our education and life we are mostly given a ‘soda-straw’ view of Maths, Physics, Chemistry, Biology, HealthCare, Business and Commerce that conditions us to ‘one concept at a time’ thinking. This is rife in Government and Politics, Industry and Health, and it was extremely powerful in a now-past, slow-paced and disconnected world. In fact, the speciation of disciplines, topics and problems has largely been responsible for the acceleration and prominence of human progress.
However, in a connected, networked, highly mobile, and tech-driven world this simple and narrow-minded view is insufficient and dangerous. In common parlance we refer to ‘unintended consequences’, whilst in complex systems theory we would use the term ‘emergent behaviours’. In brief: education, health, crime, productivity, GDP creation, social cohesion and stability cannot be considered independent variables/properties. They are all related and interdependent. For example, when politicians decide to starve the education system of funds for very young children, the impact shows up in health, crime and the economy some 10-30 years later!
By analogy, all of this is true of our technologies, industries, lives, and the prospect of sustainable societies. Robots, AI, AL, and Quantum Computing do not stand in isolation; they have complementary roles. In this Public Lecture we devote an hour to thinking more holistically about what these technologies bring to the party in the context of industry, health, society, sustainable societies and global warming. We then devote a further hour to discussion and debate.
In the context of Global Warming we make the following overriding observations:
“Panic is a poor substitute for thinking”
“Tech is the only exponential capability we enjoy”
“Technology is never a threat, but humans always are”
“Uncertainty always prescribes the precautionary principle”
Every industrial revolution has seen the progression from people-dominated design, build and production to higher degrees of automation, hand-in-hand with shortening timescales enabled by ever-more powerful technologies. However, at a fundamental level the process has remained the same, though it is now edging toward a continuum of evolution as opposed to a series of discrete jumps that often trigger company reorganizations. In concert, there is a realization abroad that it is no longer about the biggest, the strongest, the best, or the fittest; it is now all about the survival of the most adaptable.
By and large it is relatively easy to predict when and where tech change will occur and the likely outcomes, in terms of existing and future products and services, but how people, customers, companies and societies will react is an unsolved puzzle. On another plane, competition and threats may well occur outside the sector, from a direction managers are not looking, by entirely new mechanisms, and at a most critical time. These are all challenges indeed!
How to adapt to, and cope with these collective challenges is the focus of this presentation which is illustrated and supported by past and present industrial cases along with the experiences and methodologies of those who have driven/weathered this storm as well as those who failed. Many of the illustrations are automated and there are exemplar movies and segue inserts throughout.
We often relate Domain-Driven Design with the content of Eric Evans' book; however even this book suggests looking outside for other patterns and inspirations: analysis patterns (Accounting, Finance), domain-oriented use of design patterns (the Flyweight pattern), established formalisms (e.g. monoids) and XP literature in particular (e.g. the patterns on the c2 wiki and OOPSLA papers).
The world has not stopped since the book either, and new ideas keep on emerging regularly. And you can share your own patterns as well.
In this session, through examples and code we'll go through some particularly important patterns which deserve to be in your tool belt. We'll also provide guidance on how best to use them (or not), at the right time and in the right context, and on how to train your colleagues on them!
Experimentation-Driven Approach to Organisational DevelopmentSami Paju
My presentation from Spark the Change 2017 conference in Toronto, Canada.
Topics include complexity theory, organisations as complex adaptive systems, the ambidextrous organisations, a step-by-step guide to creating experiments, how to use experiments in organisational development work, and lastly what causes an experimentation-driven approach to fail.
Workshop on getting to grips with digital strategy by thinking like a network. Understanding complex adaptive systems, terminology, exponential growth and how technology, behaviour and design all come together. Two exercises included are Stinky Fish and Jobs to be Done. Lots of stuff on Netflix in there too.
Data Modelling is an important tool in the toolbox of a developer. By building and communicating a shared understanding of the domain they're working with, their applications and APIs are more useable and maintainable. However, as you scale up your technical teams, how do you keep these benefits whilst avoiding time-consuming meetings every time something new comes along? This talk reminds ourselves of key data modelling technique and how our use of Kafka changes and informs them. It then examines how these patterns change as more teams join your organisation and how Kafka comes into its own in this world.
Continuous Automated Testing - Cast conference workshop august 2014Noah Sussman
CAST 2014 New York: The Art and Science of Testing
The Association for Software Testing www.associationforsoftwaretesting.org
COURSE DESCRIPTION
Automated tools provide test professionals with the capability to make relevant observations even in the fastest-paced environments. Automated testing is also a powerful tool for improving communication between software engineers. This is important because good communication is a prerequisite for growing a great software engineering organization.
This workshop will explore the continuous testing of software systems. Special focus will be given to the situation where the engineering team is deploying code to production so frequently that it is not possible to perform deep regression testing before each release.
People who participate in this course will learn pragmatic automated testing strategies like:
* Data analysis on the command line with find, grep and wc.
* Network analysis with Chrome Inspector, Charles and netcat.
* Using code churn to predict hotspots where bugs may occur.
* Putting stack traces in context with automated SCM blame emails.
* Using statsd to instrument a whole application.
* Testing in production.
* Monitoring-as-testing.
Technical level: participants should have some familiarity with the command line and with editing code using a text editor or IDE. Familiarity with Git, SVN or another version control system is helpful but not required. Likewise some knowledge of Web servers is helpful but not required. It is desirable for participants to bring laptops.
BIO
From 2010 to 2012 Noah was a Test Architect at Etsy. He helped build Etsy's continuous integration system, and has helped countless other engineers develop successful automated testing strategies.These days Noah is an independent consultant in New York. He is passionate about helping engineers understand and use automated tools as they work to scale their applications more effectively.
Introducing four different complementary architectural - CQRS, Event Sourcing, CQS and Domain Driven Design. Looking at an architecture that would use all of these. Acknowledging that it's never been truly successful.
The Tragic Flaws of Neural Networks | Jack FitzpatrickJack Fitzpatrick
Already, neural networks have come into being, utilizing artificial intelligence to eliminate the strain on human workers and optimize certain processes. While these networks are designed to alleviate some of the unnecessary exertion that plagues human workers, there are some potential issues that accompany this innovative technology.
Read the blog: http://jackfitzpatrick.io/the-tragic-flaws-of-neural-networks/
Predicting digital futures a sector at a time is relatively easy, but in a networked world driven by accelerating technologies this is insufficient. Sectors do not operate in isolation, they are connected, and as technology advances the boundaries morph, with whole industries overtaken and pushed aside. At the same time old jobs lose relevance and new skills are required, but in aggregate ever more people are employed. Today there is no country, no matter how big or rich, that has all the raw materials and people required to power its industries, healthcare systems, farming and food production, or indeed educational institutions. Insourcing, outsourcing, and globalisation are the result, and they are about to be augmented by global networking of facilities, skills and abilities
We have never known or understood so much about our world, and nor have we enjoyed the capabilities bestowed by modern technology. But keeping up to date, acquiring the right knowledge and skills is a growing challenge as ‘the world of the simple’ evaporates and complexity takes over.
“There are plenty of simple solutions to complex problems, but they are all wrong”
Preparing for change whilst coping with the status quo now presents many new challenges way beyond human ability and we have to partner with machines to aid our decisions. For organisations it is essential to find and employ the right people, and for people it is necessary to become ever more flexible and adaptable whilst continually acquiring pertinent capabilities.
“AI and robots are not going to push us aside, but they will change everything”
No man is an island, and neither is any country, company or institution. A digital and connected global interdependency now governs the fortunes of our species as technology empowers us at every level. In this presentation we highlight a small sample of the technologies on the horizon, the jobs they will destroy, enhance and create.
Assignment x Through reviewing the Olympic Messaging Syste.docxedmondpburgess27164
Assignment x
Through reviewing the Olympic Messaging System's system design methodology, the authors will
provide advice on when particular methodologies would be used and how long they would take. The
methodologies they focus on are the following: early focus on users and tasks, empirical measurement,
and iterative design. There is a fourth principle introduced later on, which they call the “Integrated
Usability Design”.
The authors utilized a huge amount of ideas in their pursuit of the design principles. They printed
scenarios of the interfaces, performed early iterative tests of user guides, preformed early simulations
and early demonstrations, made sure to have a representative for the Olympians, took tours of the
Olympic Village sites and had interviews with Olympians themselves, made oversea tests of the
Family/Friends interface, used a hallway and storefront technique, performed a prototype tests. They
also used unusual techniques such as a “Try to destroy it” test and a win a bear contest. Of course, all
of these ideas had a purpose.
Following the principles may have required more work in the beginning, but they greatly reduced the
work later on. The use of printed scenarios was helpful in showing the first definition of system
functions, the user interface, and hard to imagine deep system organizations. The scenarios also
identified conflicts that a list of functions could not do, allowed people to criticize where their
comments had most impact and changes could be made before code was written. Basically, it helped
them make decisions that were still being debated.
The early user guides were helpful in identifying issues and problems in system organization. When the
developers were performing early simulations, they utilized a Voice Toolkit that allowed them to debug
the user interface, conduct informal user experiments for the interfaces for both major user groups, and
provide demonstrations to raise comments from people. These early simulations also helped to develop
help messages and revealed how much a user should know to use the system.
The hallway methodology was an easy way to recruit participants for informal experiments; it was enjoyable, it accelerated the rate of progress, and other group members got a better feel for where their work fit in. The prototype test performed in Yorktown was useful for debugging the system and user interfaces, and for fine-tuning what had been implemented in the OMS so far. The contest demonstrated the system's usability to everyone and caught bugs as well. The "try to destroy it" test let them gauge how reliable the system was. The final prototype test they performed was useful in learning how to interface the OMS with the Los Angeles telephone network. All in all, the OMS was very exportable.
The principles are worth following, but there are some consequences: following them was sometimes psychologically difficult.
The aspirational visions of Society 5.0 coined by many nations around 2015/16 have now been eclipsed by technological progress and world events including another European war, global warming, climate change and resource shortages. In this new context, the published 5.0 documents now seem naive and simplistic, high on aspiration, and very short on ‘the how’. The stark reality is that the present situation has been induced by our species and our inability to understand and cope with complexity.
“There are no simple solutions to complex problems”
What is now clear is that our route to survival and Society 5.0 will be born of Industry 4.0/5.0 and a symbiosis between Mother Nature, machines, and mankind. Today we consume and destroy nearly 50% more resources than the planet might reasonably support, and merely improving the efficiency of all our processes will only delay the end point. And so I4.0 is founded on new materials and new processes that are far less damaging, inherently sustainable, and, most importantly, readily deployable across the planet.
“Reversing global warming will not see a climatic reversal to some previously stable state”
In this presentation, we start with the nature of climate change, move on to the technology changes that might save the day, the impact of Industry 4.0/5.0, and then postulate what Society 5.0 might actually look like.
For millennia we have crafted artifacts from bulk materials that we have progressively refined to produce ever more precise tools and products. Latterly, we have crossed a critical threshold where our abilities now eclipse Mother Nature's. For example, the smallest transistors in production today have feature sizes down to 2 nm, smaller than a biological virus (~20-200 nm). The implications for ICT, AI, robotics, and production become ever more profound as we approach, and most likely undercut, the scale of the atom (~0.1-0.4 nm). Not only does this open the door to new technologies, it brings new and remarkable capabilities. So, in this presentation we look at this new tech horizon, spanning robotics to quantum computing and sensory technologies, and how these will help us realise sustainable futures germane to Industry 4.0, 5.0, and beyond.
In a world of accelerating innovation and increasingly complex digital services, applications, appliances, and devices, it seems unreasonable to expect customers to understand and maintain their own cyber security. We are way past the point where even the well educated can cope with the compounded complexity of an ‘on-line-life’. The reality is, today's products and services are incomplete and sport wholly inadequate cyber defence applications.
Perhaps the single biggest problem is that defenders have never been professional attackers, and they don't share the same level of thinking and deviousness, or indeed the inventiveness, of their enemies. Apart from an education embracing attack techniques and, in some cases, engaging in war games, the defenders remain on the back foot. However, there are a number of new, and potentially significant, approaches yet to be explored, and here we choose to look at the problem from a new direction.
In the maintenance of high-tech equipment and systems across many industries, identifiable precursors are employed to flag impending outages and failures. This realisation prompted a series of experiments to see if it was possible to presage pending cyber attacks. And indeed it was found to be the case!
In this presentation we give an overview of our early experimental and observational results, along with our current thinking, spanning networks through to individual hackers and inside actors.
Throughout our education and life we are mostly given a ‘soda-straw’ view of Maths, Physics, Chemistry, Biology, HealthCare, Business and Commerce that conditions us to ‘one concept at a time’ thinking. This is rife in Government and Politics, Industry and Health, and it has been extremely powerful in a now past slow paced and disconnected world. In fact, the speciation of disciplines, topics and problems has largely been responsible for the acceleration and prominence of human progress.
However, in a connected/networked, highly mobile, and tech-driven world, this simple and narrow view is insufficient and dangerous. In common parlance we refer to 'unintended consequences', whilst complex systems theory would use the term 'emergent behaviours'. In brief: education, health, crime, productivity, GDP creation, social cohesion and stability cannot be considered independent variables/properties. They are all related and interdependent. For example, when politicians decide to starve the education system of funds for very young children, the impact shows up in health, crime and the economy some 10-30 years later!
By analogy, all of this is true of our technologies, industries, lives, and the prospect of sustainable societies. Robots, AI, AL, and quantum computing do not stand alone in isolation; they have complementary roles. In this public lecture we devote an hour to thinking more holistically about what these technologies bring to the party in the context of industry, health, society, sustainable societies and global warming. We then devote a further hour to discussion and debate.
In the context of Global Warming we make the following overriding observations:
“Panic is a poor substitute for thinking”
“Tech is the only exponential capability we enjoy”
“Technology is never a threat, but humans always are”
“Uncertainty always prescribes the precautionary principle”
Every industrial revolution has seen a progression from people-dominated design, build and production to higher degrees of automation, hand-in-hand with shortening timescales enabled by ever more powerful technologies. At a fundamental level the process has remained the same, but it is now edging toward a continuum of evolution as opposed to a series of discrete jumps that often trigger company reorganizations. In concert, there is a realization abroad that it is no longer about the biggest, the strongest, the best, or the fittest; it is now all about the survival of the most adaptable.
By and large it is relatively easy to predict when and where tech change will occur and the likely outcomes, in terms of existing and future products and services, but how people, customers, companies and societies will react is an unsolved puzzle. On another plane, competition and threats may well occur outside the sector, from a direction managers are not looking, by entirely new mechanisms, and at a most critical time. These are all challenges indeed!
How to adapt to, and cope with these collective challenges is the focus of this presentation which is illustrated and supported by past and present industrial cases along with the experiences and methodologies of those who have driven/weathered this storm as well as those who failed. Many of the illustrations are automated and there are exemplar movies and segue inserts throughout.
Seventy years on from AI's appearance on the public scene, all the optimistic projections have been largely overtaken, with systems outgunning humans at all board, card and computer games, including Chess, Poker and Go. Of course, general knowledge, medical diagnosis, genetics and proteomics, and image and pattern recognition are now all firmly in the grasp of AI.
Interestingly, AI is treading a similar path to computing, in that it began with single-purpose/task machines that could only deal with a company's payroll calculations or banking transactions and nothing more! General-purpose computing emerged over further decades to give us the PCs and devices we now enjoy. So AI currently runs as task-specific applications on these general-purpose platforms, and no doubt general-purpose AI will also become tractable in a few decades too!
Recent progress has prompted a great deal of debate and discussion, along with hundreds of published papers and definitions that attempt to characterise biological and artificial intelligence. But they all suffer the same futility and fail! Without reference to any formal characterisation, all discussion and debate remains relatively meaningless.
Somewhat ironically, it was the defence industry that triggered the analysis work here. Two of the key steps to success were the abandonment of all performance comparisons between biological and machine entities, and the avoidance of using the human brain as some 'golden' intelligence reference.
This presentation is suitable for professionals and public alike, and comes fully illustrated with high-quality graphics, animations and movies. Inevitably, it contains (engineering) mathematics that non-practitioners will have to take on trust, whilst professionals may wish to challenge it on the basis that the focus is on getting a solution rather than on the purity of the process!
The biggest force for social change since the first industrial revolution has been adjusting to, and taking advantage of, the new and accelerating capabilities of our advancing technologies. And in our entire history, the dominant technology driver has been silicon-based electronics. It has prompted revolutions in Computing, Telecoms, Automation, AI, and Robotics that radically changed the human condition. Today, that same exponential revolution is accelerating us into Industry 4.0 and onto Industry 5.0.
The consequential transformation of medicine, industrial design and production, farming, food processing, and supply and demand has seen living standards improve and life expectancy lengthen. Many of our institutions have also seen tech-driven transformations in line with industry. If there has been a downside to this progression, it has been our inability to transform the workforce ahead of new demands. Unemployment has persisted whilst re-education and retraining have been on the back foot, even though the net creation of new jobs has always exceeded the demise of the old. As a result, leading first-world countries now have labour shortages at all levels, right across the spectrum.
Recently, COVID-19 has demonstrated that we have the technology and we can rapidly reorganise and change society if we have to. So in this presentation, we examine ‘the force functions’ and changes engineered to date, and then peer over the horizon to sample what is to come in terms of technologies and working practices…
Data mining and analysis has been dominated by the big looking at the small. Businesses, institutions and governments examine our habits with an eye to commercial opportunities, welfare, and security. However, big data is migrating analysis into the arena of networking and association to enhance services: advertising, ‘pre-selling,’ healthcare, security and tax avoidance reduction. But this leaves the critical arena of Small Data unaddressed - the small looking at the small - individuals and things examining and exploiting their own data.
Here we consider a future of ubiquitous tagging, sensors, measuring and networked monitoring powered by the IoT. Key conclusions see many devices talking to each other at close range with little (or no) need of internet connection, and more network connections generated between things than those on the net.
On Inherent Complexity of Computation, by Attila SzegediZeroTurnaround
The system you just recently deployed is likely an application processing some data, likely relying on some configuration, maybe using some plugins, certainly relying on some libraries, using services of an operating system running on some physical hardware. The previous sentence names 7 categories into which we compartmentalise various parts of a computation process that’s in the end going on in a physical world. Where do you draw the line of functionality between categories? From what vantage points do these distinctions become blurry? Finally, how does it all interact with the actual physical world in which the computation takes place? (What is the necessary physical minimum required to perform a computation, anyway?) Let’s make a journey from your AOP-assembled, plugin-injected, YAML-configured, JIT compiled, Hotspot-executed, Linux-on-x86 hosted Java application server talking JSON-over-HTTP-over-TCP-over-IP-over-Ethernet all the way down to electrons. And then back. Recorded at GeekOut 2013.
Technology Trends, Consumer Experience @MICA 2016Ravi Pal
Technology trends and consumer experience: how do we build for new-age experience? How do we understand experience and its architecture? What are the possible areas to attack in order to build an impact using technology?
Similar to Instrumentation as a Living Documentation: Teaching Humans About Complex Systems
(Moonconf 2016) Fetching Moths from the Works: Correctness Methods in SoftwareBrian Troutwine
We live in a nice world. There’s a wealth of historical thought on achieving correctness in software–shipping code that does only what is intended, not less and not more–and there are a whole bunch of methods available to us as practitioners. Some of these are hard to apply, some are easy. For instance, case testing is widely used and considered standard practice. Property testing is understood to exist but not widely used. The application of advanced logics? Way out there.
If you look around you’ll find a lot of software fails a lot of the time. Why is that?
In this talk I’ll give an overview of the methods for producing correct systems and will discuss each in its historical context. With each method, we’ll keep an eye out for present applications and the difficulty of doing so. We’ll discuss why there’s so much buggy software in the world. I expect there will be talk of spaceships a bit. By the end of this talk you ought to be able to make reasoned decisions about applying correctness methods in your own work and have a good shot at building better software.
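The gap between case testing and property testing that the abstract describes can be made concrete in a few lines. Below is a minimal hand-rolled sketch in Python; the run-length encoder and the round-trip law are invented here purely for illustration, and dedicated tools such as QuickCheck or Hypothesis do this job far more thoroughly:

```python
import random

def encode(xs):
    """Run-length encode a list: [1, 1, 2] -> [(1, 2), (2, 1)]."""
    out = []
    for x in xs:
        if out and out[-1][0] == x:
            out[-1] = (x, out[-1][1] + 1)  # extend the current run
        else:
            out.append((x, 1))             # start a new run
    return out

def decode(pairs):
    """Invert encode: expand each (value, count) pair."""
    return [x for (x, n) in pairs for _ in range(n)]

# Case testing: one hand-picked example, the widely used standard practice.
assert encode([1, 1, 2]) == [(1, 2), (2, 1)]

# Property testing: state a law that must hold for *all* inputs, then
# check it against many randomly generated ones.
for _ in range(1000):
    xs = [random.randint(0, 3) for _ in range(random.randint(0, 20))]
    assert decode(encode(xs)) == xs  # round-trip property
```

The property version explores inputs no one thought to write down by hand, which is exactly the step up in assurance the talk places between case testing and the more advanced methods.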
Getting Uphill on a Candle: Crushed Spines, Detached Retinas and One Small StepBrian Troutwine
Looking back through history, we often view NASA’s early mission in terms of “getting to the Moon”, discussing how this or that program served the purpose of answering Kennedy’s challenge. This is wrong-headed. In this talk I will discuss aeronautics research beginning with the Wright Brothers and ending with the first Shuttle launch in 1981. We’ll see how NASA is an organization whose primary mission is basic research and development in aeronautics for the benefit of the public at large and space exploration. We’ll see how the Lunar Program was a focusing of research to a practical, political aim which built off decades of basic research and necessarily side-lined other programs. It’s my aim to convince you that Moonshot projects cannot be considered independently of their organizations and their histories.
The Charming Genius of the Apollo Guidance ComputerBrian Troutwine
The Apollo Project was the first flight system to deploy with a digital, general-purpose computer made of integrated circuits at its core: the Apollo Guidance Computer (AGC). It was a complete research project: no IC computer had run consecutively for more than a few hours, sophisticated programming techniques were unknown and the interactive human/computer interface had to be invented and made to appeal to astronauts opposed to machine interference in flight operations.
In this talk I'll give the historical context for the AGC, discuss its initial design and the evolution of this design as the Apollo Project progressed. We'll do a deep-dive on the machine architecture and note how tight integration with a special-purpose vehicle admitted incredibly sophisticated behaviour from a primitive machine. We'll further discuss the human/computer interface for the AGC, how the astronaut's flight roles dictated the computer's role and vice versa. Motivating examples from select Apollo flights will be used.
Throughout, we'll keep an eye on lessons to be gleaned from the experience of engineering the AGC and how we can adapt these lessons to modern computer systems in mission-critical deployments.
Fault-tolerance on the Cheap: Making Systems That (Probably) Won't Fall Over Brian Troutwine
Building computer systems that are reliable is hard. The functional programming community has invested a lot of time and energy into up-front correctness guarantees: types and the like. Unfortunately, absolutely correct software is time-consuming to write and expensive as a result. Fault-tolerant systems achieve system-total reliability by accepting that sub-components will fail and planning for that failure as a first-class concern of the system. As companies embrace the wave of "as-a-service" architectures, failure of sub-systems becomes a more pressing concern. Using examples from heavy industry, aeronautics and telecom systems, this talk will explore how you can design for fault-tolerance and how functional programming techniques get us most of the way there.
Monitoring Complex Systems: Keeping Your Head on Straight in a Hard WorldBrian Troutwine
This talk will provide motivation for the extensive instrumentation of complex computer systems and make the argument that such instrumentation is essential. It will offer practical starting points for Erlang projects while maintaining a perspective on the human organization around the computer system. Brian will focus on getting started with instrumentation in a systematic way, then follow up with the challenge of interpreting and acting on metrics emitted from a production system in a way which does not overwhelm operators’ ability to effectively control or prioritize faults in the system. He’ll use historical examples and case studies from his own work to keep the talk anchored in the practical.
Talk objectives:
Brian hopes to convince the audience of two things:
* that monitoring and instrumentation is an essential component of any long-lived system and
* that it's not so hard to get started, after all.
He’ll keep a clear-eyed view of what works and is difficult in practice so that the audience can make a reasoned decision after the talk.
Target audience:
This talk would appeal to engineers with long-running production employments, operations folks and Erlangers in general.
Let it crash! The Erlang Approach to Building Reliable ServicesBrian Troutwine
In this talk, using the Erlang hacker's semi-official motto "Let it Crash!" as a lens, I'll speak to how radical simplicity of implementation, straight-forward runtime characteristics and discoverability of the running system lead to computer systems which have great success in networked, always-on deployments.
I will argue that while Erlang natively implements many features which aid the construction of such systems--functional programming language semantics, lack of global mutable state, first-class networking, for instance--these characteristics can be replicated in any computer system, as a part of initial design of new systems or the gradual evolution of an existing project. I'll discuss common design patterns and anti-patterns, using my own work at AdRoll and project experience reports from a variety of fields--both successes and failures--to advance my argument.
This will be a very practical talk and will be accessible to engineers of all backgrounds.
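The "Let it Crash!" pattern the abstract describes, where a supervisor restarts failed workers instead of every worker defending against every fault, can be replicated outside Erlang, as the talk argues. A minimal Python sketch of a restart-with-limit supervisor follows; the structure and names are illustrative, not taken from the talk:

```python
class Supervisor:
    """Restart a worker when it crashes, up to max_restarts, loosely
    echoing Erlang's one_for_one supervision strategy."""

    def __init__(self, worker, max_restarts=3):
        self.worker = worker
        self.max_restarts = max_restarts
        self.restarts = 0

    def run(self):
        while True:
            try:
                return self.worker()      # normal completion
            except Exception:
                self.restarts += 1        # "let it crash", then restart
                if self.restarts > self.max_restarts:
                    raise                 # escalate, as a supervisor would

# A worker that fails twice with transient faults, then succeeds.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient fault")
    return "ok"

result = Supervisor(flaky).run()
```

The worker carries no defensive error handling of its own; recovery policy lives entirely in the supervisor, which is the radical simplicity of implementation the talk points at.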
Automation With Humans in Mind: Making Complex Systems Predictable, Reliable ...Brian Troutwine
I believe that our current approach to designing software systems is driving society in a bad direction. In particular, I believe we are creating a society predicated on automation which is oriented to be serviced by humans or, requiring no service, is simply in control of humans. Ignoring the dystopian overtones of this, I argue that this is a technically flawed approach, that such automation is less reliable, less flexible and less robust through time than a system designed with humans as the controlling party in mind. I will argue--with a mix of personal experience, reference to academic literature and historical examples--that complex systems designed with human control in mind are more lasting through time, more technically excellent and just generally more useful. I will further argue that a re-orientation toward human supremacy in computer systems is especially important as we begin to tightly couple western civilization's technology to the internet, being the Internet of Things. I'll talk a bit about the political and social implications, as well, after I've made a purely technical argument.
Monitoring Complex Systems - Chicago Erlang, 2014Brian Troutwine
Imagine being responsible for monitoring 100 servers. Now imagine 1000. Each server has 100 different things to keep track of. What do you pay attention to and what do you ignore? What is important? In this talk Brian will show how Erlang can be used to capture more information without compromising clarity, i.e. to keep track of the forest without losing sight of the trees!
Presentation slides given at Erlounge Bay Area, January 2014. The deck is a brief introduction to the Erlang library exometer and gives an overview of my work at AdRoll to increase monitoring of, and insight into, the running real-time bidding system.
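The deck itself introduces exometer in Erlang; as a language-neutral sketch of the same pattern (register a metric once, update it cheaply on the hot path, read a snapshot out periodically for a reporter), here is a minimal Python illustration. The class and metric names are invented for this sketch, not taken from the slides:

```python
import threading
from collections import defaultdict

class Metrics:
    """Tiny thread-safe counter/gauge registry, in the spirit of a
    metrics library's update/get_value split (names invented here)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._counters = defaultdict(int)
        self._gauges = {}

    def incr(self, name, by=1):
        """Hot path: a cheap, constant-time update."""
        with self._lock:
            self._counters[name] += by

    def gauge(self, name, value):
        """Record a point-in-time value (e.g. queue depth)."""
        with self._lock:
            self._gauges[name] = value

    def snapshot(self):
        """Periodic read-out, e.g. for a reporter that ships to a dashboard."""
        with self._lock:
            return dict(self._counters), dict(self._gauges)

metrics = Metrics()
metrics.incr("bid_requests")
metrics.incr("bid_requests")
metrics.gauge("queue_depth", 7)
counters, gauges = metrics.snapshot()
```

The point the deck makes survives translation: instrumentation must be cheap enough to leave on by default, with the read-out decoupled from the hot path.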
Navigating the Metaverse: A Journey into Virtual EvolutionDonna Lenk
Join us for an exploration of the Metaverse's evolution, where innovation meets imagination. Discover new dimensions of virtual events, engage with thought-provoking discussions, and witness the transformative power of digital realms.
Listen to the keynote address and hear about the latest developments from Rachana Ananthakrishnan and Ian Foster who review the updates to the Globus Platform and Service, and the relevance of Globus to the scientific community as an automation platform to accelerate scientific discovery.
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Paketo Buildpacks : la meilleure façon de construire des images OCI? DevopsDa...Anthony Dahanne
Les Buildpacks existent depuis plus de 10 ans ! D’abord, ils étaient utilisés pour détecter et construire une application avant de la déployer sur certains PaaS. Ensuite, nous avons pu créer des images Docker (OCI) avec leur dernière génération, les Cloud Native Buildpacks (CNCF en incubation). Sont-ils une bonne alternative au Dockerfile ? Que sont les buildpacks Paketo ? Quelles communautés les soutiennent et comment ?
Venez le découvrir lors de cette session ignite
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
First Steps with Globus Compute Multi-User EndpointsGlobus
In this presentation we will share our experiences around getting started with the Globus Compute multi-user endpoint. Working with the Pharmacology group at the University of Auckland, we have previously written an application using Globus Compute that can offload computationally expensive steps in the researcher's workflows, which they wish to manage from their familiar Windows environments, onto the NeSI (New Zealand eScience Infrastructure) cluster. Some of the challenges we have encountered were that each researcher had to set up and manage their own single-user globus compute endpoint and that the workloads had varying resource requirements (CPUs, memory and wall time) between different runs. We hope that the multi-user endpoint will help to address these challenges and share an update on our progress here.
In software engineering, the right architecture is essential for robust, scalable platforms. Wix has undergone a pivotal shift from event sourcing to a CRUD-based model for its microservices. This talk will chart the course of this pivotal journey.
Event sourcing, which records state changes as immutable events, provided robust auditing and "time travel" debugging for Wix Stores' microservices. Despite its benefits, the complexity it introduced in state management slowed development. Wix responded by adopting a simpler, unified CRUD model. This talk will explore the challenges of event sourcing and the advantages of Wix's new "CRUD on steroids" approach, which streamlines API integration and domain event management while preserving data integrity and system resilience.
Participants will gain valuable insights into Wix's strategies for ensuring atomicity in database updates and event production, as well as caching, materialization, and performance optimization techniques within a distributed system.
Join us to discover how Wix has mastered the art of balancing simplicity and extensibility, and learn how the re-adoption of the modest CRUD has turbocharged their development velocity, resilience, and scalability in a high-growth environment.
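One of the atomicity problems the talk names, keeping a database update and the production of its domain event consistent, is commonly handled with a transactional outbox: both writes share one transaction, and a separate relay publishes outbox rows to consumers. The sketch below is a generic illustration of that idea in Python with SQLite, not Wix's actual implementation; all table, column, and event names are invented:

```python
import json
import sqlite3

# In-memory database standing in for the service's own store.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY AUTOINCREMENT, event TEXT)")

def update_product(product_id, name):
    # One transaction: the entity row and its domain event commit
    # together or not at all (sqlite3 commits on leaving `with db:`).
    with db:
        db.execute(
            "INSERT OR REPLACE INTO products (id, name) VALUES (?, ?)",
            (product_id, name),
        )
        db.execute(
            "INSERT INTO outbox (event) VALUES (?)",
            (json.dumps({"type": "ProductUpdated", "id": product_id}),),
        )

update_product(1, "mug")
events = [json.loads(e) for (e,) in db.execute("SELECT event FROM outbox")]
# A separate relay process would drain the outbox table and publish
# each row to downstream consumers, then delete or mark it.
```

Compared with event sourcing, the entity table remains a plain CRUD row, yet domain events are still emitted reliably, which is the flavour of trade-off the talk explores.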
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc. I didn't get rich from it, but my work did reach 63K downloads (powering possibly tens of thousands of websites).
11. The nature of the problem domain:
• Low latency (< 100 ms per transaction)
• Firm real-time system
• Highly concurrent (> 55 billion transactions per day)
• Global, 24/7 operation
13. Complex Systems
• Non-linear feedback
• Tightly coupled to external systems
• Difficult to model, understand
• Usually a solution to some “wicked problem”
14. - C. West Churchman, “Guest Editorial: Wicked Problems”, Management Science Vol. 14, No. 4, 1967

“[WICKED PROBLEMS ARE] SOCIAL PROBLEMS WHICH ARE ILL FORMULATED, WHERE THE INFORMATION IS CONFUSING, WHERE THERE ARE MANY CLIENTS AND DECISION-MAKERS WITH CONFLICTING VALUES, AND WHERE THE RAMIFICATIONS IN THE WHOLE SYSTEM ARE THOROUGHLY CONFUSING. […] THE ADJECTIVE ‘WICKED’ IS SUPPOSED TO DESCRIBE THE MISCHIEVOUS AND EVEN EVIL QUALITY OF THESE PROBLEMS, WHERE PROPOSED ‘SOLUTIONS’ OFTEN TURN OUT TO BE WORSE THAN THE SYMPTOMS.”
17. “HUMANS ARE BAD AT PREDICTING THE PERFORMANCE OF COMPLEX SYSTEMS (…). OUR ABILITY TO CREATE LARGE AND COMPLEX SYSTEMS FOOLS US INTO BELIEVING THAT WE’RE ALSO ENTITLED TO UNDERSTAND THEM.”
- Carlos Bueno, “Mature Optimization Handbook”
18. The key challenge to sustaining a complex system is maintaining our understanding of it.
27. - David E. Hoffman, “The Dead Hand: The Untold Story of the Cold War Arms Race and Its Dangerous Legacy”

“ONE OPERATOR (…) WAS CONFUSED BY THE LOGBOOK. HE CALLED SOMEONE ELSE TO INQUIRE.

“WHAT SHALL I DO?” HE ASKED. “IN THE PROGRAM THERE ARE INSTRUCTIONS OF WHAT TO DO, AND THEN A LOT OF THINGS CROSSED OUT.”

THE OTHER PERSON THOUGHT FOR A MINUTE, THEN REPLIED, “FOLLOW THE CROSSED-OUT INSTRUCTIONS.”
29. - Eric Schlosser, “Command and Control: Nuclear Weapons, the Damascus Accident, and the Illusion of Safety”

“CLEARLY THE TEXTBOOKS (…) DIDN’T TELL YOU WHAT REALLY HAPPENED IN THE FIELD. (…) (T)HERE WAS A WAY YOU WERE SUPPOSED TO DO THINGS – AND THE WAY THINGS GOT DONE. RFHCO SUITS WERE HOT AND CUMBERSOME (…) AND IF A MAINTENANCE TASK COULD BE ACCOMPLISHED QUICKLY WITHOUT AN OFFICER NOTICING, SOMETIMES THE SUITS WEREN’T WORN.”
31. - Henry S. F. Cooper, Jr., “XIII: The Apollo Flight That Failed”

“THE FIRST DISASTER IN SPACE HAD OCCURRED, AND NO ONE KNEW WHAT HAD HAPPENED. ON THE GROUND, THE FLIGHT CONTROLLERS WERE NOT EVEN SURE THAT ANYTHING HAD.”
38. THIS “COLLECTIVE ENTITY” WAS ORGANIZED AROUND THE PILOT TO MAKE IT “SAFER AND MORE EFFICIENT IF THERE WAS A FOCAL POINT. AND I WAS THE FOCAL POINT. JIM FED THINGS INTO MY EARS. THE MOON FED THINGS INTO MY EYES AND I COULD FEEL THE MACHINE OPERATING.”
- Commander David Scott, as quoted in David A. Mindell’s “Digital Apollo: Human and Machine in Spaceflight”
46. Case Study: Exchange Throttling
• All other metrics (run-queue, CPU, network IO) were fine.
• Confirmed that no changes had been deployed to the running systems.
• Amazon data showed no network issues to our machines.
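The “all other metrics were fine” step in this triage is worth making concrete. The sketch below is my own illustration of the simplest version of that check: compare the latest sample of a metric against a rolling baseline. The class name, window size, and threshold are hypothetical, not from the talk:

```python
from collections import deque
from statistics import mean, stdev

class MetricBaseline:
    """Rolling baseline for one metric (e.g. run-queue depth)."""

    def __init__(self, window=60, sigmas=3.0):
        self.samples = deque(maxlen=window)  # recent history only
        self.sigmas = sigmas                 # how far is "not fine"

    def record(self, value):
        self.samples.append(value)

    def is_anomalous(self, value):
        """True if `value` sits far outside the recent baseline."""
        if len(self.samples) < 2:
            return False  # not enough history to judge
        mu, sd = mean(self.samples), stdev(self.samples)
        if sd == 0:
            return value != mu
        return abs(value - mu) > self.sigmas * sd

# A steady run-queue reads as "fine"; a sudden spike flags.
rq = MetricBaseline()
for v in [2, 3, 2, 2, 3, 2, 3, 2]:
    rq.record(v)
print(rq.is_anomalous(3))   # within baseline: False
print(rq.is_anomalous(50))  # spike: True
```

Real deployments would use a proper time-series system rather than an in-process window, but the question asked is the same one the case study walks through metric by metric.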
54. Case Study: Timeout Jumps
• Timeout jump occurred only in US East; US West was fine.
• All other metrics (as above) checked out.
• A system deployment correlated strongly with the timeout jump.
• Rolling back to the previous release reduced timeouts to acceptable levels.
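The deploy-correlation step above can be sketched as a toy function: given a timeout-rate series and a list of deployment timestamps, did the rate jump shortly after a deploy? The function name, window, and doubling threshold are illustrative assumptions of mine, not the team's actual tooling:

```python
def jump_follows_deploy(samples, deploys, window=300, factor=2.0):
    """samples: list of (timestamp_sec, timeout_rate) in time order.
    deploys: list of deployment timestamps.
    Returns True if the rate reaches `factor` times its pre-deploy
    level within `window` seconds of any deployment."""
    for d in deploys:
        before = [r for t, r in samples if t < d]
        after = [r for t, r in samples if d <= t <= d + window]
        if before and after and max(after) >= factor * before[-1]:
            return True
    return False

# Timeout rate sits near 10/s, then jumps to ~50/s right after
# a deploy at t=150: the correlation check fires.
samples = [(0, 10), (60, 11), (120, 10), (180, 45), (240, 50)]
print(jump_follows_deploy(samples, deploys=[150]))  # True
```

Correlation is not causation, of course; here the rollback confirming the fix is what turned the correlation into a diagnosis.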
59. “(THE FIREFIGHTERS) TRIED TO BEAT DOWN THE FLAMES (OF CHERNOBYL REACTOR 4). THEY KICKED AT THE BURNING GRAPHITE WITH THEIR FEET. … THE DOCTORS KEPT TELLING THEM THEY’D BEEN POISONED BY GAS.”
- Svetlana Alexievich, “Voices from Chernobyl: The Oral History of a Nuclear Disaster”
60. It is possible to collect too much information, or to present it badly.
61. “SAFETY SYSTEMS, SUCH AS WARNING LIGHTS, ARE NECESSARY, BUT THEY HAVE THE POTENTIAL FOR DECEPTION. (…) ONE OF THE LESSONS OF COMPLEX SYSTEMS AND (THREE MILE ISLAND) IS THAT ANY PART OF THE SYSTEM MIGHT BE INTERACTING WITH OTHER PARTS IN UNANTICIPATED WAYS.”
- Charles Perrow, “Normal Accidents: Living with High-Risk Technologies”
76. “IF YOU DON'T TRUST A COMPUTER BECAUSE SOMETIMES IT DOESN'T TELL YOU THE TRUTH, TELLING IT TO TELL YOU TO TRUST IT IS ASKING IT TO LIE TO YOU SOMETIMES.”
- Mike Sassak, Curbside
79. “I PROPOSE THAT MEN AND WOMEN BE RETURNED TO WORK AS CONTROLLERS OF MACHINES, AND THAT THE CONTROL OF PEOPLE BY MACHINES BE CURTAILED. I PROPOSE, FURTHER, THAT THE EFFECTS OF CHANGES IN TECHNOLOGY AND ORGANIZATION ON LIFE PATTERNS BE TAKEN INTO CAREFUL CONSIDERATION, AND THAT THE CHANGES BE WITHHELD OR INTRODUCED ON THE BASIS OF THIS CONSIDERATION.”
- Kurt Vonnegut, “Player Piano”