
A brief history of automation in Software Engineering

In this talk we discuss different levels of automation and what automation has in common with DevOps, product maturity and machine learning. We show how automation enables fast feedback and, finally, using the example of an observable, continuously deployable system, how automation can make your team more productive while delivering more stable software and decreasing time to market.

Published in: Engineering


  1. A brief history of automation in Software Engineering
  2. whoami; Georg Buske — Bio ● 16+ years of IT stuff ● 1981 - 2011 in Germany ● Since 2011 in Brazil ● Married, 2 sons Currently ● Head of R&D, Dafiti Contact ● Twitter (follow me for mostly geek stuff): https://twitter.com/georg.buske ● LinkedIn: https://br.linkedin.com/in/georgbuske ● Email: georg.buske@dafiti.com.br
  3. FIRST THINGS FIRST ● WE ARE HIRING ○ SREs (Ops), Developers (Devs), Data Scientists, Masters of Agility ● Also, you can follow us on Twitter @Dafiti_tech ● You love papers? Dafiti will host the papers-we-love chapter Sao Paulo in 05/2018
  4. No deep dive, it is really about history and possibilities. ~50 slides in 25 minutes => feel free to catch up afterwards. Bullet points and slides full of text suck! BUT... DISCLAIMER
  5. ...you’ll find them all over the place during this presentation DISCLAIMER Sorry for that! ;)
  6. ● Definition and some history ● Different levels of automation and some examples ● What automation has in common with DevOps, product maturity and machine learning ● Showcase TODAY’S AGENDA
  7. Source: https://en.wikipedia.org/wiki/Automation “Automation can be defined as the technology by which a process or procedure is performed without human assistance.” - Groover, Mikell (2014). Fundamentals of Modern Manufacturing: Materials, Processes, and Systems.
  8. ● Works without human interaction ● Automation is about feedback; moreover, it enables fast(er) feedback MAIN CHARACTERISTICS
  9. ● Increased throughput, productivity and maintainability ● Decreased errors and rework ● Improved quality or increased predictability of quality ● Improved robustness (consistency) of processes or products ADVANTAGES
  10. Automation goes back to Ford (in 1947 Ford created an automation department) SOME HISTORICAL MILESTONES
  11. Feedback controllers (introduced and adopted in the 1930s in industrial processes) SOME HISTORICAL MILESTONES
  12. ● The most classic example of automation in the software industry is test automation (also build automation) ● First developments in the early 1970s by IBM for mainframes (SIMON / OLIVER) ● The Windows (™) era introduced new vendors and new products for automation throughout the 1990s SOME HISTORICAL MILESTONES
  13. ● In 1996 the first project based on extreme programming was executed (at Chrysler) ● In 1998 Kent Beck introduced the xUnit framework (after earlier development of a testing framework for Smalltalk) ● Notable names (among others) are Jez Humble, Martin Fowler and, of course, Kent Beck SOME HISTORICAL MILESTONES
  14. ● The 2000s brought not only “Agile” (the Agile manifesto) but also products such as CruiseControl ● In 2004 Sun Microsystems started the Hudson project ● In 2011 Hudson was forked into what we know today as Jenkins (there are other vendors and products, but Jenkins is probably the most notable one) ● Around 2005 tools like Puppet emerged on the operations side (an earlier exception: CFEngine, v1 in 1993) ● Today: containers and autoscaling are mainstream, CI/CD continues to spread, ML/AI is being adopted SOME HISTORICAL MILESTONES
  15. Build & test automation: Build an artefact, run tests, get feedback on coverage and failed tests. Notifications: In case of errors in one of the build or test steps - or even in released software - get automatically notified of problems by email, chat or SMS. VCS: e.g. Git; merging branches is nowadays quite a breeze compared to manual diffs or CVS and SVN COMMON USE CASES
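The build & test use case above can be sketched in a few lines: a toy suite run programmatically, the way a CI build step would run it and report pass/fail. The `add` function and its test are purely illustrative stand-ins for a project's real artefact.

```python
import unittest

# Toy unit under test -- an illustrative stand-in for a real build artefact.
def add(a, b):
    return a + b

class AddTest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)

def run_suite():
    """Run the suite the way a CI build step would, and report pass/fail."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(AddTest)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()
```

A CI server does essentially this on every commit and turns the boolean into a red or green build, i.e. fast feedback.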
  16. Source: https://www.google.com.br/search?q=automation+software EXAMPLES OF TOOLS (SOFTWARE)
  17. Source: https://www.statista.com/statistics/673467/worldwide-software-development-survey-testing-tools/
  18. Hint: If you want to do DevOps, automation must be in place AUTOMATION AND DEVOPS Also: DevOps is not a position but should be seen as part of the culture (and strategy)
  19. The first way: the technical practices of flow ○ Systems thinking ○ Continuous Integration ○ Continuous Delivery ○ Continuous Deployment DEVOPS - THE 3 WAYS (1)
  20. The second way: the technical practices of feedback ○ Amplify feedback loops ○ Monitoring ○ Fail fast ○ Continuous improvement (use 20% of the time each week to improve code) DEVOPS - THE 3 WAYS (2)
  21. The third way: the technical practices of learning ○ Culture of continuous experimentation and learning ○ Learning reviews (a.k.a. post mortems) ○ Live failure modes, game days DEVOPS - THE 3 WAYS (3)
  22. ● @Dafiti we created a Technology Maturity Model (TMM) for the assessment of our internal services ● Services are evaluated in various KPAs (Key Process Areas): Architecture, Data, Infrastructure & Operations, Quality and Security ● From level 0 to level 5 AUTOMATION AND PRODUCT MATURITY
  23. What does this have to do with automation? ● Simply put: to reach a high level of maturity you must have automation in place ● Now let’s count all the parts that explicitly mention “automation” in the text AUTOMATION AND PRODUCT MATURITY
  24. KPA ARCHITECTURE [LEVEL 1]
  25. KPA DATA [LEVEL 4]
  26. KPA INFRASTRUCTURE & OPERATIONS
  27. KPA QUALITY
  28. KPA SECURITY [LEVEL 5]
  29. AUTOMATION AND MACHINE LEARNING ● Machine learning is not only hype (AI first) ● Machine learning and AI are all about automatic decision making, feedback and optimization ● Example applications: source code analysis, anomaly detection, error prediction
  30. SHOWCASE Two stories in one: 1. Let's consider a startup which develops a web application for inventory and asset management 2. I will also cover some learnings from www.dafiti.com.br
  31. The typical steps: ● started with an MVP ● developed in a hurry ● no source control ● no automated build process (copy via FTP) After the first release, new features are added, new developers are hired, more users are on the platform (soon we will be rich!) THE BEGINNING BUT: every new release => bugs and problems everywhere ;-(
  32. The good: ● Continued with a makefile, adopted a VCS (e.g. Git) ● The manual QA testing process became a bit more automated (basic unit tests added) ● Merging different branches is now much easier => deployment is less error-prone The bad: ● Low test coverage ● The “worked on my branch and on my machine” problem A FIRST EVOLUTION
  33. The team sits together and discovers the CI server Jenkins... NEW INSIGHTS
  34. NEW INSIGHTS ● Build pipelines are created ● Now tests are running on every merge to master ● A releasable version of the software is created
  35. With Jenkins in place, all their applications now run through several build steps: 1. Build 2. Test 3. Deploy to staging 4. Deploy to live 5. Release in live (it is also good to have a rollback button - just in case - they found this out soon and implemented it afterwards…) CI/CD
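A minimal sketch of such a staged pipeline with a rollback path. All stage names are hypothetical, each stage reports success as a boolean, and the release stage is deliberately made to fail so the rollback behaviour is visible:

```python
# Hypothetical pipeline stages, each returning True on success.
def build_stage(): return True
def test_stage(): return True
def deploy_staging(): return True
def deploy_live(): return True
def release_live(): return False  # simulated failure, to show the rollback path

def rollback():
    # The "rollback button" the team implemented after learning the hard way.
    return "rolled back"

def run_pipeline(stages):
    """Run stages in order; stop and roll back on the first failure."""
    for stage in stages:
        if not stage():
            return rollback()
    return "released"
```

Real CI servers (Jenkins pipelines, for example) express the same idea declaratively, but the control flow is this: ordered stages, fail fast, recover automatically.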
  36. ● Very happy about recent accomplishments ● Let’s release the latest version! ● All tests were green! EVOLUTION IN AUTOMATED TESTING All good?! ● Uhh! The site is not usable anymore => parse-error messages on every request ● A merge conflict had been deployed, broken and unnoticed ● Fix: sanity checks and linting were added on each build
  37. Again, a new release, and errors: the integration with a third-party system stopped working ● The team added tests for every integration point of the third-party system ● Slow feedback: now developers were always waiting a long time for the tests to finish MORE ON TESTING ● Split tests into unit and integration tests ● Execute unit tests fast, integration tests only once a day ● Yay, we are fail-safe and have fast feedback for development! o/
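One common way to split unit from integration tests is to gate the slow suite behind an environment variable that only the nightly job sets. A sketch with the standard `unittest` module; `RUN_INTEGRATION` is an assumed variable name, not a convention of any tool:

```python
import os
import unittest

class UnitTests(unittest.TestCase):
    # Fast, dependency-free checks -- run on every merge.
    def test_fast_path(self):
        self.assertTrue(True)

# The slow suite runs only when the nightly job sets RUN_INTEGRATION=1
# (the variable name is an assumption for this sketch).
@unittest.skipUnless(os.environ.get("RUN_INTEGRATION") == "1",
                     "integration tests run only in the nightly job")
class IntegrationTests(unittest.TestCase):
    def test_third_party_endpoint(self):
        self.assertTrue(True)

def run_all():
    """Run both suites; return (tests run, tests skipped)."""
    suite = unittest.TestSuite()
    for case in (UnitTests, IntegrationTests):
        suite.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(case))
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.testsRun, len(result.skipped)
```

With pytest the same split is usually done with markers (`-m "not integration"`); the principle is identical: developers get seconds-fast feedback, the expensive checks still run daily.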
  38. AUTOMATING FRONTEND TESTS ● After the latest release users complain about long response times ● Dev and ops found that very big images and JavaScript libraries were the problem ● Added WebPagetest to the build pipeline to test for performance regressions ● NOW WE ARE FINE! NOT YET ;-/
  39. AUTOMATING FRONTEND TESTS ● Tests were running fine ● But: after another release users were not able to add items (the “save” button was overlaid by an HTML div) ● Yes, this happened at Dafiti on the checkout The team added Selenium tests to verify the frontend and, once again, added them to the build pipeline
  40. ● New killer feature: Google Docs integration ● The marketing team launched some huge campaigns on Twitter and Facebook AND EVEN MORE TESTING ● Result: the site crashed abruptly ● Plan: regular performance testing in their live environment ● And: added the Performance plugin to their Jenkins setup; they now run JMeter tests before every deploy in the staging environment
  41. Performance and functionality are now [almost always] guaranteed to be great LAST BUT FOR SURE NOT LEAST BUT ● Suddenly the Head of Security reports that attackers were able to gain access to other users’ accounts via an XSS attack ● BTW: they also deleted part of the address database via SQL injection ● After extensive manual pentesting, the team added a security testing step to the deploy pipeline (here: w3af)
  42. ● You need to maintain your unit tests and treat them as first-class citizens in your code base (clean code, refactoring, etc.) => otherwise they cause more headache than value ● Use mock data ● Be aware of flaky [randomly failing] tests ● Open source code bases can be problematic when they have no public test suite, or when the code is changed and no longer runs locally (this was the case with Magento at Dafiti) ● Measure your code coverage and increase it constantly (core business logic and regressions first - don’t spend time testing framework code) ● The further the process moves towards the live release, the higher the cost -> this is true for testing and for bugs MORE NOTES ON TESTING
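The "use mock data" advice can look like this with the standard library's `unittest.mock`: a hypothetical third-party gateway is replaced by a `Mock`, so the test exercises the business logic without ever touching the network.

```python
from unittest import mock

class PaymentGateway:
    """Hypothetical third-party client -- calling it for real would hit
    the network, which is exactly what unit tests should avoid."""
    def charge(self, amount):
        raise RuntimeError("network call not allowed in unit tests")

def checkout(gateway, amount):
    # Business logic under test, with the gateway injected as a dependency.
    return gateway.charge(amount)

def demo():
    # spec=PaymentGateway makes the mock reject misspelled method names.
    gateway = mock.Mock(spec=PaymentGateway)
    gateway.charge.return_value = "ok"
    result = checkout(gateway, 42)
    gateway.charge.assert_called_once_with(42)
    return result
```

Passing `spec=` keeps mocks honest: a test that stubs a method the real class doesn't have fails immediately instead of silently passing.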
  43. ● Get timely feedback! ● CI tools (Jenkins) and all kinds of monitoring tools have ways to integrate various notification channels (often via webhooks) BEING INFORMED In our startup: everybody now gets informed via chat (Slack channel) and email when something goes wrong
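Most chat webhooks accept a small JSON payload over an HTTP POST. A sketch of building such a notification; the single `text` field mirrors Slack-style incoming webhooks, but treat the field name as an assumption and check your service's webhook documentation:

```python
import json

def build_notification(job, status, url):
    """Build a chat-webhook payload. The 'text' field mirrors Slack-style
    incoming webhooks; other services may expect different field names."""
    return json.dumps({"text": f"Build {job}: {status} ({url})"})
```

Sending it is then a plain HTTP POST (e.g. with `urllib.request`) to the webhook URL your chat tool generates; CI plugins do exactly this under the hood.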
  44. Pretty great job! Releases were much better and nearly bug-free, and everyone was happy! DEVELOPMENT AND OPERATIONS BUT: still a lot of pain when things went live… ...because it was discovered that the CI server used different libraries than the live environment.
  45. ● Research on environment provisioning software ● Use what works best for the team (often a matter of taste, but Puppet, Chef, Ansible and CFEngine all work quite well) LESS TROUBLE W/ ENV. PROVISIONING ● A new hire in the operations team (SRE) already had some experience with Puppet, so they settled on Puppet ● The complete setup of the environment is now available in configuration files and scripts => environments can be recreated in a consistent manner whenever needed
  46. “Monitoring tells you whether the system works. Observability lets you ask why it's not working.” - Baron Schwartz To achieve this the team added several tools to their stack: ● Service monitoring -> Prometheus (server and application) ● Dashboards -> Grafana ● Log aggregation -> ELK (Elasticsearch, Logstash, Kibana) ● Distributed tracing -> Zipkin (OpenTracing) Now we can trace problems down to the root! MONITORING AND OBSERVABILITY
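To make the Prometheus part concrete, here is a toy counter rendered in the Prometheus text exposition format. This is a sketch of the data model only; in practice the official `prometheus_client` library implements this (plus labels, registries and an HTTP endpoint) for you:

```python
class Counter:
    """Minimal sketch of a Prometheus counter and its text exposition
    format -- use the official prometheus_client library in real code."""

    def __init__(self, name, help_text):
        self.name, self.help_text, self.value = name, help_text, 0.0

    def inc(self, amount=1.0):
        # Counters only ever go up; resets happen on process restart.
        self.value += amount

    def expose(self):
        # The HELP/TYPE comment lines precede each metric in the format.
        return (f"# HELP {self.name} {self.help_text}\n"
                f"# TYPE {self.name} counter\n"
                f"{self.name} {self.value}\n")
```

The Prometheus server scrapes exactly this kind of plain-text page from each service on an interval, which is what makes instrumenting an application so low-friction.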
  47. ALMOST... MONITORING AND OBSERVABILITY
  48. SRE (a.k.a. Ops) headaches because of: ● Spikes in site usage, e.g. marketing campaigns ● Pressure to lower infrastructure costs SRE AND “SOS” CALLS ● Enable autoscaling in the AWS setup (this works as well for GCP or Azure) ● Check (cloud-agnostic) containerization (e.g. Docker), move to Kubernetes
  49. Achievements so far: test, build, deployment and infrastructure Roadmap: ● A code analyzer (cyclomatic complexity, etc.) like Sonar ● Use machine learning for anomaly detection on their live stack, and software error prediction fed into the code analyzer (part of the build pipeline) ● Evaluate chaos engineering (Netflix’s Chaos Monkey / Simian Army) ● Canary releases and blue-green deployments ● Automatic failover of the database slave ● Feedback control for message queue consumers REACHING AUTOMATION HEAVEN
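The last roadmap item, feedback control for message-queue consumers, can be sketched as a simple proportional controller that resizes the consumer pool based on how far the queue depth is from its target. Gain and bounds here are illustrative values, not tuned for any real workload:

```python
def scale_consumers(current, queue_depth, target_depth,
                    gain=0.01, min_consumers=1, max_consumers=50):
    """Proportional feedback control for a message-queue consumer pool.
    Gain and bounds are illustrative, not tuned values."""
    error = queue_depth - target_depth          # positive => backlog growing
    desired = current + round(gain * error)     # proportional correction
    return max(min_consumers, min(max_consumers, desired))
```

Run on a timer, this closes the loop: measure queue depth, compare to the setpoint, act. Real deployments would add damping or integral terms to avoid oscillation (see Janert's book in the recommendations).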
  50. It is a process, not a product - you constantly need to clean up, improve and innovate BUT AS SO OFTEN
  51. ● The Phoenix Project (Gene Kim, George Spafford, and Kevin Behr) ● The DevOps Handbook (Gene Kim, Jez Humble, Patrick Debois, and John Willis) ● Feedback Control for Computer Systems (Philipp K. Janert) ● Continuous Delivery (David Farley and Jez Humble) ● Site Reliability Engineering (Betsy Beyer, Chris Jones, Jennifer Petoff, and Niall Richard Murphy) ● Beyond Blame (Dave Zwieback) BOOK RECOMMENDATIONS
  52. ONE LAST THING ● WE ARE HIRING ○ SREs (Ops), Developers (Devs), Data Scientists, Masters of Agility ● Also, you can follow us on Twitter @Dafiti_tech ● You love papers? Dafiti will host the papers-we-love chapter Sao Paulo in 05/2018
  53. ● https://www.quora.com/What-is-the-history-of-automated-software-testing ● https://www.linkedin.com/pulse/20141007123253-16089094-a-very-brief-history-of-test-automation/ ● https://en.wikipedia.org/wiki/CruiseControl ● https://en.wikipedia.org/wiki/Jenkins_(software) ● https://en.wikipedia.org/wiki/Kent_Beck ● https://en.wikipedia.org/wiki/XUnit ● https://www.martinfowler.com/articles/continuousIntegration.html ● https://www.ansible.com/ ● https://www.chef.io/ ● https://puppet.com/ ● https://cfengine.com/ ● https://grafana.com ● https://medium.com/@steve.mushero/observability-vs-monitoring-is-it-about-active-vs-passive-or-dev-vs-ops-14b24ddf182f ● https://www.vividcortex.com/blog/monitoring-isnt-observability ● https://wiki.jenkins.io/display/JENKINS/Notification+Plugin ● https://www.robustperception.io/using-slack-with-the-alertmanager/ ● https://arxiv.org/ftp/arxiv/papers/1506/1506.07563.pdf REFERENCES (1)
  54. ● https://github.com/Netflix/chaosmonkey ● https://www.cncf.io/# ● https://kubernetes.io ● https://en.wikipedia.org/wiki/HP_QuickTest_Professional ● https://www.statista.com/statistics/673467/worldwide-software-development-survey-testing-tools/ ● https://en.wikipedia.org/wiki/Capability_Maturity_Model_Integration ● http://w3af.org ● https://wiki.jenkins.io/display/JENKINS/Performance+Plugin ● https://jmeter.apache.org ● https://martinfowler.com/articles/mocksArentStubs.html ● https://www.elastic.co/webinars/introduction-elk-stack ● http://google-engtools.blogspot.com.br/2011/12/bug-prediction-at-google.html ● https://martinfowler.com/bliki/SelfInitializingFake.html ● https://github.com/jenkinsci/job-dsl-plugin ● https://www.sonarqube.org ● https://prometheus.io ● https://www.webpagetest.org/ ● https://www.npmjs.com/package/webpagetest ● https://www.quora.com/What-is-the-history-of-automated-software-testing ● https://en.wikibooks.org/wiki/Control_Systems/Feedback_Loops REFERENCES (2)
