DOD 2016 - Diogo Oliveira - The OutSystems R&D Continuous Delivery Journey

YouTube: https://www.youtube.com/watch?v=f-DyEiTN6nc&index=4&list=PLnKL6-WWWE_VtIMfNLW3N3RGuCUcQkDMl

OutSystems builds a complex software product. As the company and the product's complexity kept growing (and at a faster pace), we moved to a model where we needed to release more frequently. Challenges appeared in the way we were doing automated testing and continuous integration / delivery, which demanded significant changes and improvements in these processes, from the tools to the culture. I will share our journey towards Continuous Delivery at OutSystems R&D: where we were, where we are now (and how we got there), and where we want to go. It is a story of changing a lot in a relatively short period of time.

Transcript

  1. The OutSystems R&D Continuous Delivery Journey DevOpsDays Warsaw ‘16 - November 22
  2. Who am I?
  3. Diogo Oliveira (PT) Software Engineer @ OutSystems R&D DevOps Group Co-organizer @ Lisbon DevOps Meetup diogo.oliveira@outsystems.com
  4. What is OutSystems and what do we do?
  5. International company with its R&D based in Lisbon, Portugal. OutSystems provides a low-code rapid application development and delivery platform (plus integration of custom code). It consists of a complete application lifecycle system to develop, manage and change enterprise web & native mobile applications. 400+ employees, 100+ in R&D / Engineering
  6. OutSystems Product
  7. Supported Stacks
  8. How are we developing and ensuring quality?
  9. Technologies / Languages (...)
  10. .NET and Java version Translation Code + Tests
  11. Dogfooding
  12. Product Quality ● ~10,000 distinct fully automated tests (per major version)
  13. Product Quality
  14. The OutSystems R&D Continuous Delivery Journey
  15. Rewind 10 years (We are now in 2006)
  16. Back in the days
  17. Back in the days
  18. Test management and orchestration tool
  19. Test management and orchestration tool Better than anything else at the time
  20. Fast Forward 8 years (We are now in 2014)
  21. Test management and orchestration tool ● Solved the problem initially ● Continued to solve it, thanks to people with a high pain threshold ● Was the only thing that gave us the green ‘ship it’ light ● Tested too much ● Evolved by everyone, without a vision
  22. Engineering team size: July 2013 - 18 Engineers; July 2014 - 41 SW Engineers ● Fast-forward tip: ○ Kept growing - 85 Engineers in 2015 and 130 Engineers in 2016
  23. Branching Teams working on separate environments ● Achieved through branching (per team and/or project) ● ~30 active (SVN) branches as of November 2014 REINTEGRATE HELL!
  24. Quality Assurance Big challenge: executing 10,000 distinct fully automated tests multiplied by the possible stack combinations (~100,000 test executions)
  25. Quality Assurance How were we doing QA? Full Build + Full Test Run in Test Environments Long Feedback Loop!
  26. Quality Assurance ~26 hours to run ~10,000 tests over different stacks ● No daily visibility ● Slow builds ● Unreliable environments ● Flaky/unstable tests ● Long feedback loop (~100,000 test executions)
  27. Testing Infrastructure And what about the testing infrastructure?
  28. Testing Infrastructure Developer: “I want a test environment to run tests” Ops Guy: “Ok, let me get one machine and then in two weeks I will spend 3 days configuring it using our…” …a 49-page-long manual
  29. This model was nowhere near CD! But… for our release frequency (1 major per year) and support model (corrective maintenance only), this worked well enough. Where were we then?
  30. So, why the need to change? (Still in 2014)
  31. What made us change ● No more corrective maintenance only ○ Features released in “maintenance releases” ● The amount of development going on increased a lot ● The full run was not enough anymore (and not reliable) So...
  32. What made us change … the need for faster feedback (and with quality!) started to grow, but our processes, tools and infrastructure were not in place for that
  33. What made us change We noticed the boiling water in time, before the frog started to die (our frog was smarter)! Which means… we understood that the need for faster feedback and more frequent releases would keep growing, so we started building our own journey towards CD.
  34. What did we do? (Still in 2014)
  35. What did we do? Root cause analysis, prioritization and alignment
  36. Automate the provisioning of our test environments (“infrastructure as code”) Let’s automate!
  37. Let’s automate! ● Saved ~3 person-days per test environment ● Easier for development teams to keep the infrastructure code updated (it’s code, after all!)
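A minimal sketch of what "test environment configuration as code" can look like in Python (the orchestration language mentioned later in the deck). The deck does not show the real scripts; the package names and helper scripts below are hypothetical stand-ins for the steps that used to live in the 49-page manual:

```python
"""Hypothetical sketch of "test environment configuration as code".

The real OutSystems automation is not shown in the deck; the steps and
helper scripts below are illustrative assumptions only.
"""
import subprocess

# Ordered configuration steps that previously required 3 days of manual work.
CONFIG_STEPS = [
    ["choco", "install", "-y", "dotnetfx"],             # assumed prerequisite
    ["choco", "install", "-y", "sql-server-express"],    # assumed prerequisite
    ["powershell", "-File", "configure_iis.ps1"],        # hypothetical helper script
    ["powershell", "-File", "install_platform.ps1"],     # hypothetical helper script
]

def configure_test_environment() -> None:
    """Run every configuration step in order, failing fast on the first error."""
    for step in CONFIG_STEPS:
        print("Running:", " ".join(step))
        subprocess.run(step, check=True)

if __name__ == "__main__":
    configure_test_environment()
```

Because the steps are plain code, development teams can review and update them like any other source file, which is the point of slide 37.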
  38. Still some problems... Test environment configurations are automated, but... … we still need to wait for someone to create the machine first (1, 2, 3 weeks…).
  39. “Nimbus” project - moving to the cloud ● “Nimbus” project - move our testing infrastructure to the cloud (AWS)
  40. “Nimbus” project - moving to the cloud ● Test environment provisioning much faster (1 hour by clicking a button - includes all the environment configuration) ● Easy to recover ● Reliable and performant ● Scalable and elastic
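A hedged sketch of what the "one-click" provisioning behind Nimbus could look like with boto3. The AMI ID, region, instance type and tags are placeholders, not OutSystems' actual setup:

```python
"""Hypothetical sketch of one-click test environment provisioning on AWS.

The real "Nimbus" implementation is not shown in the deck; AMI ID, region,
instance type and tags below are placeholders.
"""
import boto3

def provision_test_environment(name: str) -> str:
    """Launch a pre-baked test environment image and wait until it is running."""
    ec2 = boto3.client("ec2", region_name="eu-west-1")
    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder pre-baked test image
        InstanceType="m4.large",           # placeholder size
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [
                {"Key": "Name", "Value": name},
                {"Key": "Purpose", "Value": "test-environment"},
            ],
        }],
    )
    instance_id = response["Instances"][0]["InstanceId"]
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    return instance_id

if __name__ == "__main__":
    print(provision_test_environment("test-env-01"))
```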
  41. Continuous Delivery Knowledge Session Huge impact!
  42. Impact on developers Some developers (on their own initiative) went to their managers... Hey! We are really excited about CD and we have this idea… Give us one week and we'll save you 3 months of wasted effort!
  43. Impact on developers And the managers said... Ok! Go for it!!
  44. The rise of CINTIA And, in that week, they created… CINTIA! (Continuous INTegration and Intelligent Alert system)
  45. The rise of CINTIA ● Automated incremental builds ● Automated installations ● Some automated tests (~1,000 tests to start...) ● Automatic assignment to the right “culprits”! ● Developed using our own product (UI) + Python (orchestration)
  46. What did CINTIA bring at this point? ● Build + Installation + ~1,000 tests in 19 minutes, automatically triggered by commits ● Fast feedback! ● Automatic “culprit” assignment, and fast!
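A minimal sketch of CINTIA-style "culprit" assignment. The deck only says failures were automatically assigned to the right culprits; the heuristic below (blame every author who committed since the last green build) and the data structures are illustrative assumptions, not the real algorithm:

```python
"""Hypothetical sketch of automatic "culprit" assignment for a red build."""
from dataclasses import dataclass

@dataclass
class Commit:
    revision: int
    author: str
    message: str

def find_culprits(commits: list[Commit], last_green_revision: int) -> set[str]:
    """Return the authors of every commit made after the last green build."""
    return {c.author for c in commits if c.revision > last_green_revision}

def notify(culprits: set[str], failed_tests: list[str]) -> None:
    """Stand-in for the real alerting channel (e-mail, TVs, chat...)."""
    for author in sorted(culprits):
        print(f"@{author}: {len(failed_tests)} tests failed since your commit, please check.")

if __name__ == "__main__":
    history = [
        Commit(101, "alice", "Refactor compiler front-end"),
        Commit(102, "bob", "Fix installer step"),
    ]
    notify(find_culprits(history, last_green_revision=100), ["InstallerSmokeTest"])
```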
  47. From CINTIA PoC to a real CI system Challenge: how to achieve fast feedback with 100,000 test executions (taking into account all the stack combinations)? Do we really need to always run all the tests for all the stack combinations?
  48. From CINTIA PoC to a real CI system Let’s apply some risk management here ● Run almost all the tests for 1-2 particular stacks on each commit ● Run all the other tests weekly, on milestones and prior to releases
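The risk-based selection in slide 48 boils down to picking which stack combinations to exercise for each trigger. A small sketch of that decision, where the stack names and the exact split are placeholders rather than OutSystems' real configuration:

```python
"""Hypothetical sketch of risk-based test selection per trigger."""
from enum import Enum

class Trigger(Enum):
    COMMIT = "commit"
    WEEKLY = "weekly"
    RELEASE = "release"

# Assumption: two representative stacks get almost all tests on every commit.
COMMIT_STACKS = [".net-sqlserver", ".net-oracle"]
ALL_STACKS = COMMIT_STACKS + ["java-jboss-oracle", "java-weblogic-db2"]  # placeholders

def stacks_to_test(trigger: Trigger) -> list[str]:
    """Pick which stack combinations to exercise for a given trigger."""
    if trigger is Trigger.COMMIT:
        return COMMIT_STACKS   # fast feedback on every commit
    return ALL_STACKS          # full matrix weekly, on milestones and releases

if __name__ == "__main__":
    for trigger in Trigger:
        print(trigger.value, "->", stacks_to_test(trigger))
```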
  49. Still some problems with tests... Even with this choice, there were still open challenges: ● Reinventing the wheel with custom Python orchestration ● Too many tests to execute (still not-so-fast feedback) ● Unclear test categorization (unit, integration, etc.) ● A “monolithic” test stage ● Flaky tests
  50. CINTIA growth Started using GoCD (an open-source CD tool)
  51. 3 test stages with different speeds (~8,000 tests): 10 min, 30 min, < 1 hour Parallelization
  52. Automatic flaky test detection
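The deck does not describe how flaky tests are detected, only that detection is automatic. One common heuristic, sketched here as an assumption, is to flag any test that both fails and passes against the same revision (for example, failed and then passed on a retry with no code change):

```python
"""Hypothetical sketch of automatic flaky-test detection."""
from collections import defaultdict

def find_flaky_tests(results: list[tuple[str, int, bool]]) -> set[str]:
    """results: (test_name, revision, passed) tuples collected across runs and retries."""
    outcomes: dict[tuple[str, int], set[bool]] = defaultdict(set)
    for test_name, revision, passed in results:
        outcomes[(test_name, revision)].add(passed)
    # Flaky = the same test both passed and failed for at least one revision.
    return {test for (test, _), seen in outcomes.items() if len(seen) == 2}

if __name__ == "__main__":
    runs = [
        ("LoginTest", 101, False),
        ("LoginTest", 101, True),      # passed on retry -> flaky
        ("CompilerTest", 101, False),
        ("CompilerTest", 101, False),  # consistently failing -> not flaky
    ]
    print(find_flaky_tests(runs))      # {'LoginTest'}
```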
  53. CI Culture How can we make sure people are fixing the issues? How can we motivate them? Give them visibility :-)
  54. CINTIA on TVs
  55. More challenges... Despite the TVs giving visibility, developers needed more detail: Why is the test failing? How can they troubleshoot it?
  56. Centralized test history (message, stack trace, environment)
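A sketch of what one record in that centralized test history could hold, following the fields named on the slide (message, stack trace, environment). The record layout and query helper are illustrative assumptions, not the actual schema:

```python
"""Hypothetical sketch of a centralized test-history record and lookup."""
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TestExecution:
    test_name: str
    revision: int
    environment: str      # e.g. ".NET / SQL Server" or "Java / Oracle" (placeholders)
    passed: bool
    message: str          # failure message, empty when passed
    stack_trace: str      # failure stack trace, empty when passed
    executed_at: datetime

def history_for(test_name: str, executions: list[TestExecution],
                limit: int = 20) -> list[TestExecution]:
    """Return the most recent executions of one test, newest first."""
    matching = [e for e in executions if e.test_name == test_name]
    return sorted(matching, key=lambda e: e.executed_at, reverse=True)[:limit]
```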
  57. More challenges... What if something misbehaves?
  58. Monitoring integrated with Slack!
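Slack integration of this kind is usually a short script posting to an incoming webhook, which accepts a JSON payload with a "text" field. A minimal sketch; the webhook URL and alert wording are placeholders:

```python
"""Hypothetical sketch of pipeline monitoring alerts posted to Slack."""
import requests

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def alert(message: str) -> None:
    """Post an alert message to the team's Slack channel via an incoming webhook."""
    response = requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)
    response.raise_for_status()

if __name__ == "__main__":
    alert(":rotating_light: CI: test stage 2 has been red for 45 minutes on trunk")
```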
  59. We reached today! (We are now in 2016, November 22)
  60. The Continuous Delivery Journey BEFORE (2 years ago): ~10,000 tests in ~26 hours ● No daily visibility ● Slow builds ● Unreliable environments ● Flaky/unstable tests ● Long feedback loop (testing all stacks) NOW: ~8,000 tests in ~1 hour ● Full daily visibility ● Fast/incremental builds ● Reliable test environments ● Focus on creating fast, well-designed tests ● Fast feedback loop (testing 2 stacks)
  61. The future - Open Challenges
  62. The future - Open Challenges What are we up to now? What challenges are we facing? ● Align the validation process with the product architecture ● Ownership ● Having the right tests / design for testability ● Refactoring in a “moving train” ● Single-branch (per major) development ● Culture and mindset (e.g., “you break it, you fix it, fast”) ● Take developers out of the release decision Achieve Continuous Delivery :-)
  63. The future
  64. Thank You! Diogo Oliveira diogo.oliveira@outsystems.com
