Tom Leach and Travis Thieman of GameChanger talk about their experiences migrating their build and deploy pipeline from being heavily based on Chef to one based around Docker.
This presentation is split into two main sections. The first covers the motivations for why GameChanger, as a fast-growing startup, identified a need to replace its existing Chef-based deploy model with one that reduces deploy-time risk and allows its engineering team to scale.
The second section is a high-level walkthrough of the new GameChanger deploy pipeline based around Docker.
Monufacture: Effortless Test Data for MongoDB - Tom Leach
One of the biggest selling points of MongoDB is its ability to directly persist arbitrary object structures without requiring the developer to navigate issues like building an ORM layer. However, this flexibility comes at a price - creating meaningful test data which adheres to these more complex structures can be much more involved.
At GameChanger we observed that developers typically had to write large amounts of test data setup boilerplate to perform an effective test against a MongoDB-dependent function, disincentivizing them from writing rigorous tests. So we created Monufacture - a Python test data generation framework for MongoDB that makes setting up test data a breeze.
In this talk I break down some of the motivations and design decisions behind Monufacture, demo its functionality, and give some tips on how to write effective tests of your MongoDB-dependent code.
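To make the boilerplate problem concrete, here is a minimal factory-style sketch of the idea the abstract describes: a helper that builds realistic nested MongoDB documents with sensible defaults, so each test states only the fields it cares about. (The schema and function names are hypothetical illustrations, not Monufacture's actual API.)

```python
import uuid
from datetime import datetime, timezone

def game_doc(**overrides):
    """Build a nested game document with sensible defaults.

    Any top-level field can be overridden per-test, so a test supplies
    only the data it actually asserts on. (Hypothetical schema for
    illustration; not Monufacture's real API.)
    """
    doc = {
        "_id": uuid.uuid4().hex,
        "sport": "baseball",
        "status": "scheduled",
        "start_time": datetime.now(timezone.utc),
        "teams": {
            "home": {"name": "Home Team", "score": 0},
            "away": {"name": "Away Team", "score": 0},
        },
    }
    doc.update(overrides)
    return doc

# A test now needs one line of setup instead of a wall of boilerplate:
finished = game_doc(status="final",
                    teams={"home": {"name": "Hawks", "score": 5},
                           "away": {"name": "Owls", "score": 2}})
```

The returned dict can be passed straight to `insert_one` on a pymongo collection in a test fixture.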
End-to-End test architectures, a dead-end road - Roy Braam
With the rise of distributed architectures, independent DevOps teams, and automated CI/CD, end-to-end test environments need to be reconsidered.
They become flaky, untrustworthy, and hard to maintain. Why are end-to-end test environments a dead-end road, and what are the alternatives?
Why are people still using these so-called 'production-like' test environments, and how can we achieve the same level of software quality without them?
After attending this talk, I hope people will question their end-to-end test environments.
I will give some ideas on how to solve the testing problem in a different way, depending less on those fragile environments.
Load Testing with Open Source covers:
#1 Common sense in load testing
#2 A review of open source load testing tools, including JMeter (http://jmeter.apache.org/), Gatling.io, and others
#3 Why continuous load testing (Jenkins)
#4 Why load testing is interesting to me, and the start of Redline13 (https://www.redline13.com/)
Delivered at Fosscon (http://fosscon.us/), Philadelphia, 2015.
The talk "Divide and stress: the journey to component load testing" was given at ExpoQA 2017 in the Quality Assurance and Performance track.
It describes the most common pains that big companies suffer in their load testing processes, with the expensive cost of maintaining 1:1 replicas of the production environment for performance testing.
To reduce those costs, The Workshop designed a new methodology that aims to cut operational costs, reduce human error, and enable performance testing in Continuous Delivery pipelines; it can also be adopted in Continuous Deployment scenarios.
Component Based Load Testing (CBT) is a methodology designed at The Workshop (http://theworkshop.com) that rethinks what the future of performance testing should look like.
CBT introduces load test execution as part of CD pipelines, ensuring product quality through defined exit criteria for the main metrics and determining whether the changes in a new release are ready to progress to the next environment (stage, prod, etc.).
CBT uses a pool of resources efficiently, making them available for any load test execution requested by any of our products. Its main mission is to reduce the operating costs of maintaining 1:1 replicas of production environments by running performance tests in Docker containers, on a reduced pool of resources, for a short time, in highly volatile environments.
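An exit-criteria gate of the kind CBT describes can be sketched as a small pipeline step that compares measured metrics against thresholds and decides whether the release may progress. (A minimal illustrative sketch with assumed metric names and thresholds, not The Workshop's actual implementation.)

```python
# Assumed exit criteria for a CD pipeline gate (illustrative values).
THRESHOLDS = {
    "p95_latency_ms": 250.0,   # max 95th-percentile response time
    "error_rate": 0.01,        # max fraction of failed requests
    "throughput_rps": 100.0,   # min requests per second sustained
}

def passes_exit_criteria(metrics):
    """Return (ok, failures): ok is True when every metric meets its
    threshold; failures lists the metrics that did not."""
    failures = []
    if metrics["p95_latency_ms"] > THRESHOLDS["p95_latency_ms"]:
        failures.append("p95_latency_ms")
    if metrics["error_rate"] > THRESHOLDS["error_rate"]:
        failures.append("error_rate")
    if metrics["throughput_rps"] < THRESHOLDS["throughput_rps"]:
        failures.append("throughput_rps")
    return (not failures, failures)

# A passing load test run lets the release progress to the next environment:
ok, failed = passes_exit_criteria(
    {"p95_latency_ms": 180.0, "error_rate": 0.002, "throughput_rps": 140.0})
```

In a real pipeline the step would exit non-zero on failure so the CD tool stops the promotion.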
Now you have finished your site and someone asks you: how many users can we serve before we need more power and muscle in our server environment? Good question! If you don't know how to find that out, how to measure it, and how to find the bottlenecks, come to this session. You'll find out how to get started and learn more about tools for ColdFusion application load testing and how to use them.
See Video Recording of Talk at NCDevCon here:
http://goo.gl/Obia8
QA Fest 2019. Oleksii Ostapov. Load testing in 5 minutes. A comparison of... - QAFest
At first I wanted to tell yet another success story - how we tested the performance of the services in our project "in 5 minutes" - but there are many such stories. I was advised to do more, and that is how this talk came about. We will compare the capabilities of different load testing tools and how to use them.
Performance testing with 100,000 concurrent users in AWS - Matthias Matook
M-Square built an easily scalable performance test solution on AWS, using open source tools and CI servers, to allow cost-effective testing at scale. The solution is suitable for any type of organisation, from startup to enterprise.
The talk covers VPC, EC2, S3, ELBs, AWS API scripting, automation, and interesting performance issues when running massive workloads on AWS.
Load testing is essential for any software to test its performance before it's out there in the market. Load testing simulates various load circumstances and tests the endurance of the software. However, to do this, testers usually depend on various tools. Let's have a look.
Octopus Deploy is a tool for .NET deployment automation. You can use it to deploy IIS websites, Windows services, and even certificates and scripts that you need to run on remote machines.
Octopus Deploy has the potential to make deploying from the build server to remote machines painless and repeatable - but there are some things you may want to know up front to make that happen. This session will explore why you might want to try Octopus Deploy, what sort of issues you may run into, and how Ocuvera uses Octopus to manage our on-premise product installations and updates.
BugRaptors uses different types of tools for performance and load testing. One of the tools we use is JMeter, to analyze the performance of web applications and mobile apps under varying load. It can test the performance of both static and dynamic resources, such as static files, Java Servlets, ASP.NET, PHP, CGI scripts, Java objects, databases, FTP servers, and more.
Performance testing is testing an application for speed, stability, and scalability in a production-like environment under virtual user load, to verify that it meets non-functional requirements.
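Speed, stability, and scalability are usually judged through concrete metrics such as percentile latencies. A minimal sketch of the nearest-rank percentile calculation such a load test report relies on (illustrative, not tied to JMeter or any particular tool):

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of response times (ms)."""
    ordered = sorted(samples)
    # nearest rank: smallest value with at least pct% of samples <= it
    rank = max(1, -(-len(ordered) * pct // 100))  # ceiling division
    return ordered[int(rank) - 1]

# Response times collected from one load test run, in milliseconds:
latencies_ms = [120, 95, 180, 110, 300, 105, 98, 145, 125, 101]
p50 = percentile(latencies_ms, 50)  # median latency
p95 = percentile(latencies_ms, 95)  # tail latency most SLAs care about
```

The gap between p50 and p95 is often the first signal of instability under load: a healthy median can hide a painful tail.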
How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything? Is it really possible to introduce a deployment routine that works for everyone?
In less than six months, Karoline transformed the deployment routines at Epinova by introducing Octopus Deploy to the organization. She will take you through the steps needed to get started, the pitfalls along the way, and the success that Octopus Deploy has become.
In this workshop we will start out by installing an Octopus Deploy server and tentacle on your laptop, before looking at the basic concepts of Environments, Machines, Roles and Projects. You will create a project of your own and deploy this using Octopus Deploy before we round off by looking at the advanced topics of Script modules, Step templates, Variable sets and Retention Policies.
At the end of this workshop, you'll have all the knowledge you need in order to create a more efficient and failproof deployment process for your project. Keep calm and deploy to production!
An introduction to the advantages of the features of JMeter 4.0. In addition, I will talk a little bit about the way a real project applies it for continuous integration on TeamCity to get test results every day.
In this advanced session, we will investigate all the ways that you can automate your testing processes with TestBox and many CI and automation tools. From Jenkins integration, Travis CI, Node runners, Grunt watchers and much more. This session will show you the value of continuous integration and how to apply it with modern tools and technologies.
Main Points
Why we want to automate
Continuous Integration
ANT/CommandBox Test Runner
Setup of a Jenkins CI server
Travis CI integration
Pipelines CI integration
Node TestBox Runners
Grunt Watchers and Browser Live Reloads
Moving to Microservices with the Help of Distributed Traces - KP Kaiser
Moving away from a monolith to a microservices architecture is a process fraught with hidden challenges. Legacy code, infrastructure, and organizational processes all need to change in order to make the switch successful.
But microservices come with a huge increase in infrastructure complexity. We'll see how distributed traces empower developers to work with greater autonomy, in increasingly complex deployment environments.
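The core mechanism behind distributed tracing can be sketched in a few lines: every hop keeps the same trace ID and records its parent span, so a tracing backend can stitch a request's path back together across services. (A simplified W3C-style sketch; real systems would use something like OpenTelemetry rather than this hand-rolled context.)

```python
import uuid

def new_trace():
    """Start a root span when a request enters the system."""
    return {"trace_id": uuid.uuid4().hex,
            "span_id": uuid.uuid4().hex[:16],
            "parent_id": None}

def child_span(ctx):
    """Create a span for a downstream call: same trace_id, new span_id,
    parent_id pointing at the caller's span."""
    return {"trace_id": ctx["trace_id"],
            "span_id": uuid.uuid4().hex[:16],
            "parent_id": ctx["span_id"]}

root = new_trace()             # request enters service A
downstream = child_span(root)  # service A calls service B
```

In practice the context travels between services in request headers (e.g. a `traceparent` header), which is what lets developers follow one request through an arbitrarily complex deployment.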
The new buzzword in the world of Agile is "DevOps". So what exactly is DevOps, and why do we need it? When development got married to deployment (sys-admin/operations), what was born is a new advanced species known to us today as "DevOps".
You are already the Duke of DevOps: you have mastered CI/CD, some of your feature teams include ops skills, your TTM rocks! But you have difficulties scaling it. You have some quality issues; QoS is at risk. You are quick to adopt practices that increase the flexibility of development and the velocity of deployment. An urgent question follows on the heels of these benefits: how much confidence can we have in the complex systems that we put into production? Let's talk about the next hype in DevOps: SRE, error budgets, continuous quality, observability, Chaos Engineering.
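The error budget mentioned above is simple arithmetic: an availability SLO of, say, 99.9% leaves 0.1% of the period as allowed downtime. A quick sketch (the 30-day rolling window is an assumption for illustration):

```python
def error_budget_minutes(slo, days=30):
    """Allowed downtime in minutes for a given availability SLO
    over a rolling window of `days` days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1.0 - slo)

# 99.9% availability over 30 days leaves about 43.2 minutes of downtime:
budget = error_budget_minutes(0.999)
```

When the budget is spent, the team stops shipping risky changes and invests in reliability instead; when it is not, velocity wins. That trade is the heart of the SRE practices the talk covers.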
This is a presentation I gave to 100+ people at Rev1 Ventures in Columbus, OH. The presentation was about how to define DevOps. Like any new concept, there are multiple and sometimes competing definitions. I've found that implementations of DevOps can change but there are some very common anti-patterns. Lastly, I talk about how we implement DevOps at Bold Penguin.
DevOps is a methodology capturing the practices adopted from the very start by the web giants, who had a unique opportunity as well as a strong requirement to invent new ways of working due to the very nature of their business: the need to evolve their systems at an unprecedented pace and to extend them, and their business, sometimes on a daily basis.
While DevOps makes obviously a critical sense for startups, I believe that the big corporations with large and old-fashioned IT departments are actually the ones that can benefit the most from adopting these principles and practices.
Five Ways Automation Has Increased Application Deployment and Changed Culture - XebiaLabs
Paychex, a recognized leader in the payroll, human resource, and benefits outsourcing industry, found that the demand for application deployments had increased beyond what could be supported by manual configuration. Keeping up with this demand required a shift from manually providing a service to developing an automated platform for self-service resulting in a culture change with new partnering across their DEV, OPS and Architecture teams.
David Jozis, Automation Engineer at Paychex, discusses the challenges they encountered when making these significant changes and how they were able to overcome them to accomplish 5x as many deployments as before.
WinOps Meetup April 2016: DevOps lessons from Microsoft //Build - DevOpsGroup
Some DevOps lessons from the 2016 Microsoft Build conference that were presented at the London WinOps meetup in April 2016. Most of the material was taken from the Microsoft presentations available here - https://channel9.msdn.com/Events/Build/2016?wt.mc_id=build_hp
Building a full-stack app with Golang and Google Cloud Platform in one week - Dr. Felix Raab
The talk will cover how to effectively build a production-ready, full-stack app with Golang and GCP under time constraints. I'll discuss how to approach making quick and sound technical decisions and how to apply modern software engineering practices for end-to-end apps. The presentation shows, in an opinionated and "meme-ful" way, various lessons learned, tools, and key takeaways for cloud environments.
Scaling Up Lookout was originally presented at Lookout's Scaling for Mobile event on July 25, 2013. R. Tyler Croy is a Senior Software Engineer at Lookout, Inc. Lookout has grown immensely in the last year: we've doubled the size of the company, added more than 80 engineers to the team, support 45+ million users, have over 1,000 machines in production, and see over 125,000 QPS and more than 2.6 billion requests/month. Our analysts use Hadoop, Hive, and MySQL to interactively manipulate multibillion-row tables. With that, there are bound to be some growing pains and lessons learned.
In this slide, we show the simulation example and the way to compile this solver.
In this solver, the Helmholtz equation can be solved by helmholtzFoam. Also, the Helmholtz equation with uniformly dispersed bubbles can be simulated by helmholtzBubbleFoam.
Accelerate Enterprise Software Engineering with PlatformlessWSO2
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
Developing Distributed High-performance Computing Capabilities of an Open Sci...Globus
COVID-19 had an unprecedented impact on scientific collaboration. The pandemic and its broad response from the scientific community has forged new relationships among public health practitioners, mathematical modelers, and scientific computing specialists, while revealing critical gaps in exploiting advanced computing systems to support urgent decision making. Informed by our team’s work in applying high-performance computing in support of public health decision makers during the COVID-19 pandemic, we present how Globus technologies are enabling the development of an open science platform for robust epidemic analysis, with the goal of collaborative, secure, distributed, on-demand, and fast time-to-solution analyses to support public health.
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
5. • Scorekeeping
• 150+ Stats
• Live Gamestream
• Team management
• 12TB (10 histories of pro sports)
• 10 MongoDB shards
• 100-400 app servers
• 50K games/day (10K concurrent)
• 3000 w/s, 30,000 r/s
GameChanger’s market is amateur sports. Whereas ESPN caters to a handful of top professional teams in the country, GameChanger provides free tools to the millions of amateur sports teams around the world.
6. ELIMINATING DEPLOY-TIME RISK
This graph shows the number of requests/second received by one of our services over the last week. The area under the graph is broken down by host. You can see that we are scaling our hosts up and down in response to demand.
At GC we have an extremely spiky traffic profile, so using autoscaling is important to control costs. Therefore it’s very important not only to deploy new application code to existing servers but also to be able to very reliably build new servers with minimal risk.
7. Chef Server
[Diagram: a single Chef Server with six App Servers, each periodically pulling configuration from it]
To illustrate the risks associated with a traditional Configuration Management approach to building servers, let’s look at the typical Chef architecture.
The CM server hosts the current valid configuration data for the cluster (and by configuration data we also mean setup scripts, etc.).
The developer is responsible for pushing new configuration to the CM server, and then all app servers periodically pull and execute the latest scripts.
Risks:
- CM server is a SPOF. Chef is painful to scale out.
- CM server needs to be scaled to support max conceivable cluster size (or we have problems when we need it most)
- Thundering herd: every app server pulls from the CM server on the same schedule
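One standard mitigation for the thundering-herd problem is to add a random splay to each node's poll interval (chef-client exposes this as its `--splay` option). A minimal sketch of the idea:

```python
import random

def next_run_delay(interval_secs, splay_secs, rng):
    """Base poll interval plus a random offset, so nodes don't all hit the CM server at once."""
    return interval_secs + rng.uniform(0, splay_secs)

# Five nodes polling every 30 minutes with up to 5 minutes of splay:
rng = random.Random(42)
delays = [next_run_delay(1800, 300, rng) for _ in range(5)]
```

Splay spreads the load out, but it only softens the problem; the CM server is still a single point the whole fleet depends on at deploy time.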
8. github.com/miketheman/knife-role-spaghetti
This is a visualization of GC’s role/recipe dependencies in Chef before we moved to Docker.
Risks:
- Spaghetti-like dependencies are impossible to reason about (what happens if I upgrade node.js?)
- Dependencies are indirect and not explicit
- Testing is expensive and time consuming. Devs are disincentivized from testing.
- Coupling issues not discovered until deploy time -> can take down your cluster.
- Rollback can be painful
Deploy-time dependencies on multiple external repositories are a big risk. The mitigations available in the old model are unappealing:
- Build AMIs (complex, heavy, time consuming, does not allow us to iterate fast enough)
- Host your own mirror for services like PyPI. But then who owns the maintenance of that mirror?
10. HOW DOES DOCKER ELIMINATE THESE RISKS?
• Assets are baked into an immutable image at build time
• No deploy-time dependencies on 3rd party repos
• Docker registry is simple and easy to scale
• Dependencies simple, explicit and direct
• Rollback is trivial
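To make the first point concrete, here is a hypothetical Dockerfile (not GameChanger's actual one; file and image names are illustrative) showing how application code and third-party dependencies get baked into an immutable image at build time:

```dockerfile
# Hypothetical image for the Python + Postgres app discussed later.
# Everything the app needs is fetched here, at build time.
FROM python:2.7

# Third-party dependencies are resolved once, when the image is built,
# so a PyPI outage can no longer break a deploy.
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# The tested application code (a specific Git hash) is baked in too.
COPY . /app
WORKDIR /app
CMD ["python", "server.py"]
```

The resulting image is immutable: rolling back is just running the previous image.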
11. SCALING ENGINEERING
A less obvious problem with traditional CM approaches is how they inhibit the scaling of engineering. Let’s illustrate with an example…
12. Application
Feature Team
Like many companies, GC’s product started out as a small Python app developed by a couple of people. At this point we only have a few users, so we can run on a couple of servers and deployment is a simple manual step.
13. Application
Feature Team Feature Team
As we build more features our application gets bigger and we hire more people to help build and maintain those features. We’re still doing some form of manual deployment at this point, and though it’s starting to become a bottleneck we’re still prioritizing feature development.
14. Monolithic Application
Feature Team Feature Team Feature Team
We grow further and our application grows, accumulating more and more responsibilities. The need to coordinate test + build + deploy necessitates an Ops team to own this problem.
15. Monolithic Application
Feature Team Feature Team Feature Team
Deployment (Test + Build + Deploy)
Ops Team
Following a more “DevOps” mantra, these responsibilities form more of a continuum. Devs care about getting their code to prod, Ops care about what the code does, both cooperate.
Deploying a monolithic app in this way actually works pretty well. The tech stack is fairly static, and forms a shared context which minimizes the friction between dev and ops teams.
The problem for GC was that this monolithic architecture scaled poorly for us:
- Poor ownership boundaries
- Quality of shared components suffered
- Introducing new languages is difficult to sell
- Different features have different CAP requirements
- Operational problems derived from indirect coupling
16. [Diagram: three Feature Teams, each owning several microservices (μ), with a single Ops Team responsible for Deployment (Test + Build + Deploy)]
Solution: Teams own collections of independently-scalable microservices with clear ownership boundaries.
But this poses a problem for our previous deployment approach:
- Suddenly Ops need to know how to deploy an ever growing list of technologies
- Information friction between Dev and Ops is high as the context is dynamic
- Deployment using something like Chef becomes more and more complex
- As feature teams are added, Ops becomes a bottleneck, the relationship risks becoming adversarial
17. Conway’s Law
“Any organization that designs a system … will inevitably produce a design whose structure is a copy of the organization's communication structure.”
–Melvin Conway, 1968
A collection of teams that design a system will inevitably produce a design which evolves from the minimum amount of out-of-band communication required between those teams.
18. [Diagram: three combined Feature + Ops Teams, each owning its own microservices (μ) and its own Deployment pipeline]
Faced with the high-traffic, high-complexity communication needed to get software deployed, teams will be motivated to compartmentalize the way they approach deployment. This is much better, as the contextual footprint for each mini-Ops team is manageable.
But there is still a problem here. We risk duplicating effort across teams on “core” deployment activities.
19. CORE DEPLOYMENT TASKS
• Log rotation
• User account creation, sudoers, SSH keys
• Continuous Integration
• Metrics
• DNS
• Monitoring & alerting
• ulimits
• Tool installation
• …
All of these are important. Doing them well requires that they be owned, continuously improved, and maintained as a first-class system asset. On feature teams they will not be treated in this way; we’ll duplicate effort building several half-formed implementations of these things.
20. [Diagram: three Feature Teams each building images for their own microservices (μ), feeding a “Core” Deployment Pipeline owned by the Ops Team]
We still needed an Ops team to own the core parts of deployment, but we needed a way to ensure the interface between the feature and Ops teams had low information friction and didn’t require the Ops team to understand n different tech stacks.
We could have tried to use Chef to do this by making each team own its own roles, but you end up running into problems around shared dependencies, global state and indirect coupling.
Docker provides a neat abstraction which allows these responsibilities to be separated clearly and scalably.
21. HOW DOES DOCKER ALLOW US TO SCALE ENGINEERING?
1. Development team has complete control over what they deploy
2. Core deployment can still be owned by a dedicated team as a first-class concern
3. Small shared context needed for cross-team communication
1. allows us to scale teams out linearly without creating a centralized bottleneck
2. eliminates wasted duplicate effort and the effort of maintaining a substandard system
3. eliminates wasted effort communicating complex requirements in an out-of-band way
23. Test Build Deploy
We’re going to run through the test-build-deploy pipeline at GameChanger. We’re using a separate service for each of those, so let’s introduce the cast of characters.
28. Test Build Deploy
We’re going to go through what it takes to wire up an application to work with our pipeline…
29. Test Build Deploy
…and while we’re doing it, we’re going to highlight the ways in which Docker helps us achieve the goals that Tom was talking about earlier.
30. Python + Postgres: A Simple Application
Let’s consider a simple Python application that works with a Postgres database. It has a bunch of unit tests, including unit tests that require connecting to an actual Postgres instance to run.
31. Test
Removes some dependency setup concerns from application dev
Tests are fully isolated, coupling is minimized
Fast and parallelizable, multiple teams can work on a single app without slowing each other down
Drone
The Test Runner
Drone, as we mentioned before, is what we use to run tests. What are the benefits we get from using Drone?
36. Server
Host OS
Each test run is fully isolated using containers
Parallel testing becomes trivial
Fast testing of PRs reduces likelihood of breaking the build
Test
37. Build
Receives Git hash and static dependencies from Drone
Builds Docker images, pushes to our private Docker registry
Jenkins
The Build Server
39. Tested application code (specific Git hash)
Same library versions used to test that code, e.g. pip freeze, npm shrinkwrap
All system libraries, drivers, etc.
Build
What exactly are we putting into our images?
Note that our image will *not* contain service dependencies like the Postgres we want to run against. We have a few options for how to connect to a database at runtime.
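One common way to wire up that runtime database connection, sketched below with hypothetical variable names, is to inject the connection details as environment variables when the container is started (e.g. `docker run -e DB_HOST=…`):

```python
import os

def postgres_dsn(env=os.environ):
    """Build a connection string from environment variables injected at `docker run` time."""
    host = env.get("DB_HOST", "localhost")
    port = env.get("DB_PORT", "5432")
    name = env.get("DB_NAME", "app")
    user = env.get("DB_USER", "app")
    return "postgresql://{user}@{host}:{port}/{name}".format(
        user=user, host=host, port=port, name=name)

# Same image, different environments: only the injected config changes.
dsn = postgres_dsn({"DB_HOST": "db.internal", "DB_NAME": "scores"})
```

This keeps the image itself environment-agnostic: the exact same artifact runs in test, staging, and production.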
41. [Diagram: Drone hands off to Jenkins; Jenkins pulls dependencies from PyPI (crossed out with an X) and pushes the built Image to the Registry]
So this is how we make the build work with Docker. Why is this actually better than the traditional model? Well… what if we put a bullet in PyPI and are no longer able to get our library dependencies?
Before we answer that, what happened in the old world? We’d deploy new code, our servers would all try to pull from PyPI, fail, and freak out. Is our site down? Are we up but with incorrect or partial dependencies? It’s not great. What happens with Docker?
42. [Diagram: the same pipeline, with the PyPI outage (X) now hitting Jenkins at build time rather than the app servers at deploy time]
With Docker, that risk is moved from deploy time to build time. Jenkins will try to pull from PyPI and fail. It won’t be able to push a new image with your updated code. This is usually a good thing! You can have confidence that all the images you *do* have will have all their dependencies and be fully working images.
43. Mostly a thin API on top of our Docker registry
Also owns triggering deploys across our infrastructure
Deploy
Bagel
The Deploy Service
Bagel gives us a way to coordinate the images in our Docker registry with their corresponding Git tags, the dependencies that were baked into the images, etc.
I think just go into a quick demo here, show some cool dependency diffs and PR messages or something.
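As a toy illustration of the kind of bookkeeping such a service does (this is not Bagel's real API, just a sketch with made-up names): given the registry's newest-first build history, map a Git hash to its image and pick the previous image for rollback:

```python
def image_for(repo, builds, git_hash):
    """builds: newest-first list of (git_hash, tag) pairs pushed to the registry."""
    for sha, tag in builds:
        if sha.startswith(git_hash):
            return "{}:{}".format(repo, tag)
    raise KeyError(git_hash)

def rollback_target(repo, builds):
    """Rollback is trivial: just redeploy the previous immutable image."""
    return "{}:{}".format(repo, builds[1][1])

builds = [("a1b2c3d", "v42"), ("9f8e7d6", "v41")]
current = image_for("registry.gc.internal/scores", builds, "a1b2c3d")
previous = rollback_target("registry.gc.internal/scores", builds)
```

Because every image in the registry is a complete, tested artifact, "deploy version X" and "roll back to version X-1" are the same operation.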
46. All our machines run identical OS-level images
Images and runtime config specified via YAML
Deploy mechanism on each machine reconciles spec and containers currently running
Similar to Docker Compose
What happens when we hit the Deploy button in Bagel?
All our boxes run off the same machine image (AMI). A YAML file specifying which of our apps should be deployed to that box is all that distinguishes it. Our pretty-dumb deploy scripts (triggered by Bagel) handle matching the running state of that box’s containers to what’s in this YAML file and the current deployed versions according to Bagel.
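That reconciliation step can be sketched as follows (a simplification with made-up data structures, not our actual deploy script): diff the desired containers in the YAML spec against what's running, then start and stop the difference:

```python
def reconcile(desired, running):
    """desired/running map container name -> image tag.

    Returns (to_start, to_stop): containers whose image differs are
    restarted on the new image; stray containers are stopped.
    """
    to_start = {name: img for name, img in desired.items()
                if running.get(name) != img}
    to_stop = [name for name, img in running.items()
               if desired.get(name) != img]
    return to_start, sorted(to_stop)

# Desired state (parsed from the box's YAML file) vs. what's currently running:
desired = {"scores-api": "scores:v42", "push-svc": "push:v7"}
running = {"scores-api": "scores:v41", "old-cron": "cron:v1"}
to_start, to_stop = reconcile(desired, running)
```

In production the "running" side would come from querying the Docker daemon; here plain dicts stand in for both sides.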
47. Test Build Deploy
That’s our deploy pipeline. Using Docker, we’ve seen significant gains in simplicity and developer productivity across our test, build, and deploy stages. Our feature teams can release new services with ease, and our Ops team has been phased out of existence. Our engineers are now free to focus on problems that benefit our customers and our business.
Before I go, just a few closing thoughts on Docker as someone who’s spent a bit of time with it…