Exploration of factors affecting Config Management code (Chef, Ansible, Puppet, etc) as it ages over time, and what you can do to keep it maintainable. As presented at DevOpsDays Ohio 2016.
Annotating Images for Machine Learning Models: 5 Common Misconceptions (Hitech BPO)
Don't let common misconceptions hamper the quality of image annotation and hurt the lifecycle of your machine learning models. The presentation clarifies 5 common misconceptions to help you build quality image datasets and rightly drive your machine learning implementation.
This document summarizes Dan Parkes' experience working as a data analyst at Pixel Toys. It discusses his work on three shipped mobile games, with a focus on Warhammer 40,000: Freeblade. It provides tips on good telemetry design, including planning data collection upfront, only capturing necessary data, and culling unused data. It also discusses challenges around limited data resources and balancing retroactive vs. proactive data analysis. The document advocates for automating dashboard and data sharing processes while maintaining flexibility.
This talk describes my way from a lead test engineer to a senior product manager. I am also sharing information about my book Hands-On Mobile App Testing and the testing community.
GIAF UK Winter 2015 - Analytical techniques: A practical guide to answering b... (Lauren Cormack)
The document provides an overview of analytical techniques for answering business questions. It discusses the four pillars of analytics: data munging, reporting and visualization, analysis and insights, and applied analytics. Specific topics covered include A/B testing best practices, reporting and visualization tools like Tableau, using multiple data sources for analysis, and best practices for data analysis and communication. The document is intended as a practical guide for those working in analytics to help tackle business issues.
Software management... for people who just want to get stuff done (Ciff McCollum)
This document discusses concepts and techniques for software project management, including planning, estimation, execution, and retrospectives. It covers these concepts at the level of projects, milestones within projects, sprints, and individual stories. Key points emphasized include breaking work into small chunks, using techniques like planning poker and burndown charts, being honest about estimates, and using retrospectives to improve.
Planning Poker is a technique used to estimate effort for tasks in Agile software development. It involves each team member privately selecting a planning poker card representing their estimate for a task. The cards have Fibonacci numbers written on them. The cards are then revealed and discussed if estimates differ, until consensus is reached. Once estimates are established, the team's velocity (amount of work completed per sprint) can be used to predict future release dates. Planning Poker works well because it leverages the wisdom of crowds and averages individual estimates for more accurate results.
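The reveal-discuss-converge loop and the velocity arithmetic described above can be sketched in a few lines of Python. The card values, team estimates, and the converge-on-the-mean shortcut are illustrative assumptions, not details from the deck (in practice, convergence happens through discussion, not averaging).

```python
# Hypothetical sketch of the Planning Poker flow described above.
FIB_CARDS = [1, 2, 3, 5, 8, 13, 21]

def nearest_card(value):
    """Snap an arbitrary value to the nearest Fibonacci card."""
    return min(FIB_CARDS, key=lambda c: abs(c - value))

def planning_poker_round(estimates):
    """One round: if all revealed cards agree, that's the consensus;
    otherwise the team discusses and re-estimates (modelled here,
    crudely, as converging on the card nearest the mean)."""
    if len(set(estimates)) == 1:
        return estimates[0]
    return nearest_card(sum(estimates) / len(estimates))

# Four team members reveal their cards for one story.
print(planning_poker_round([3, 5, 5, 8]))  # -> 5

# Once stories are estimated, velocity (points completed per sprint)
# predicts how many sprints remain before a release.
def sprints_remaining(backlog_points, velocity):
    return -(-backlog_points // velocity)  # ceiling division

print(sprints_remaining(60, 25))  # -> 3
```

The ceiling division matters: a 60-point backlog at a velocity of 25 needs three sprints, not two, since partial sprints still have to happen.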
This document provides tips for creating effective posters and rates example posters out of 5 based on effectiveness. It advises using color, catchy slogans, statistics, and bold fonts while avoiding overcrowding with text, small fonts, and meaningless images. Readers are asked to evaluate example posters and explain their ratings.
This document discusses various HCI devices and their perceptions versus realities. It addresses voice control, handwriting recognition, and alternatives like keyboards. Expectations of perfect accuracy and ease of use often don't match the actual learning curve and limitations of the technologies. Later sections discuss developing solutions, shifting to platforms, qualifying insights, and addressing perceptions to sell voice control solutions by understanding where accuracy may be acceptable. The document emphasizes understanding how perceptions define realities and delivering the right perceptions through product functions and features.
The document discusses common considerations for outsourcing mobile app development projects to freelancers. It provides advice on both the employer and freelancer's responsibilities for project success. For employers, it recommends providing templates to minimize complexity, updating software weekly, and including contact information. For freelancers, it advises not being afraid to ask questions, following directions, and taking on a manageable workload. The document also outlines typical project milestones like choosing the app type and naming it, and errors that can lead to failure such as miscommunication or poor project management.
This document discusses effort estimation techniques for projects. It describes estimating as forming a judgment about the work required, and mentions common techniques like decomposition, expert judgment, analogy, and planning poker. It also covers risk identification and adding buffers to estimates and schedules to account for risks and uncertainties. Key points emphasized are estimating in hours or days, adding 25% to total costs for buffers, and that more estimation perspectives improve the accuracy and consensus of estimates.
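The buffering rule the summary highlights (adding 25% to total costs) is simple arithmetic; a minimal sketch follows, with made-up task names and hour figures for illustration.

```python
# Illustrative sketch of the 25% risk buffer mentioned above.
# The tasks and their hour estimates are invented for this example.
task_estimates_hours = {"design": 16, "implementation": 40, "testing": 24}

base = sum(task_estimates_hours.values())  # raw estimate: 80 hours
buffered = base * 1.25                     # add 25% for risks and unknowns

print(base, buffered)  # -> 80 100.0
```

The deck's broader point holds regardless of the exact percentage: estimates in hours or days plus an explicit buffer beat optimistic point estimates.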
How to prep an effective kickoff workshop in 3 steps – UX Camp CPH (Magdalena Zadara)
How to get the most out of the start of a project, get your client on board with what you are doing, and make them feel like part of the team. This presentation will be most valuable to UI/UX designers who work directly with clients and have some control over their process.
Boost Your Intelligent Assistants with UX Testing (Applause)
Businesses turn to intelligent assistants to provide 24/7 support for their customers and to increase efficiency. When intelligent assistants are built well, you can foster customer loyalty and support internal processes by automating simple use cases. It’s a win-win for both customers and businesses.
However, when interactions with intelligent assistants become frustrating, they can become a liability.
The key to delivering an effective intelligent assistant is user testing. Join Inge De Bleecker, Senior Director of UX and Conversational AI for Applause, as she breaks down the role user testing plays in the development and growth of intelligent assistants. Learn how to plan and execute a user testing strategy, and use those results to create a highly-capable intelligent assistant.
A debate between leaders from business and research on how near we are to the point at which machines will be better at translating most content, often referred to as the singularity.
Session host: Jaap van der Meer (TAUS)
Presenters and panelists are: Marcello Federico (FBK Trento), Jean Senellart (SYSTRAN), Renato Beninatto (Moravia), Alex Waibel (Karlsruhe Institute of Technology) and Marco Trombetti (Translated).
User stories are estimated in story points to plan project timelines. Story points are a relative unit used to estimate complexity rather than time. The team estimates stories together by first independently assigning points, then discussing to converge on a shared estimate. Velocity is calculated based on the number of points completed in an iteration to predict future capacity. Pair programming may impact velocity but not the story point estimates themselves. Estimates should consider the story complexity and effort from the team perspective rather than individuals.
This document discusses the benefits of using mockups in communication, including reducing miscommunication and using less text. It lists several goals such as having everyone start using mockups and creating them quickly. It then provides examples of mockups demonstrating an edit profile form, indicators for cache status, and a margarita recipe. It also mentions resources for finding mockups and taking a mockup "master class." The overall message is that mockups can improve communication and work processes by making designs and workflows more visual and clear.
How to avoid 6 deadly mistakes when building a digital product 2018 (inFullMobile)
This document provides tips for avoiding common mistakes when building digital products. It outlines 6 key areas to focus on: 1) Solve real problems, not hypothetical ones, 2) Sell the product concept before building it to validate market need, 3) Rely on user research like surveys and interviews rather than guessing, 4) Measure everything to understand user behavior and determine what works, 5) Get buy-in from enterprise users early on through focus groups and observations, 6) Think beyond just product and look at the larger business landscape. Following these tips can help mitigate risks and avoid wasting time and resources on products that do not solve real user needs.
The document provides tips for presentations and discusses outsourcing software development. It lists, in order, the tasks for evaluating and implementing a new system. The tasks of identifying vendors and evaluating alternatives can be done together, as can preparing recommendations and system requirement documentation.
This document discusses project management growth practices and contains recommendations in several areas:
1) Be available to your team to reduce dependencies on you, and optimize around available resources, which may be constrained by project management, engineering, or the team itself.
2) Improve processes by setting up project management software, using demos to drive progress, and dedicating special days to areas like bugs, polish or internal tools.
3) Anticipate risks and have mitigation plans to determine if risks are real problems, and have rollout or other plans to address risks like stability issues.
How to avoid cutting yourself with the double-edged sword of testing metrics:
- Pros and cons of working with metrics
- Planning a metrics program
- Tips and tricks for working with metrics
For full webinar recording:
https://www.practitest.com/qa-learningcenter/webinars/testing-metrics/
10 Tactics for Building an Optimization Culture (Optimizely)
Slides from a presentation of '10 Tactics for Building an Optimization Culture' webinar, hosted by Brooks Bell and Optimizely.
Full webinar recording with audio can be found here: http://optimizely.wistia.com/medias/xf4yk47rml
https://www.optimizely.com/
http://brooksbell.com/
IDCamp x Dicoding Live: Persiapan Jadi Software Engineer Hebat 101 (DicodingEvent)
Is a great software engineer one who masters many programming languages? One who can do a bit of everything? Or one who masters the latest technologies? Every individual has their own standard of "great," depending on their goals, passion, and chosen career path. But one thing is certain: there are concrete steps you can learn to become a great software engineer. What tips and steps can we follow to get there? Sidiq Permana (Co-Founder and CIO, Nusantara Beta Studio) covers this on Dicoding LIVE x IDCamp under the theme "Persiapan Jadi Software Engineer Hebat 101."
What good is data if it doesn't become information? And what good is it if it doesn't answer the burning questions? What good is it if it's irrelevant to the ones who need it most? Avoid demonstrating with certainty that you don't understand the question or the people asking it by practicing Insight Design.
We’re agile, so we don’t have to estimate and have no deadlines, right? Wrong! This session will review the problem with estimations in projects today and then give an overview of the concept of agile estimation and the notion of re-estimation. We’ll learn about user stories, story points, team velocity, and how to apply them all to estimation and iterative re-estimation. We will take a look at the cone of uncertainty and how to use it to your advantage. We’ll then take a look at the tools we will use for Agile Estimation, including planning poker, Visual Studio TFS and much more.
The document discusses continuous improvement and how making small incremental changes over time can lead to significant improvements. It promotes the idea of continuous improvement as a mindset and provides some practical tips for implementing continuous improvement, including using a continuous improvement board to track problems, desired future states, tasks, and progress. The document also emphasizes that continuous improvement can be applied to any context to help teams and individuals get better through small changes done consistently over time.
Communication @ Funnelll - Doing Remote-First the right way (Funnelll)
We are a remote-first company. This is how we make sure our team can effectively work together to create the last marketing and analytics tool you will ever need!
Building lean products with distributed agile teams (Igor Moochnick)
- The document discusses principles for building lean products with distributed agile teams, emphasizing constant communication, feedback, transparency, and flexibility.
- Key aspects include prioritizing customer needs, continuous integration and deployment, minimizing waste and bureaucracy, and making decisions at the last responsible moment based on ongoing learning.
- Success requires open communication across all teams, with a focus on removing impediments, capturing feedback, and constantly improving through retrospectives.
We get this question a lot, and being open and transparent, we'd like to address it. We have identified four areas that will in all probability create great difficulty for anyone trying to build and launch their own magazine app.
Read all about it at: http://blog.presspadapp.com/what-it-would-be-like-to-build-a-system-for-publishing-magazines-on-mobile-devices/
William Josephson is the co-founder of Solano Labs, which provides automated testing services. Solano Labs has helped several customers significantly reduce their testing times - for example, reducing tests that previously took 18 hours down to 13 minutes. Automated testing allows organizations to deploy code much more frequently and have fewer failures. It reduces costs by finding bugs quicker and allows engineers to make changes more freely.
Leandro Melendez - Switching Performance Left & Right (Neotys_Partner)
Since its beginning, the Performance Advisory Council has aimed to promote engagement between experts from around the world and to create relevant, value-added content sharing between members; for Neotys, it strengthens our position as a thought leader in load and performance testing. During this event, 12 participants convened in Chamonix (France) to explore several topics on the minds of today's performance testers, such as DevOps, Shift Left/Right, Test Automation, Blockchain, and Artificial Intelligence.
This document discusses continuous delivery decision points and environments. It notes that as DevOps shifts focus from build/deploy automation to continuous delivery, test environments are proliferating and need to be provisioned and managed efficiently. It also emphasizes that collaboration around testing is important, and teams must make strategic decisions about promoting code changes to subsequent stages or iterating further. Maintaining properly designed automated and manual tests is key to making these decisions. Organizational culture and leadership must also support empowering developers and teams to deliver changes incrementally.
Capital One transitioned to DevOps by starting with a SWAT team that automated builds, deployments, and infrastructure for two applications. This improved speed and removed handoffs. Challenges included trying to automate everything at once and handoffs when automation was returned to application teams. Key lessons included focusing on automation and API's, reducing handoffs, avoiding silos, and delivering working solutions over perfection.
This document discusses automated testing, continuous integration, and continuous deployment. It highlights how automation can speed up testing from hours to minutes. Continuous integration involves testing every code change to catch bugs early. Continuous deployment uses automation to automatically release validated changes, making software releases a non-event. These practices allow organizations to deploy code much more frequently and with fewer failures.
This talk describes how automation can make life easier for a manual tester. The author shares personal experience and practical advice on how to start learning automation without harming your current project or the testing process as a whole, and explains which programming languages are best suited to specific practical situations. The talk will be of most interest to testers who want to learn automation but don't know how or where to start.
Lec 1 Introduction to Software Engg.pptx (Abdullah Khan)
The document contains questions related to software engineering. It begins by defining software and a computer program. It then discusses why software is important, common problems in software development, and examples of severe consequences of software failures. The document asks about software engineering, the differences between computer science and software engineering, and challenges in the field. It also addresses major activities in software development and sources of inherent complexity. Overall, the document poses questions to introduce various foundational concepts in software engineering.
Top 10 DBA Mistakes on Microsoft SQL Server (Kevin Kline)
From the noted author of SQL in a Nutshell - Microsoft SQL Server is easier to administrate than any other mainstream relational database on the market. But “easier than everyone else” doesn’t mean it’s easy. And it doesn’t mean that database administration on SQL Server is problem free. Since SQL Server frequently grows up from small, home-grown applications, many IT professionals end up encountering issues that others have tackled and solved years ago. Why not learn from those who first blazed the trails of database administration, so that we don’t make the same mistakes over and over again. In fact, wouldn’t you like to learn about those mistakes before they ever happen?
There is a short list of mistakes that, if you know of them in advance, will make your life much easier. These mistakes are the “low hanging fruit” of application design, development, and administration. Once you apply the lessons learned from this session, you’ll find yourself performing at a higher level of efficiency and effectiveness than before.
No reuse without permission. Follow me on social media at kekline and blog at kevinekline.com.
Adapting Scrum in an Organization with Tailored ProcessesPrabhat Sinha
The document discusses challenges with implementing Agile processes in organizations with offshore development teams. Key challenges include lack of face-to-face communication due to time zone differences and cultural barriers, which can lead to misunderstandings and a negative feedback loop. Successful Agile adoption requires strong communication, but remote teams have difficulty communicating effectively. Management commitment and changing metrics to focus on outcomes rather than effort are also important to prevent Agile transformations from failing with offshore teams.
This document discusses scaling a web application, particularly those built with PHP and MySQL. It begins with introductions and then outlines various strategies for scaling applications and databases. For applications, it recommends profiling code and queries to identify bottlenecks, optimizing frameworks, caching, and monitoring. For databases, it suggests technologies like Memcached, database replication using master-slave, sharding, MySQL Cluster, and storage engines. The overall message is that scaling requires understanding applications and systems, identifying pain points, and having a plan to optimize performance as needs grow.
2014-10 DevOps NFi - Why it's a good idea to deploy 10 times per day v1.0Joakim Lindbom
Corporations are struggling with overly complex systems and system landscapes. DevOps is presented as one piece of the puzzle to go for much leaner and simpler landscapes - all in order to increase the readiness for change and innovation.
The presentation also discusses the the basic thought error behind organising according to Design-Build-Run, which is the basis for most ICT IM outsourcing.
Delivered at Machine Translation Summit during a special workshop on post-editing.
November 3rd 2015
Miami, Florida.
In this talk, we describe the latest advances in the world of commercial and academic machine translation development that are having the effect of improving acceptance of the technology and keeping its users happy.
Release software is no less important than activities that precede it.
The Continuous Delivery is a set of practices and methodologies that build an ecosystem for the software development lifecycle.
We will see how to build this ecosystem around the applications developed, for which this release activities becomes a low-risk, inexpensive, fast and predictable.
The document summarizes a presentation about using Zend Server's monitoring and profiling features to diagnose performance problems in applications. It begins with an introduction to the speaker and an overview of the session. It then discusses common issues like slow performance, errors and high memory usage that are difficult to reproduce. The presentation demonstrates how to use Zend Server's monitoring and code tracing to diagnose examples of these types of problems in a sample BeerIOU application. It also covers the performance impact and advantages of Zend Server's monitoring features.
Software testing tools are evolving. More testing frameworks are emerging through the open source community and commercial vendors. In addition, we’re starting to see the rise of machine-learning (ML) and artificial intelligence (AI) in testing solutions.
Given this evolution, it is important to map the tools that match both the practitioners’ skills and their testing types. When referring to the testing practitioners, we mainly look at three different personas:
-The business tester
-The software developer in test (SDET)
-The software developer
These practitioners are tasked with creating, maintaining, and executing unit tests, build acceptance tests, integration, regression, and other nonfunctional tests.
In this webinar led by Perfecto’s Chief Evangelist, Eran Kinsbruner, you will learn the following:
-How should testing types be dispersed among the three personas and throughout the DevOps pipeline?
-What tools should each of these three personas use for the creation and execution of tests?
-What are the key benefits to continuous testing when mapped correctly?
devops, microservices, and platforms, oh my!Andrew Shafer
A story about a boy and his quest to build great software delivered at the Cloud Foundry Summit in Santa Clara May 2015. (https://www.youtube.com/watch?v=rX4mQHPWuUY) Walk through the history of my personal career, and the evolution of the industry highlighting themes like devops, microservices and platforms.
Agile Transformation: People, Process and Tools to Make Your Transformation S...QASymphony
Many companies are currently going through Agile Transformation or thinking about making the transition to agile. While moving to agile can create great opportunity for organizations, the journey to get there can be highly challenging. If you don’t have the right people, process and tools in place, the true benefits of agile may not be recognized. In this webinar, Andrew Stickland, Head of Client Services, for Clearvision and Kevin Dunne, VP of Business Development and Strategy for QASymphony will discuss the best practices for making the agile transformation. In this webinar, we will try to answer the following questions:
- Who are the people I need in place?
- What are the core processes that I need to change?
- What tools do I need?
View the On-Demand webinar here: http://pi.qasymphony.com/agile-transformation-best-practices-webinar-lp060?utm_source=slideshare&utm_medium=slideshare&utm_campaign=Agile%20Transformation%20Webinar
Similar to Maintainability of Configuration Management Code (20)
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdfChart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
Let's Integrate MuleSoft RPA, COMPOSER, APM with AWS IDP along with Slackshyamraj55
Discover the seamless integration of RPA (Robotic Process Automation), COMPOSER, and APM with AWS IDP enhanced with Slack notifications. Explore how these technologies converge to streamline workflows, optimize performance, and ensure secure access, all while leveraging the power of AWS IDP and real-time communication via Slack notifications.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
Skybuffer AI: Advanced Conversational and Generative AI Solution on SAP Busin...Tatiana Kojar
Skybuffer AI, built on the robust SAP Business Technology Platform (SAP BTP), is the latest and most advanced version of our AI development, reaffirming our commitment to delivering top-tier AI solutions. Skybuffer AI harnesses all the innovative capabilities of the SAP BTP in the AI domain, from Conversational AI to cutting-edge Generative AI and Retrieval-Augmented Generation (RAG). It also helps SAP customers safeguard their investments into SAP Conversational AI and ensure a seamless, one-click transition to SAP Business AI.
With Skybuffer AI, various AI models can be integrated into a single communication channel such as Microsoft Teams. This integration empowers business users with insights drawn from SAP backend systems, enterprise documents, and the expansive knowledge of Generative AI. And the best part of it is that it is all managed through our intuitive no-code Action Server interface, requiring no extensive coding knowledge and making the advanced AI accessible to more users.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
leewayhertz.com-AI in predictive maintenance Use cases technologies benefits ...alexjohnson7307
Predictive maintenance is a proactive approach that anticipates equipment failures before they happen. At the forefront of this innovative strategy is Artificial Intelligence (AI), which brings unprecedented precision and efficiency. AI in predictive maintenance is transforming industries by reducing downtime, minimizing costs, and enhancing productivity.
This presentation provides valuable insights into effective cost-saving techniques on AWS. Learn how to optimize your AWS resources by rightsizing, increasing elasticity, picking the right storage class, and choosing the best pricing model. Additionally, discover essential governance mechanisms to ensure continuous cost efficiency. Whether you are new to AWS or an experienced user, this presentation provides clear and practical tips to help you reduce your cloud costs and get the most out of your budget.
Trusted Execution Environment for Decentralized Process MiningLucaBarbaro3
Presentation of the paper "Trusted Execution Environment for Decentralized Process Mining" given during the CAiSE 2024 Conference in Cyprus on June 7, 2024.
3. id clintoncwolfe
• Lead DevOps Consultant at OmniTI Computer Consulting
• Config Management (CM) specialist in Chef and Ansible
• Software Engineering wonk
• Perl web developer in a former life
4. Perl, huh?
• mod_perl developer 1996-2011
• Projects start green, turn brown, hard to maintain
• I became a proponent of maintainability
10-12. Maintainawhat?
The ease with which a system can be interacted with, over time, in order to:
• diagnose problems
• make repairs
• cope with changes in requirements
• maximize useful life
20. Yadda Yadda Yadda…
• lots of other things impact Application maintainability
• performance optimization
• rapidly changing feature requests
• languages, libraries, architectures go out of style
• But I won't get into that stuff
• It's hard
• No good answers
• Not as applicable to CM
This is my high school senior picture, from 1995. I started programming soon after this.
Perl culture in the 90's and early 2000's was all about "There's more than one way to do it" – creative freedom, implicit permission to leverage the full flexibility of the language to do whatever you needed to get done.
That freedom tended to lead to a giant mess, though, if you ever needed anyone else to maintain your code. So a movement sprang up, as in many communities, to try to find best practices.
Today, for those still active in the Perl community, there are modules available that encapsulate some of the ideas. There's nothing stopping you from making a big horrible mess of things, of course.
So that's how the Perl community independently journeyed towards maintainability. But more broadly, what do we mean, when we say "maintainability"?
Software has a strong tendency to become more complex over time. But not all complexity is justifiable.
On the right, we have a Saturn V rocket performing the Apollo 17 mission, which sent 3 people through space (where you can die in many horrible ways), landed 2 of them on the Moon, and brought them all back home safely. Also, this time they brought a little car they could drive around. On the moon.
On the left, we have a device intended to open an umbrella.
Take a moment and consider the sort of goals that your systems have, and whether the amount of complexity involved makes sense.
But why would complexity go up over time? Why would you make things like the umbrella opener?
Things usually start simple. But then you have to glue on a new interface, because the Product team decided on a new integration. Also, now it has to be on mobile, and make toast. No design can anticipate all of these things – nor should they, that causes other problems – so things get glued on haphazardly.
There is a lot of business pressure to ADD functionality, but there is rarely pressure to reduce complexity. As time goes on, some parts of the code are used less and less, but still have to stick around for legacy integrations.
If you don't use your code often, you may find that external elements in the environment have changed to the point that your code is no longer functional.
Nothing in the IT community stands still, and so we find that all code is forced to change. If code is written with the assumption that it will never have to change, because things will naively always remain the same, maintainability will never be a consideration – and if you don't build it to be maintainable, it won't happen by accident.
At the time that you are writing code, your head is deep in the problem – you have a result you want to obtain, and you are aware of the edge cases, and also the things you don't have to worry about. There is a lot of unspoken context in your head. And, as you write, you may also be running the code, gradually improving it until it works, or works well enough.
Someone who comes along later doesn't have the context. They don't know what you tried and what you didn't, and they don't know if you wrote it that weird way out of ignorance, cleverness, or as the only way to make it work because of an unrelated issue.
Have you ever tried to figure out what a piece of code is doing, only to run git blame and discover that you were the one who wrote it, six months ago?
Of course, as time goes on, people move on. Different people have different backgrounds, experiences, and skillsets. Some aspects of the codebase may become incomprehensible to the current maintainer.
In really unhealthy situations, there may be legacy codebases that have complete vacancies – there might be no one who is familiar with the code for extended periods of time. When the code is next maintained, you have to rely on docs and testing, as well as a long discovery process. We have solid docs and great tests on all the old code, though, right?
I've been speaking so far about factors that impact all software development, but especially focusing on things that also impact code used by Operations, like Configuration Management code. There are a lot of other factors out there, which are less applicable.
So, let's look at how things change when we try to start managing all the servers by writing software.
Well, the good news is that, according to the tool vendors, all of this is going to be super easy. Also, their product is easier than the others.
A picture of a baby has never filled me with such rage.
It really doesn't matter how easy the *tool* is to use when the TASK is difficult.
Back in April of this year, there were a pair of posts to ServerFault in which people described accidentally destroying "their entire company" using a similar command line. One purported to be using Ansible to do it. One of them turned out to be a hoax; the other, sadly, did not.
I won't dive into the details here, but I just wanted to point out that while application developers are working with things like inventory counts, ecommerce, etc., the subject matter of that code is typically more recoverable. An infrastructure coder, on the other hand, is literally creating and destroying servers, load balancers, etc. Not only can you destroy the data, you can destroy the server, and the backup server, too.
And what do greater consequences have to do with maintainability? If code is scary to change, you'll be afraid to change it. Code you're afraid to touch is code you can't maintain.
So, naturally, we'd expect the people doing this work to have special training to ensure they are able to do the work safely, right?
The people typically assigned to do CM work – who are often championing its adoption – usually come from an ops background. That's fantastic – they are exactly the right people, with the right enthusiasm and subject-matter expertise.
But as soon as you start writing infrastructure code, you're not a sysadmin anymore – you're a developer. How many years of experience do you have as a developer? In particular, how strong are your instincts around input validation, edge cases, and test fixtures?
Did you know there are people who know about such things? They're called Developers and QA Engineers. You can ask them for help. You've got a great relationship with them, right?
One of the secrets QA Engineers know is that you don't have to test things against reality. Instead, you build facilities to simulate inputs and capture outputs, and isolate the components that you need to verify. When that component is a library file or a software module, those "test fixtures" are fairly straightforward.
When the object you're testing is an entire configured webserver – with provisioned hardware, installed OS, configured services, middleware, and a deployed application, it's less obvious. So, many first-time config management coders simply do the obvious thing, and test against the real thing – possibly even production.
Once you start using CM for the obvious things – like installing and configuring a service – your imagination takes flight. What else could we do with this powerful tool? Now that I'm spending less time doing boring, low-value tasks, perhaps I can use my basic skills in CM to automate something more difficult. Maybe adding and removing people from LDAP. Or, instead of configuring machines, maybe we could create them? And the networks between them? Oh, we could create entire staging environments! And snapshot production data into the staging environments for testing! And what about scale testing? Oh, I bet we could automate database failover!
Some of those are good ideas, and some of them are terrible ideas. Don't do failover with CM.
OK, so I've been talking – a lot – about the various factors that make software maintainability a hard goal, and CM code maintainability a particularly nasty problem. My hope is that you'll be able to apply those factors to your own situation, and come up with some specific ideas for improvement.
That said, here are some general things you can do.
We'll start off with a nice, easy tip: write useful comments.
You can comment in nearly all formats in use these days – except, sigh, JSON.
The advice here isn't just to write comments, though. People tend to write comments that say what they're doing, which is silly – the code is the part that actually takes the actions. A description of this image – skeleton, cat, french horn, spooky girl with plumes coming out of her ears – does not explain why these elements are together.
Instead, try to make comments that reflect why the task needs to be done. In six months, or in the hands of another teammate, having insight into your intent is much more valuable than insight into your implementation.
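In a Chef recipe, the contrast might look something like this (the NTP/Kerberos rationale is an invented example, not from the talk):

```ruby
# Unhelpful: restates what the resource already says.
# Install the ntp package
package 'ntp'

# Helpful: records *why*, which the code cannot say for itself.
# Kerberos auth starts failing once clock skew exceeds five minutes,
# so every node must keep its time synced.
package 'ntp'
```

The second comment survives a rewrite of the implementation; the first is dead weight the moment the resource changes.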
One of the most basic things you can do to increase safety (and thus make it safer to run and maintain your code) is to add validation checks. If you rely on certain variables to be set – like, which environment to be targeting – you can check for those variables and ensure they have a sane value.
This is used a lot in aerospace. Above, you see a section of the Space Shuttle's control panel. When the shuttle launched, the main engines – the 3 on the back of the orbiter itself – would start up 6 seconds before the solid rocket boosters. Once the SRBs were ignited, the craft was committed to a liftoff attempt, but until then, they could run tests on the engines and abort on the pad. How many times do you think that happened? 6. Not a big deal.
This really isn't optional. When you were writing Bash scripts, validation was kind of hard, and an afterthought if anything. But you're now using much more expressive languages, which can keep you – and especially your teammates – from causing great harm. Validation can also make the code somewhat self-documenting.
The key point is to perform the validations before you take any actions. If anything fails validation, you will be able to give a helpful message, and abort the run before any harm is done.
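A minimal fail-fast sketch in plain Ruby (the method and environment names are illustrative, not from any particular CM tool): every input is checked before any action runs, and a bad value aborts with a helpful message.

```ruby
# Validate-before-act: check all inputs up front, abort on the first bad one.
ALLOWED_ENVS = %w[dev staging production].freeze

def validate_target!(env)
  return env if ALLOWED_ENVS.include?(env)

  # Abort with a helpful message before any harm is done.
  raise ArgumentError,
        "Unknown environment '#{env}'; expected one of: #{ALLOWED_ENVS.join(', ')}"
end

validate_target!('staging')  # passes
# validate_target!('prod')   # would abort the run before any action is taken
```

Note that the check also documents the contract: a reader can see at a glance which environments the code expects.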
Related to variable validation, you may be thinking about where to get those variable values. Both Ansible and Chef provide multi-tiered, override-driven defaulting systems.
They are both completely infuriating, in their own ways. Chef is way, way too complex, offering too many levels and mechanisms, and a two-pass execution model that makes dynamically setting variables really dangerous. Ansible is simpler, but buggier and poorly documented; while each of its mechanisms always has the same relative priority, your context constantly shifts as you move between tasks and plays, and dynamically setting values doesn't work as expected either.
The long and short of it is that each org needs to identify the simplest possible approach to this that works for them, and then never, ever vary from that.
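As one purely illustrative convention for a Chef shop: declare every attribute exactly once, at the default level, in the cookbook's attributes file, and allow overrides only from environment files.

```ruby
# attributes/default.rb -- a sketch of a "one level only" convention.
# Everything lives at the 'default' level; environments may override,
# and nothing else (no normal, no node.override) is permitted.
default['myapp']['port']    = 8080
default['myapp']['workers'] = 4
```

The specific rule matters less than having exactly one, written down, and enforced.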
So how can you make sure that – for example – someone isn't using twelve kinds of attribute definitions in a Chef cookbook? I guess you could grep for it, right?
Well, it turns out there are much better tools than grep. Developers have been using specialized search tools – called "linters", because they help you pick the lint off – that search through your code (without running it) and apply a bunch of different rules. Some rules might look for formatting issues, some might look for bad ideas of various kinds, but most linters also allow you to write custom rules – so you can make a rule that says where you want Chef attributes (or Ansible variables) defined.
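In practice you would write such a rule for your linter of choice, but the core idea fits in a few lines of Ruby. This do-it-yourself sketch bans one pattern (the deprecated node.set, chosen here only as an example rule) and reports file and line, the way a real linter would:

```ruby
# A toy lint rule: flag any use of node.set so only node.default survives review.
BANNED_PATTERN = /node\.set\b/

def lint(source, filename = 'recipes/default.rb')
  offenses = []
  source.each_line.with_index(1) do |line, lineno|
    if line.match?(BANNED_PATTERN)
      offenses << "#{filename}:#{lineno}: use node.default instead of node.set"
    end
  end
  offenses
end

recipe = "node.set['app']['port'] = 8080\nnode.default['app']['user'] = 'deploy'\n"
puts lint(recipe)
# → recipes/default.rb:1: use node.default instead of node.set
```

A real linter adds parsing, configuration, and dozens of built-in rules, but every rule reduces to this shape: a pattern, a location, a message.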
Did you know that testing is easy? You can use tools like InSpec or Serverspec to write super-simple tests. Things like "apache should be installed" or "port 80 should be listening". The goal is to have a small set of smoke tests. If they pass, there might still be something horribly wrong, but if they fail, you know there is, and you can start addressing it.
HAVING tests is more important than writing tests, though. When you have tests, your code becomes more maintainable, because you can make a change, run the tests, and verify that you didn't break any existing functionality. That's a huge confidence booster.
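Using InSpec's built-in package and port resources, the two smoke tests mentioned above look roughly like this (the package name "apache2" is an assumption; it varies by platform):

```ruby
# InSpec smoke-test sketch: two checks that catch gross breakage fast.
describe package('apache2') do
  it { should be_installed }
end

describe port(80) do
  it { should be_listening }
end
```

That is the whole file; the barrier to entry really is that low.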
So where are you going to run these tests?
Please don't do it on anything you care about, like a QA environment or (shudder) production. The best workflows allow you to create temporary machines, run your new code, and destroy them, without waiting on anyone's permission. IaaS systems like AWS, GCE, or even local Vagrant/VirtualBox setups can all do this.
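Test Kitchen is the usual way to wire this together in the Chef world. A sketch of a .kitchen.yml as it looked around 2016 (the cookbook name myapp is a placeholder): spin up a throwaway VM, converge the cookbook, run the InSpec verifier, destroy the VM.

```yaml
# .kitchen.yml -- ephemeral test machines, created and destroyed on demand.
driver:
  name: vagrant          # or an IaaS driver such as ec2 / gce
provisioner:
  name: chef_zero
verifier:
  name: inspec
platforms:
  - name: ubuntu-16.04
suites:
  - name: default
    run_list:
      - recipe[myapp::default]   # 'myapp' is a placeholder cookbook name
```

Nothing you care about is ever touched; a failed run costs you a disposable VM and a few minutes.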
So I've mentioned linters, and testing, and getting set up with an ephemeral testing system, and so on. That all sounds like a pain to set up.
Well, it turns out that there are kits that contain all of that and more. ChefDK and AnsibleDK are both kits that contain all of those things. You just download them, install as an OS package, and a whole bunch of tools just magically work together. You don't have to invent that, just go download them.
Included in both AnsibleDK and ChefDK is a code generator. A code generator is a thing that you run when you start a project, that lays out all the files, creates a bunch of small integrations (like setting up your git hooks and your IaaS credentials), and so on. They are pretty flexible, so you can customize them to your team's needs.
The big advantage here is that you can use this to level-set how projects start. When you ask a junior engineer to make a new cookbook by running chef generate cookbook, and you know it will generate a project that already has the linter plugged in and Test Kitchen set up, you know that they are going to start off having tools telling them when they are doing something wrong, regardless of their previous experience.
Importantly, it also means that your projects will all at least start off looking alike, which makes it easier to transfer knowledge between projects.
Assuming you're using version control that supports a merge-request workflow – which anything git-based certainly does – you have the opportunity to use tools like GitHub pull requests. If this is your first exposure to this, it may seem like it is all about control: who gets to accept things into the codebase. But more important in the long term is the ability to collaborate and communicate on proposed changes using comments in a web UI. This has a number of positive effects: more people on the team know what is going on; you have a record of the thought processes that went into design decisions; and junior engineers can ask questions.
Some teams go so far as to have junior team members explain changes to the rest of the team. If the change isn't understandable, it needs to be simplified. Over time, the code becomes clearer, everyone understands that they are writing for people, not computers, and skill levels increase.
A key difference between application code and configuration management code is its expected lifetime. While app code may be in service for several years or even decades, CM code – even the oldest – has only been in use for 2-3 years. The code base is also typically much smaller. You have much more ability to simply start over from scratch. Your first CM project may be a disaster; but you can scrap it and start over with lessons learned pretty easily.
Increasingly, though, the need for true configuration management is starting to go away. The era of long-lived machines that we carefully manage through a long life of incremental changes – which is what CM was designed for – is tapering off. If you are using a container-based approach, you might use CM to build the initial image; or you might use something simpler like Packer, since you don't need the idempotency features of a CM tool. If your application developers are moving on to something like Serverless, you may find that there is no need at all for CM – it may just be used for building out supporting infra.
Every tool has an era. The Chef code we wrote in 2012 is very different than the code we write today. I don't know what CM code we'll be writing in 2020, but I imagine it will be different, or less.