Examining your organizational structure is key to removing the organizational inertia around an Agile/Scrum initiative. In this presentation we look at an ideal 'fairytale' organization as a dot on the horizon.
The document discusses how software architecture is currently static and difficult to change, but emerging technologies could enable dynamic, self-adaptive systems. By combining infrastructure APIs, big data analytics, and a proposed "architecture API", systems could monitor and adjust their structure and behavior in response to changing conditions. This would allow systems to automatically scale resources, replace components, and reconfigure in an intelligent way. The speaker provides an example of a risk estimation system adapting its architecture based on analytics. Overall, the talk argues that cross-disciplinary research is needed to develop first-class runtime models of software architecture.
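The monitor-and-adjust loop the summary describes can be sketched as a minimal control loop. This is purely illustrative: the `ArchitectureAPI` class, its `scale_workers` operation, and the latency thresholds are hypothetical stand-ins for the "architecture API" the talk proposes, not an actual implementation from the presentation.

```python
class ArchitectureAPI:
    """Toy stand-in for the proposed first-class runtime model of the architecture."""

    def __init__(self, workers=2):
        self.workers = workers

    def scale_workers(self, n):
        # never scale below one worker
        self.workers = max(1, n)


def adapt(api, observed_latency_ms, target_latency_ms=200):
    """Analyze a monitored metric and adjust the architecture in response."""
    if observed_latency_ms > target_latency_ms:
        # under pressure: add capacity
        api.scale_workers(api.workers + 1)
    elif observed_latency_ms < target_latency_ms / 2 and api.workers > 1:
        # comfortably under target: reclaim capacity
        api.scale_workers(api.workers - 1)
    return api.workers
```

In a real self-adaptive system this loop would run continuously against live analytics rather than a single observation.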
Today’s mainstream acceptance of Agile+DevOps as the preferred way of working once again raises questions of what architecture work is and who does it. It simultaneously challenges much of our previously accepted wisdom, preferring architecture to be a “shared commons” across the development organisation, while demanding a sophisticated level of software architecture practice to deliver on the promises of Agile+DevOps.
One way of describing this situation is the need to “democratise” software architecture so it becomes a shared responsibility rather than a centralised impediment to rapid delivery. In this talk we’ll examine the challenges of software architecture in today’s modern distributed teams and ask how we might make the architecture of their systems a shared responsibility to allow them to achieve the software architecture that they need at the speed that they need it.
A fast-paced review of blockchain technology, applications, architectural characteristics and programming, using Ethereum as the main example.
Presented at the JAX London 2017 conference.
Models, Sketches and Everything In Between (Eoin Woods)
Just the mention of the word “modeling” brings back horrible memories of analysis paralysis for many software developers. As a result of the conventional wisdom around Agile development that modeling is usually waste, countless software teams have completely abandoned modeling their systems. The problem is that there is a lot of design information that isn’t in the code, and without any models this information can get lost. Over time, the team ends up with a “big ball of mud.”
In this talk we explain what modeling brings to the development process and its value in different situations, discussing the different levels of formality available, from models to sketches and everything in between. Along the way, we share real-world advice on how a little well-chosen modeling can help avoid chaos.
Shadow IT refers to technology activities that are not controlled by a company's official centralized IT function. It is driven by factors like competitive pressures, a desire for independence and innovation, and the availability of cloud services. There are three main types: practice-driven efforts within business units, rogue projects intentionally hidden from IT, and purpose-driven activities by individuals. While shadow IT can improve agility and productivity, it also duplicates infrastructure, poses compliance risks, and lacks IT support. The document recommends that companies learn from shadow IT to understand real technology needs, consolidate duplicative activities, cooperate with shadow IT groups where possible, and enable independent technology use through APIs while managing security and compliance.
This document discusses the costs of managing IT in-house versus outsourcing to an MSP. It notes common costs like hardware, software, downtime and data loss. It also discusses hidden costs like using unqualified staff and less productivity. The presentation cites a Dell CIO saying small businesses often overlook long-term ownership costs by focusing only on short-term purchase savings. Finally, it outlines the benefits an MSP provides through needs assessment, implementation, and ongoing management including proactive maintenance, minimized downtime, and predictable budgets.
Digitization solutions - A new breed of software (Uwe Friedrichsen)
This slide deck is about the challenges we face when dealing with digitization solutions. As the term is currently massively overused, I first introduce a very simple definition of what I mean by "digitization solution" in the context of this presentation.
Afterwards, I list the challenges - at least the most relevant ones - that arise from moving into the digitization solution domain. Based on that, I try to examine the trends, prerequisites and limitations that confront you from an IT point of view and that you need to adapt to if your company is confronted with digitization. Last, but not least, I try to derive some practical hints for how we as individuals can prepare for such an environment.
As always, the voice track is missing, but I hope also the slides on their own bear some value for you.
This document discusses best practices for team-based database development using version control. It emphasizes that all database code and configuration should be stored in version control for collaboration and risk reduction. The document recommends standards for naming conventions, coding styles, and development processes. It also demonstrates how to configure tools to support automation and efficient workflow within development teams. Effective communication, coordination, and adherence to source control principles are key to smooth team collaboration on database projects.
Dmitriy Desyatkov "Secure SDLC or Security Culture to be or not to be" (WrikeTechClub)
Sooner or later, every company starts thinking about both the security of its product and its internal security, and this inevitably leads to building security processes, standards, requirements and policies. This process is quite complex and labor-intensive, requiring a certain maturity of the company and coordinated work by all employees. We would like to talk about our experience of building a security culture at Wrike, including with the help of the product we make. We will also share our experience of solving real security problems that we or our customers encounter.
John Whitney has over 25 years of experience in IT and information security. He is currently a Security Analyst at Edward Jones Investments where he has been the Symantec Endpoint Protection Manager (SEPM) Administrator for three years. In this role, he is responsible for maintaining SEPM across production, test, and development environments. Previously, he held security roles at other companies where he administered security tools like firewalls, antivirus software, and intrusion detection systems. He has a Master's degree in Information Security and Assurance and several security certifications.
Complete coverage of CISSP 7th Chapter - Security Operations. I have made sure to cover all topics from three books in this presentation. For corrections, clarifications, please feel free to reach me.
This keynote was presented by Rebecca Wirfs-Brock at Explore DDD 2017.
The ouroboros (οὐροβόρος in the original Greek) is an image or archetype of a serpent shaped into a circle, clinging to or devouring its own tail in an endless cycle of self-destruction, self-creation, and self-renewal. Becoming a good software designer sometimes feels like that.
Over time, we build up our personal toolkit of design heuristics. To grow as designers, we need to do more than simply design and implement working software. We need to examine and reflect on our work, put our own spin on the advice of experts, and continue to learn better ways of designing.
This is basically a "lessons learned" talk. Having dealt with resilient software design for several years now, I realized along the way that implementing a specific pattern like timeout detection, circuit breaker or back-pressure is the smallest of the challenges.
As so often in software development, the actual pitfalls that keep you from being successful with your project - here, creating a robust application - are not to be found in writing the code itself. Based on my experience, the actual pitfalls lie in areas that are at best loosely related to resilient software design.
In this talk, I discuss some of those pitfalls that I have experienced more than once along my way. It starts with not understanding the goals of resilient software design, continues with a lack of understanding of the characteristics of distributed systems, missing required feedback loops and deficiencies in functional design, moves on to not understanding the trade-offs of applying resilience patterns, and ends with the problem of our continuous collective insight loss.
The main objective of the talk is to raise awareness of these pitfalls. Wherever possible I also added some suggestions for how to deal with them. Unfortunately, some topics have neither an obvious nor a simple solution - at least none that I know of.
As always, the voice track is missing, and with it a huge part of the content of the talk. Yet I hope the slides in themselves are of some use to you and offer some helpful ideas and pointers.
Security and Software Engineering BSides St. John's 2017 (Peter Rawsthorne)
Traditionally, security has been an afterthought in software engineering. Security becomes important only as the deadline for moving software into the production environment approaches, and in many situations it only makes it into production because an executive owns the risk and makes it happen. It doesn't have to be this way: with disciplined DevOps complemented by good project management practices, we can ensure security isn't an afterthought and that the software solution follows the organization's security policies.
Jan de Vries - Becoming antifragile is more important than ever in disruptive... (matteo mazzeri)
Have you ever wondered why DevOps, Continuous Deployment, canary releases, microservices, chaos engineering and reducing technical debt work so well? Why do they work at all? These and many other concepts all have one thing in common: they are affected by a hidden force: antifragility.
DevOps Security Coffee - Lazy hackers who think out of the box, but stay in t... (Freek Kauffmann)
How to create a constructive force field between DevOps engineers and hackers?
NOTE: Slide 4 ('Vision on IT Security') has been altered in hindsight.
For questions, please contact me directly: +316 457 61 857
This document discusses implementing a secure software development lifecycle (SDLC) to improve application security. It outlines why the traditional approach of only involving security experts does not work. Instead, it proposes integrating security practices throughout each phase of the development process, including requirements, design, implementation, verification, and release. This includes training developers, conducting threat modeling and security testing, using security tools in continuous integration, and analyzing results to address issues early. The goal is to reduce security defects over time by changing developer mindsets and integrating security as applications are built.
Beyond the Scan: The Value Proposition of Vulnerability Assessment (Damon Small)
Vulnerability Assessment is, by some, regarded as one of the least “sexy” capabilities in information security. However, it is the presenter’s view that it is also a key component of any successful infosec program, and one that is often overlooked. Doing so serves an injustice to the organization and results in many missed opportunities to help ensure success in protecting critical information assets. The presenter will explore how Vulnerability Assessment can be leveraged “Beyond the Scan” and provide tangible value to not only the security team, but the entire business that it supports.
Unleashing the power of machine learning for IT ops management (Jason Bloomberg)
Now that virtualization is a must-have across all modern IT shops, data center operations require comprehensive, real-time insights in order to manage these high-performance production environments.
First-generation operational analytics tools fall short. They are based on static or some form of dynamic thresholds derived from trending and averaging analytical approaches. But in today’s dynamic, high-velocity environments, false positives are far too common and important information is lost in the noise.
Next generation analytic tools should leverage self-learning models powered by machine learning in order to deliver personalized, faster, more accurate operational insights.
Some emerging tools are leveraging self-learning models to identify anomalies in individual object behaviors based on reported statistics. However, capturing anomalies in individual metrics alone is not sufficient, for two major reasons:
• First, it does not capture the interplay between the data and therefore cannot learn those behaviors (such as IOPS, latencies, CPU).
• Second, it does not provide actionable insight, but rather identifies some anomaly that may or may not require attention.
Learn how a next-generation approach addresses this gap using machine learning principles that incorporate the topology and the interplay between the data to derive root causes, in order to identify actual issues and provide meaningful recommendations.
Join industry analyst Jason Bloomberg, president of Intellyx, who will discuss the challenges facing IT operations today and why advanced machine learning, graph technology, and topological data analysis are now critical elements needed by today’s IT operations analytics technology. Next, Jim Shocrylas, Product Manager, SIOS Technology will explain how next-generation machine learning is enabling IT analytics tools to help IT managers resolve performance issues, ensure resource optimization, and meet service level agreements for mission critical applications in VMware environments. He will then walk through common use cases for the practical application of this advanced technology.
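The idea of incorporating topology to derive root causes can be illustrated with a small sketch: instead of flagging every anomalous metric, walk the dependency graph and report only anomalies that have no anomalous upstream dependency. The graph shape and the notion of "anomalous" here are hypothetical illustrations, not the vendor's actual algorithm.

```python
def find_root_causes(anomalous, depends_on):
    """Filter a set of anomalous components down to likely root causes.

    anomalous:  set of component names whose metrics breached their model
    depends_on: dict mapping a component to the components it relies on
    """
    roots = set()
    for comp in anomalous:
        upstream = depends_on.get(comp, [])
        # a component is a candidate root cause if none of its
        # dependencies is itself anomalous
        if not any(dep in anomalous for dep in upstream):
            roots.add(comp)
    return roots
```

For example, if an application, its database, and the VM hosting the database all look anomalous, only the VM is reported, since the other two anomalies are plausibly downstream effects.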
DevSecCon Asia 2017 Arun N: Securing ChatOps (DevSecCon)
This document discusses securing ChatOps workflows. It begins by introducing ChatOps and how the architecture works, with chat apps and bots playing big roles. Hubot is highlighted as a popular bot option. Typical CI/CD workflows are shown integrating with chat notifications. Risks of potential loopholes are discussed when using ChatOps. The document focuses on plugging these loopholes by implementing two-factor authentication, restricting access via hardware/software tokens, defining user roles, limiting access across multiple chat systems/rooms, and setting fine-grained IAM policies for bots running on platforms like AWS.
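The role-and-2FA gating the summary mentions can be sketched in a few lines. The user table, command names and policy below are made up for illustration; a real bot would look these up from an identity provider and enforce them per chat room.

```python
# hypothetical role and policy tables for a ChatOps bot
ROLES = {"alice": "admin", "bob": "developer"}
COMMAND_POLICY = {
    "deploy": {"admin"},                 # sensitive: admins only
    "status": {"admin", "developer"},    # read-only: broader access
}


def authorize(user, command, two_factor_ok):
    """Allow a chat command only if the user's role permits it and the
    session has passed a second authentication factor."""
    if not two_factor_ok:
        return False
    role = ROLES.get(user)
    return role in COMMAND_POLICY.get(command, set())
```

The same check pattern extends naturally to per-room restrictions and to scoping the bot's own cloud credentials with fine-grained IAM policies.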
Some of the most famous information breaches over the past few years have been a result of entry through embedded and IoT system environments. Often these breaches are a result of unexpected system architecture and service connectivity on the network that allows the hacker to enter through an embedded device and make their way to the financial or corporate servers. Experts in embedded security discuss key security issues for embedded systems and how to address them.
- Mike Slinn is an expert in evaluating blockchain and technology companies through technical due diligence to assess risks and opportunities for investors and startups.
- He has extensive experience advising companies on technology strategy, product development, and organizational structure to prepare them for investment or acquisition.
- His evaluations are tailored to each company and situation, and can range from quick assessments to multi-week engagements involving on-site reviews and written reports with recommendations.
How to start as an IT system analyst
How does a system analyst work?
What roles does a system analyst take on in a company (startup, corporate)?
What skills must a system analyst have?
Want to be a system analyst? Join our course at www.gaivo-systemworks.com
The document discusses the four pillars of DevOps at Hiscox: culture, process, people, and technology. It describes how each pillar is implemented at Hiscox, including establishing cross-functional teams, shared goals and incentives, emphasis on automation and continuous integration/delivery, and measuring everything. The goals are to break down silos, increase agility and the pace of change, and align teams to work together seamlessly across the software delivery lifecycle. A Platform Services Group is also discussed which helps standardize processes and tools to further enable DevOps practices across teams.
Cyber Scotland Connect: What is Security Engineering? (Harry McLaren)
Harry McLaren is a managing consultant at ECS who gives a presentation on cybersecurity engineering. Cybersecurity engineering involves building systems, deploying configurations, integrating systems, and developing solutions to protect against, detect, and respond to threats. It is important for engineering projects to consider people, process, technology, the end user, support requirements, and how the solution fits within the business and IT strategies. The presentation provides examples of scenario walkthroughs and best practices for engineers, such as using automation, version control, containers, and cloud technologies.
This document discusses the costs of managing IT in-house versus outsourcing to an MSP. It notes common costs like hardware, software, downtime and data loss. It also discusses hidden costs like using unqualified staff and less productivity. The presentation cites a Dell CIO saying small businesses often overlook long-term ownership costs by focusing only on short-term purchase savings. Finally, it outlines the benefits an MSP provides through needs assessment, implementation, and ongoing management including proactive maintenance, minimized downtime, and predictable budgets.
Digitization solutions - A new breed of softwareUwe Friedrichsen
This slide deck is about the challenges we have to face if we deal with digitization solutions. As this term currently is massively overused, I first introduce a very simple definition to define what I mean with "digitization solution" in the context of the presentation.
Afterwards, I list the challenges - at least the most relevant ones - that arise from moving into the digitzation solution domain. Based on that, I try to examine the trends, prerequisites and limitations that you are confronted with from an IT point of view and you better need to adapt to if you are confronted with digitization in your company. Last, but not least, I try to derive some practical hints for us as individuals, how we can prepare for such an environment.
As always, the voice track is missing, but I hope also the slides on their own bear some value for you.
This document discusses best practices for team-based database development using version control. It emphasizes that all database code and configuration should be stored in version control for collaboration and risk reduction. The document recommends standards for naming conventions, coding styles, and development processes. It also demonstrates how to configure tools to support automation and efficient workflow within development teams. Effective communication, coordination, and adherence to source control principles are key to smooth team collaboration on database projects.
Dmitriy Desyatkov "Secure SDLC or Security Culture to be or not to be"WrikeTechClub
Рано или поздно любая компания задумывается как о безопасности своего продукта, так и внутренней безопасности, и это неизбежно ведет к выстраиванию security-процессов, стандартов, требований и политик. Этот процесс довольно сложный и трудоемкий, требующий определенной зрелости компании и слаженной работы всех сотрудников. Мы хотели бы рассказать о своем опыте создания security-культуры компании Wrike, в том числе с помощью продукта, который мы делаем. Также мы поделимся опытом решения реальных проблем безопасности, с которыми сталкиваемся сами или наши клиенты.
John Whitney has over 25 years of experience in IT and information security. He is currently a Security Analyst at Edward Jones Investments where he has been the Symantec Endpoint Protection Manager (SEPM) Administrator for three years. In this role, he is responsible for maintaining SEPM across production, test, and development environments. Previously, he held security roles at other companies where he administered security tools like firewalls, antivirus software, and intrusion detection systems. He has a Master's degree in Information Security and Assurance and several security certifications.
Complete coverage of CISSP 7th Chapter - Security Operations. I have made sure to cover all topics from three books in this presentation. For corrections, clarifications, please feel free to reach me.
This keynote was presented by Rebecca Wirfs-Brock at Explore DDD 2017.
The ouroboros (οὐροβόρος in the original Greek) is an image or archetype of a serpent shaped into a circle, clinging to or devouring its own tail in an endless cycle of self-destruction, self-creation, and self-renewal. Becoming a good software designer sometimes feels like that.
Over time, we build up our personal toolkit of design heuristics. To grow as designers, we need to do more than simply design and implement working software. We need to examine and reflect on our work, put our own spin on the advice of experts, and continue to learn better ways of designing.
This is basically a "lessons learned" talk. While dealing with resilient software design for several years meanwhile, I realized along the way that implementing a specific pattern like timeout detection, circuit breaker, back-pressure, etc. is the smallest of the challenges.
As so often in software development, the actual pitfalls that keep you from being successful with your project - here, creating a robust application - are not to be found in the area of creating code. Based on my experiences, the actual pitfalls are to be found in areas that are at best loosely related to resilient software design.
In this talk, I discuss some of those pitfalls that I have experienced more than once along my way. It starts with not understanding the goals of resilient software design, continues from a lack of understanding the characteristics of distributed system, over missing required feedback loops and deficiencies in functional design, to not understanding the trade-offs of applying resilience patterns, and ends with the problem of our continuous collective insight loss.
The main objective of the talk is to sensitize for the pitfalls. Wherever possible I also added some suggestions how to deal with the topics. Unfortunately, some topics neither have an obvious nor a simple solutions - at least none that I would know about ...
As always the voice track is missing and thus a huge part of the content of the talk. Yet, I hope the slides in themselves are of some use for you and offer some helpful ideas and pointers.
Security and Software Engineering BSides St. John's 2017Peter Rawsthorne
Traditionally security has been an afterthought for software engineering. Security becomes important only as the deadline for software going into the production environment approaches. And in many situations only makes it into production due to an executive owning the risk and making it happen. It doesn't have to be this way, with disciplined DevOps complimented with good project management practices we can ensure security isn't an afterthought and the software solution follows the organizations security policies.
Jan de Vries - Becoming antifragile is more important than ever in disruptive...matteo mazzeri
Have you ever wondered why DevOps, Continuous Deployment, canary releases, microservices, chaos engineering and reducing Technical Debt work so well? Why it works at all? These and many other concepts all have one thing in common. They are affected by a hidden force: antifragility.
DevOps Security Coffee - Lazy hackers who think out of the box, but stay in t...Freek Kauffmann
How to create a constructive force field between DevOps engineers and hackers?
NOTE: Slide 4 ('Vision on IT Security') has been altered in hindsight.
For questions, please contact me directly: +316 457 61 857
This document discusses implementing a secure software development lifecycle (SDLC) to improve application security. It outlines why the traditional approach of only involving security experts does not work. Instead, it proposes integrating security practices throughout each phase of the development process, including requirements, design, implementation, verification, and release. This includes training developers, conducting threat modeling and security testing, using security tools in continuous integration, and analyzing results to address issues early. The goal is to reduce security defects over time by changing developer mindsets and integrating security as applications are built.
Beyond the Scan: The Value Proposition of Vulnerability AssessmentDamon Small
Vulnerability Assessment is, by some, regarded as one of the least “sexy” capabilities in information security. However, it is the presenter’s view that it is also a key component of any successful infosec program, and one that is often overlooked. Doing so serves an injustice to the organization and results in many missed opportunities to help ensure success in protecting critical information assets. The presenter will explore how Vulnerability Assessment can be leveraged “Beyond the Scan” and provide tangible value to not only the security team, but the entire business that it supports.
Unleashing the power of machine learning for it ops managementJason Bloomberg
Now that virtualization is a must-have across all modern IT shops, data center operations require comprehensive, real-time insights in order to manage these high-performance production environments.
First-generation operational analytics tools fall short. They are based on static or some form of dynamic thresholds derived from trending and averaging analytical approaches. But in today’s dynamic, high-velocity environments, false positives are far too common and important information is lost in the noise.
Next generation analytic tools should leverage self-learning models powered by machine learning in order to deliver personalized, faster, more accurate operational insights.
While some emerging tools are leveraging self-learning models to identify the anomalies in the individual object behaviors based on reported statistics. However capturing the anomalies alone of individual metrics is not sufficient enough for two major reasons:
• First, it does not capture the interplay between the data and therefore learn those behaviors (such as IOPS, latencies, CPU).
• Second, it does not provide the actionable insight but rather identifies some anomaly that may or may not require attention.
Learn now next generation approach is addressing this gap using machine learning principals that incorporate the notion the topology and the interplay between the data to to derive root causes in order to identify actual issues and provide meaningful recommendations.
Join industry analyst Jason Bloomberg, president of Intellyx, who will discuss the challenges facing IT operations today and why advanced machine learning, graph technology, and topological data analysis are now critical elements needed by today’s IT operations analytics technology. Next, Jim Shocrylas, Product Manager, SIOS Technology will explain how next-generation machine learning is enabling IT analytics tools to help IT managers resolve performance issues, ensure resource optimization, and meet service level agreements for mission critical applications in VMware environments. He will then walk through common use cases for the practical application of this advanced technology.
DevSecCon Asia 2017 Arun N: Securing chatopsDevSecCon
This document discusses securing ChatOps workflows. It begins by introducing ChatOps and how the architecture works, with chat apps and bots playing big roles. Hubot is highlighted as a popular bot option. Typical CI/CD workflows are shown integrating with chat notifications. Risks of potential loopholes are discussed when using ChatOps. The document focuses on plugging these loopholes by implementing two-factor authentication, restricting access via hardware/software tokens, defining user roles, limiting access across multiple chat systems/rooms, and setting fine-grained IAM policies for bots running on platforms like AWS.
Some of the most famous information breaches over the past few years have been a result of entry through embedded and IoT system environments. Often these breaches are a result of unexpected system architecture and service connectivity on the network that allows the hacker to enter through an embedded device and make their way to the financial or corporate servers. Experts in embedded security discuss key security issues for embedded systems and how to address them.
- Mike Slinn is an expert in evaluating blockchain and technology companies through technical due diligence to assess risks and opportunities for investors and startups.
- He has extensive experience advising companies on technology strategy, product development, and organizational structure to prepare them for investment or acquisition.
- His evaluations are tailored to each company and situation, and can range from quick assessments to multi-week engagements involving on-site reviews and written reports with recommendations.
How to start as an IT system analyst
How does a system analyst work?
What roles does a system analyst take on in a company (startup or corporate)?
What skills must a system analyst have?
Want to be a system analyst? Join our course at www.gaivo-systemworks.com
The document discusses the four pillars of DevOps at Hiscox: culture, process, people, and technology. It describes how each pillar is implemented at Hiscox, including establishing cross-functional teams, shared goals and incentives, emphasis on automation and continuous integration/delivery, and measuring everything. The goals are to break down silos, increase agility and the pace of change, and align teams to work together seamlessly across the software delivery lifecycle. A Platform Services Group is also discussed which helps standardize processes and tools to further enable DevOps practices across teams.
Cyber Scotland Connect: What is Security Engineering?Harry McLaren
Harry McLaren is a managing consultant at ECS who gives a presentation on cybersecurity engineering. Cybersecurity engineering involves building systems, deploying configurations, integrating systems, and developing solutions to protect against, detect, and respond to threats. It is important for engineering projects to consider people, process, technology, the end user, support requirements, and how the solution fits within the business and IT strategies. The presentation provides examples of scenario walkthroughs and best practices for engineers, such as using automation, version control, containers, and cloud technologies.
Puppet Camp Austin 2015: Getting Started with PuppetPuppet
This document provides an overview of getting started with Puppet. It discusses setting goals for Puppet implementation, understanding key concepts and vocabulary, developing modules, testing code, and sharing modules. The document emphasizes keeping implementations simple, safe, secure and scalable through practices like loose coupling, orthogonal design, and experimentation. It also recommends focusing on quality, testing, and avoiding duplication and complexity when developing Puppet code and modules.
Designing Flexibility in Software to Increase Securitylawmoore
"Software security" is becoming a hot topic, but true security must go beyond bounds checking and memory leaks. Outside forces such as customer demands, competition, and regulatory requirements will eventually force changes in the software architecture, so designing a flexible software architecture that reacts to those impacts while maintaining a secure state is critical.
Nimble Framework - Software architecture and design in agile era - PSQT Templatetjain
This document discusses guidelines for creating software architecture in an agile environment rather than defined processes. It outlines several principles for agile architecture including collective ownership, addressing uncertainty rather than justifying delays, and prioritizing reasoning over rituals. It proposes using "thought layers" rather than processes, including aligning with enterprise frameworks, making major technical decisions, and defining coding patterns. Architectural decisions should be revisited continually. Tools like an "obesity matrix" can help document and choose between architectural options.
The document outlines Mike Harris's presentation on eXtreme Programming (XP). It begins by introducing the structure of the presentation, which will explain why XP is important and outline some of its key development practices. It then provides two case studies of projects, one that was underperforming and one that appeared high performing but had similar underlying issues. The document dives into what XP is, outlining its values, principles and practices. It concludes by discussing outcomes the speaker found when applying XP practices.
Enterprise system implementation strategies and phasesJohn Cachat
Implementation Strategies
Full blown
Staggered or Phased
Implementation Phases
Project planning
Application exploration
System design
System testing
System activation – “go live”
johncachat@hotmail.com
www.peproso.com
Smart Platform Infrastructure with AWSJames Huston
Learn from some of our insights and create a smart infrastructure that lets your team sleep at night!
Presented @DevOpsDays_CLT Feb 2017 by James Huston @hustonjs
Why We Need Architects (and Architecture) on Agile ProjectsRebecca Wirfs-Brock
This is an updated version of this talk which I will present at Agile 2013.
The rhythm of agile software development is to always be working on the next known, small batch of work. Is there a place for software architecture in this style of development? Some people think that software architecture should simply emerge and doesn’t require ongoing attention. But it isn’t always prudent to let the software architecture emerge at the speed of the next iteration. Complex software systems have lots of moving parts, dependencies, challenges, and unknowns. Counting on the software architecture to spontaneously emerge without any planning or architectural investigation is at best risky.
So how should architecting be done on agile projects? It varies from project to project. But there are effective techniques for incorporating architectural activities into agile projects. This talk explains how architecture can be done on agile projects and what an agile architect does.
My Keynote from BSidesTampa 2015 (video in description)Andrew Case
This is the slides from keynote presentation at BSidesTampa 2015. A recording of the talk can be found at: https://www.youtube.com/watch?v=751bkSD2Nn8&t=1m35s
A journey into application security will cover the relation and evolution of application security with the different approaches to development from Waterfall to Devops.
Enterprise architecture (EA) can potentially promote a common business vision within your organization, provide guidance to improve both business and IT decision making, and improve IT efficiencies. Unfortunately many EA teams struggle to provide these benefits, often because they are perceived as ivory tower or being too difficult to work with.
The adoption of disciplined agile and lean strategies that are based on collaboration, enablement, and streamlining the flow of work are the keys to EA success. Disciplined strategies that produce light-weight, yet still sufficient, artifacts are the key to your success. This presentation explores both the success factors and failure factors surrounding EA, pragmatic strategies for a lean/agile approach to EA, and how EA is supported and enhanced by the Disciplined Agile framework. This isn’t your grandfather’s EA strategy.
Transition to feature teams - Gil Wasserman - Agile Israel 2013AgileSparks
Feature team structure is a well-known good engineering practice, especially for agile, business-driven organizations. However, transferring an organization from component to feature teams is always a challenge. Most organizations actually keep their component-driven structure and way of operation. This lecture is intended for those who have already been convinced about the benefits and value of feature teams, but are still hesitant to make the change. In this lecture we shall discuss optional migration paths and share practical considerations and tips to help make the transition effective and worth doing.
Just Trust Everyone and We Will Be Fine, Right?Scott Carlson
As a CISO, you have been asked why you can't just trust your employees to do the right thing. What benefit to the business comes from technical security controls? You have likely been asked to reduce risk and action every funded project at once. In this session, we will realistically consider which projects can reduce risk most quickly, which layers of security are most important, and how things like privilege management, vulnerability control, over-communicating, and simply reducing the attack surface can bring peace of mind and actual direct improvements to your information security posture.
Continuously Deploying Culture: Scaling Culture at Etsy - Velocity Europe 2012Patrick McDonnell
There was a time not long ago when Etsy was laden with barriers, silos, broken communication, and noncooperation. This talk will focus on the various stages of Etsy's cultural development from the early days to present. We will tell of how Etsy overcame numerous challenges and built a strong company culture while continuing to scale.
Dell's current management infrastructure for its servers emerged in an accidental, unplanned way and lacks consistency. This makes the infrastructure expensive to develop, test, and maintain over time. Dell would benefit from transitioning to an intentional architecture that is proactively planned and uses shared, standardized interfaces. This would reduce costs while allowing Dell to better control and profit from its management offerings. It could also provide customers with a more unified experience across Dell's products and simpler ways to manage their infrastructure.
Gearing Startups for Success through Product Engineering99X Technology
In the August edition of the #99XTWebinar Series, catch two of 99X Technology’s tech experts as they share some intuitive insights into product engineering for startups, and how they harnessed digital transformation to successfully launch a product.
For over 30 years, JDA has been the leading provider of end-to-end, integrated retail and supply chain planning and execution solutions. Their Open Source Center of Excellence (OSCOE) is charged with standardizing the implementation of open source software used within the JDA software ecosystem. JDA experts will share lessons learned and benefits reaped by building an Open Source Center of Excellence.
Joep introduces himself as a Pathfinder and discusses imposter syndrome. The document discusses how imposter syndrome causes people to doubt their abilities and accomplishments. It suggests that imposter syndrome comes from an internal broken measuring stick where one cannot internalize success and believes others are more accomplished. The document provides suggestions on how to overcome imposter syndrome such as writing down compliments and accomplishments, practicing pair programming, celebrating failures, speaking publicly, and talking about imposter syndrome with others.
Veeam Webinar - Backing up Zarafa with SureBackupJoep Piscaer
This document discusses backing up the Zarafa Collaboration Platform application with Veeam SureBackup to maintain application consistency. It explains that application consistency is important to prevent data loss or corruption during backups. It evaluates the Zarafa components and identifies the MySQL database as requiring consistency. It describes using scripts to lock the database during backups for consistency. It also discusses how SureBackup can be used to automatically verify restores of the Zarafa application from backups.
Veeam Webinar - Case study: building bi-directional DRJoep Piscaer
This document outlines a case study for building bidirectional disaster recovery (DR) between two virtualized infrastructures located on separate sites. The project goals were to reduce recovery time objectives (RTO) from weeks to hours, reduce recovery point objectives (RPO) from infinite to a day, and implement a DR solution using Veeam software. The solution involved using Veeam's distributed backup architecture with proxies and repositories on each site to back up VMs locally and to the remote site. Reverse incremental backups were used to minimize storage usage. A live demo was presented to showcase the solution.
Veeam webinar - Deduplication best practicesJoep Piscaer
This document discusses best practices for using data deduplication with Veeam Backup & Replication 6.5 and Windows Server 2012. It recommends using data deduplication for backups with long retention periods of over 60 days to reduce storage costs. It provides guidance on planning and configuring deduplication, including sizing estimates, optimizing the backup repository, using forward incremental backups, and enabling inline and compression deduplication. It also demonstrates how Windows Server 2012 provides global deduplication across backup jobs and volumes.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications. He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides an introduction to UiPath Communication Mining, its importance, and a platform overview. You will acquire a good understanding of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
Climate Impact of Software Testing at Nordic Testing DaysKari Kakkonen
My slides at Nordic Testing Days 6.6.2024
Climate impact / sustainability of software testing is discussed in the talk. ICT and testing must carry their part of the global responsibility to help with climate warming. We can minimize the carbon footprint, but we can also have a carbon handprint, a positive impact on the climate. Quality characteristics can be extended with sustainability, and then measured continuously. Test environments can be used less, in smaller scale and on demand. Test techniques can be used to optimize or minimize the number of tests. Test automation can be used to speed up testing.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices want to take full advantage of the features available on those devices, but many of those features provide convenience and capability while sacrificing security. This best practices guide outlines steps users can take to better protect personal devices and information.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
3. 3 concepts
• Autonomy
• Flow
• Simplicity
Solve inertia of
• Team culture
• Finance
• Infrastructure
4. Autonomous & multidisciplinary Teams
• Independently develop and release into production
• Have all the skills, roles and tools to reach team goal and mission
• Don’t hide behind cultural inertia and defense mechanisms
• Operate outside comfort zone
• Responsible for a business outcome, not a business function
5. Align tech systems along organizational boundaries
• Align systems with teams
• Take ownership of system
• Break down dependencies
• Loose coupling reduces complexity
• Take advantage of Conway’s Law
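The alignment idea above can be made concrete with a small check. This is a minimal sketch (module names, team names, and dependency edges are all hypothetical) that flags dependencies crossing team boundaries, the edges that Conway's Law suggests will become coordination bottlenecks:

```python
# A minimal sketch of a Conway's-Law alignment check: flag dependencies that
# cross team boundaries, since each cross-team edge is a coordination cost and
# a candidate for decoupling. All names below are hypothetical.

# Which team owns which module (assumption: one owning team per module).
OWNERS = {
    "billing-api": "payments",
    "invoice-worker": "payments",
    "catalog-api": "storefront",
    "search-index": "storefront",
}

# Directed dependencies between modules.
DEPENDENCIES = [
    ("billing-api", "invoice-worker"),   # stays within the payments team
    ("catalog-api", "search-index"),     # stays within the storefront team
    ("billing-api", "catalog-api"),      # crosses team boundaries
]

def cross_team_dependencies(owners, dependencies):
    """Return dependency edges whose endpoints belong to different teams."""
    return [
        (src, dst)
        for src, dst in dependencies
        if owners[src] != owners[dst]
    ]

if __name__ == "__main__":
    for src, dst in cross_team_dependencies(OWNERS, DEPENDENCIES):
        print(f"{src} -> {dst}: crosses {OWNERS[src]}/{OWNERS[dst]} boundary")
```

Run against a real dependency graph, each flagged edge is a conversation: either move ownership so the modules sit with one team, or turn the dependency into a loosely coupled interface.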
6. Organize in cherries
• Building block of the agile organization
• Technologically adjacent teams form tribes
• Work on and solve similar issues together
• Reduce ripple effects to a smaller, less complex scale
• Teams are small, 5-8 members
• Do by-the-book scrum
(Diagram: scrum team cells, each with a team lead and product owner, aligned to a tech domain)
7. Ownership gives freedom
• Freedom gives bottom-up choice
• No forced usage of central IT resources (infrastructure, software)
• Choose your own resources
• Different teams choose differently
9. Software: Buy or Build?
• Use COTS in a standard way, as intended
• Customization is loosely coupled
• Or develop completely custom
10. Infrastructure in the value stream
• Cloud and Infra expertise embedded in the value stream
• Infra is not the biggest bottleneck in the pipeline anymore
• Work on business outcome (speed, value), not ’central IT’ function (cost control)
• Unfit infrastructure, wasteful handovers, bureaucratic ticketing system and slow approval gates annihilated
11. Think small
• Single unit flow through the pipeline
• Short feedback loop with immediate result
• No batch processing of commits
• Minimal amount of work-in-progress
• No code waiting to go to production
• Nothing stuck in the pipeline
• No unused artifacts produced
• No context switching for developer
• No loss of information during handover
• But only where it makes sense
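The "think small" bullets above have a simple quantitative backing in Little's Law: average cycle time equals work-in-progress divided by throughput. A back-of-the-envelope sketch, with entirely made-up numbers, of why limiting WIP shortens the feedback loop:

```python
# A back-of-the-envelope sketch of Little's Law applied to a delivery pipeline:
#   average cycle time = work-in-progress / throughput
# The numbers are illustrative assumptions, not measurements.

def average_cycle_time(wip, throughput_per_day):
    """Little's Law: average time a unit of work spends in the pipeline."""
    return wip / throughput_per_day

THROUGHPUT = 4  # completed units per day (assumed constant for both cases)

# Batch processing of commits: lots of code waiting to go to production.
batch_flow = average_cycle_time(wip=20, throughput_per_day=THROUGHPUT)

# Single-unit flow: minimal work-in-progress, nothing stuck in the pipeline.
single_flow = average_cycle_time(wip=2, throughput_per_day=THROUGHPUT)

print(f"Batch processing:  {batch_flow:.1f} days from commit to feedback")
print(f"Single-unit flow:  {single_flow:.1f} days from commit to feedback")
```

Same team, same throughput; only the amount of in-flight work differs, yet feedback arrives ten times faster in the single-unit case. That is the short feedback loop the slide is after.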
12. Preventing re-work: fail small and learn immediately
• Team is responsible for running code in production
• All operational aspects, maintenance, roadmap
• Code is tested thoroughly before release
• Team does investigation, mitigation and post-mortem
• Feedback into pipeline to prevent and improve
• Incentivizes ’first time right’ and quick remediation
13. Expertise in Chapters
• Could call them Pathfinders
• Experts in their field
• Coaching and Learning as primary output
• Team Leads gravitate to soft skills
• Pathfinders gravitate to hard skills
• Pathfinders lead Guilds and Chapters
• Are not HR-responsible
• Play a big part in technical overview (‘architecture’)
• Lead the bigger initiatives
14. ‘Central IT’ is a decentralized guild
• Manage the end-to-end collection of connected simplicities (‘architecture’)
• Standardize design patterns and cloud consumption across teams (‘operation’)
• Safeguard non-functional aspects:
• Cost optimization (buy as a group)
• Identity & Access Management
• Observability & Monitoring
• Security, compliance
• Performance
• Reliability
• Risk management (lock-in)
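One concrete shape such a guild's work can take is a shared guardrail rather than a central gate. A minimal sketch (the resource records and tag names are hypothetical) of a policy check that every cloud resource must carry ownership tags, so cost, security, and reliability concerns can be routed to the autonomous team that owns the resource:

```python
# A minimal sketch of a guardrail a decentralized 'Central IT' guild might
# standardize: every cloud resource must carry an owning-team tag so
# non-functional concerns (cost, security, reliability) can be routed to the
# team that owns it. Resource records and tag names are hypothetical.

REQUIRED_TAGS = {"team", "cost-center"}

RESOURCES = [
    {"id": "vm-001", "tags": {"team": "payments", "cost-center": "cc-42"}},
    {"id": "db-007", "tags": {"team": "storefront"}},  # missing cost-center
    {"id": "lb-003", "tags": {}},                      # completely untagged
]

def untagged_resources(resources, required_tags):
    """Return ids of resources missing any of the required tags."""
    return [
        r["id"]
        for r in resources
        if not required_tags <= r["tags"].keys()
    ]

if __name__ == "__main__":
    for rid in untagged_resources(RESOURCES, REQUIRED_TAGS):
        print(f"{rid}: missing required ownership tags")
```

Because the check is a script rather than an approval gate, any team can run it in its own pipeline; the guild maintains the policy, not a ticket queue.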
15. Pick one or two to take with you
• Build autonomous & multidisciplinary teams in a ‘Cherry’ structure that are responsible for a business outcome, not a business function; who’ll align tech systems along organizational boundaries, break down dependencies, and think small
• Have infra & cloud expertise embedded in the value stream; who will not use unfit infrastructure, wasteful handovers, bureaucratic ticketing systems and slow approval gates, because they are responsible for running code in production and want minimal re-work and errors (‘first time right’ and ‘fail small’)
• ‘Central IT’ is a decentralized guild; Pathfinders lead Chapters, which manage architecture as a collection of connected simplicities, help standardize design patterns and cloud consumption across teams, and safeguard non-functional aspects
Tweet me your fairytale story @jpiscaer