Software projects were historically managed on a bet-the-farm model. They succeeded or they failed. And when they failed (as big software projects often did), the consequences were typically dire, not only for organizations as a whole but for many of the individuals involved. Today, by contrast, many software development projects have evolved toward a much more incremental, iterative, and experimental process that takes cues from the open source model, which excuses (and even rewards) certain types of failure.
In this session, we’ll discuss how failure can be turned into a positive. This includes the organizational dynamics associated with tolerating uncertain outcomes, the need to define acceptable failure parameters, and the technical means by which experimentation can be automated in ways that amplify the positive while minimizing the effect of negative outcomes.
As DevOps practices have been put into wide use, it's become evident that developers and operations aren't merging to become one discipline. Nor is operations simply going away. Rather, DevOps is leading software development and operations - together with other practices such as security - to collaborate and coexist with less overhead and conflict than in the past.
In his session at @DevOpsSummit at 19th Cloud Expo, Gordon Haff, Red Hat Technology Evangelist, will discuss what modern operational practices look like in a world in which applications are more loosely coupled, are developed using DevOps approaches, and are deployed on software-defined, and often containerized, infrastructures - and where operations itself is increasingly another "as a service" capability from the perspective of developers.
How does the operations tool chest change? How does the required skill set differ? How are the interactions between operations and other IT and business organizations different from in the past? How can operations provide the confidence to the entire organization that this new pipeline is still delivering non-functional requirements such as regulatory compliance and a secure and certified operating environment? How does operations safely consume vendor and upstream dependencies while meeting developer desires for the latest and greatest?
Operations is more important than ever for a business to derive value from its IT organization. But the roles and the goals of operations are significantly different than they were historically.
The New Platform: You Ain't Seen Nothing Yet - Gordon Haff
The now-mainstream platform changes stemming from the first Internet boom didn’t really change the basic relationship between servers and the applications running on them. In fact, that was sort of the point. Today’s workloads require a new model and a new platform for development and execution. The platform must handle a wide range of recent developments, including containers and Docker, distributed resource management, and DevOps tool chains and processes. The resulting infrastructure and management framework must be optimized for distributed and scalable applications, take advantage of innovation stemming from a wide variety of open source projects, span hybrid environments, and be adaptable to equally fundamental changes happening in hardware and elsewhere in the stack.
Containers: Don't Skeu Them Up. Use Microservices Instead. - Gordon Haff
from LinuxCon Japan 2016
Skeuomorphism usually means retaining existing design cues in something new that doesn't actually need them. But the basic idea is far broader. For example, containers aren't legacy virtualization with a new spin. They're part and parcel of a new platform for cloud apps including containerized operating systems like Project Atomic, container packaging systems like Docker, container orchestration like Kubernetes and Mesos, DevOps continuous integration and deployment practices, microservices architectures, "cattle" workloads, software-defined everything, management across hybrid infrastructures, and pervasive open source.
In this session, Red Hat's Gordon Haff and William Henry will discuss how containers can be most effectively deployed together with these new technologies and approaches -- including the resource management of large clusters with diverse workloads -- rather than mimicking legacy server virtualization workflows and architectures.
Platform-as-a-Service has rightly been celebrated as a way to increase developer productivity and thereby help companies get the new applications and services they need online (and making money) faster. It also helps admins meet the needs of those developers faster and with less manual effort. But PaaS goes beyond developers and beyond dev/test. Efficient application multi-tenancy and auto-scaling are also key features for production environments. Furthermore, developers may love that PaaS abstracts away platform details that they don't care about. But this abstraction also means that platform changes can happen without affecting developers, a big win for architects and procurement officers. In short, PaaS is for everyone.
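The auto-scaling behavior described above can be sketched in a few lines. This is a minimal illustration, assuming a proportional scaling rule of the kind Kubernetes' Horizontal Pod Autoscaler uses; the function name and replica bounds are illustrative, not part of any specific PaaS API.

```python
import math

def desired_replicas(current_replicas, current_cpu, target_cpu,
                     min_replicas=1, max_replicas=10):
    """Scale replica count proportionally to observed vs. target CPU use.

    This mirrors the common autoscaler rule:
    desired = ceil(current * currentMetric / targetMetric),
    clamped to configured bounds. Parameter names are hypothetical.
    """
    if current_cpu <= 0:
        # No measurable load: fall back to the configured floor.
        return min_replicas
    desired = math.ceil(current_replicas * current_cpu / target_cpu)
    return max(min_replicas, min(max_replicas, desired))

# e.g., 4 replicas running at 90% CPU against a 60% target
# scale out proportionally: ceil(4 * 90 / 60) = 6 replicas
```

The point of handing this decision to the platform is that neither developers nor operators scale services by hand; the same rule also scales back in when load drops.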
As an industry, we’ve mostly moved on from naive notions about cloud computing being inherently “safe” or “risky.” However, more sophisticated discussions require both greater nuance and greater rigor. This presentation takes attendees through frameworks for evaluating and mitigating potential issues in hybrid cloud environments, discusses key risk factors to consider, and describes some of the relevant standards and provider certifications. This is a broad and sometimes complex topic. However, it’s very manageable if individual risk factors are considered systematically and specifically. This session will give IT professionals tools and knowledge to help them make informed decisions.
Containers: Don't Skeu Them Up (LinuxCon Dublin) - Gordon Haff
Skeuomorphism usually means retaining existing design cues in something new that doesn't actually need them. But the basic idea is far broader. For example, containers aren't legacy virtualization with a new spin. They're part and parcel of a new platform for cloud apps including containerized operating systems like Project Atomic, container packaging systems like Docker, container orchestration like Kubernetes and Mesos, DevOps continuous integration and deployment practices, microservices architectures, "cattle" workloads, software-defined everything, management across hybrid infrastructures, and pervasive open source.
This session discusses how containers can be most effectively deployed together with these new technologies and approaches -- including the resource management of large clusters with diverse workloads -- rather than mimicking legacy server virtualization workflows and architectures.
Pedestrianizing the Centro Histórico, installing parking meters, reversing the direction of traffic on certain streets, and organizing dismissal at schools are some of the ideas in the project "El Querétaro que Queremos Todos," which deputy Diego Foyo López presented to the public today.
How OpenStack is paralleling Linux adoption (and how it isn't) - Gordon Haff
OpenStack is paralleling and will likely continue to parallel the adoption of another open source project that has become enormously popular and successful—namely Linux. The parallels are educational and useful in that they lend insight into the rate at which adoption takes place and what we might expect successful adoption to look like. At the same time, this session will provide appropriate caveats about assuming that OpenStack can be viewed as just a latter-day Linux. By applying this sort of historical perspective, we can better understand what might be the most effective approaches to collaboration, community-building, and cooperation moving forward.
How open source is driving DevOps innovation: CloudOpen NA 2015 - Gordon Haff
It’s no coincidence that all the interest around DevOps today comes at a time when open source technologies and processes are so dominant in cloud computing, data storage and analysis, and, increasingly, in networking. Innovations in Linux and other projects, including containers, configuration management, and continuous integration, are what make DevOps workflows and portable application deployments possible. But it’s also the result of open source culture, practices, and the tools supporting those practices that have made iterative development and collaboration such a powerful model for creating great software in communities. And now, they’re also providing a template for how to develop and operate applications internally within enterprises. In this session, we will discuss how open source tools and practices can be applied to create effective DevOps workflows and practices.
The New Open Distributed Application Architecture - Gordon Haff
The platform for developing and running modern workloads has changed. This new platform brings together the open source innovation being driven in containers and container packaging, in distributed resource management and orchestration, and in DevOps toolchains and processes to deploy infrastructure and management optimized for the new class of distributed application that is becoming the norm.
In this session, Red Hat's Gordon Haff discusses the key trends coming together to change IT infrastructure and the applications that will run on it. These include:
- Container-based platforms designed for modern application development and deployment
- The ability to design microservices-based applications using modular and reusable parts
- The orchestration of distributed components
- Data integration with mobile and Internet-of-Things services
- Iterative development, testing, and deployment using Platform-as-a-Service and integrated continuous delivery systems
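The "modular and reusable parts" idea in the list above can be made concrete with a tiny service sketch, using only the Python standard library. The service name, route, and payload here are hypothetical illustrations, not any particular platform's API; the key design point is that the business logic is a plain function, separate from the HTTP transport, so it can be reused and tested independently.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_payload():
    """Reusable piece of logic, independent of the transport layer."""
    return {"status": "ok", "service": "inventory"}

class HealthHandler(BaseHTTPRequestHandler):
    """Thin HTTP adapter around the reusable function above."""
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps(health_payload()).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

def make_server(port=8080):
    # In a container platform, one such process runs per container and
    # the orchestrator handles scaling, restarts, and routing.
    return HTTPServer(("0.0.0.0", port), HealthHandler)

# To run standalone: make_server().serve_forever()
```

Each such small service is then packaged into its own container image and deployed, scaled, and replaced independently of its peers.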
Manufacturing Plus Open Source Equals DevOps - Gordon Haff
From DevOps Summit Silicon Valley, November 2015
Manufacturing has widely adopted standardized and automated processes to create designs, build them, and maintain them through their life cycle. However, many modern manufacturing systems go beyond mechanized workflows to introduce empowered workers, flexible collaboration, and rapid iteration.
Such behaviors also characterize open source software development and are at the heart of DevOps culture, processes, and tooling. In this session, Red Hat’s Gordon Haff will discuss the lessons and processes that DevOps can apply from manufacturing using:
- Container-based platforms designed for modern application development and deployment.
- The ability to design microservices-based applications using modular and reusable parts.
- Iterative development, testing, and deployment using Platform-as-a-Service and integrated continuous delivery systems.
DevOps: Lessons from Manufacturing and Open Source - Gordon Haff
Manufacturing has widely adopted standardized and automated processes to create designs, build them, and maintain them through their life cycle. However, many modern manufacturing systems go beyond mechanized workflows to introduce empowered workers, flexible collaboration, and rapid iteration.
Such behaviors also characterize open source software development and are at the heart of DevOps culture, processes, and tooling. In this session, Red Hat’s Gordon Haff will discuss the lessons and processes that DevOps can apply from manufacturing using:
- Container-based platforms designed for modern application development and deployment.
- The ability to design microservices-based applications using modular and reusable parts.
- Iterative development, testing, and deployment using Platform-as-a-Service and integrated continuous delivery systems.
The New Distributed Application Infrastructure - Gordon Haff
Today’s workloads require a new platform for development and execution. The platform must handle a wide range of recent developments, including containers and Docker (or other packaging methods), distributed resource management, and DevOps tool chains and processes. The resulting infrastructure and management framework must be optimized for distributed, scalable applications, work with a wide variety of open source packages, and provide a universally understandable interface for developers and administrators worldwide.
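The "distributed resource management" this abstract refers to boils down to a placement decision: which node gets which workload. The sketch below is an assumption-laden, first-fit-decreasing toy, not how Kubernetes or Mesos actually schedule; real schedulers weigh many more constraints (affinity, memory, failure domains) than a single CPU dimension.

```python
def schedule(workloads, nodes):
    """Place each workload on a node with enough free capacity.

    workloads: {name: cpu_needed}; nodes: {name: cpu_free}.
    Returns {workload_name: node_name}, or raises if one won't fit.
    """
    free = dict(nodes)   # don't mutate the caller's view of capacity
    placement = {}
    # First-fit decreasing: place the largest workloads first, which
    # tends to pack capacity better than arbitrary order.
    for wl, need in sorted(workloads.items(), key=lambda kv: -kv[1]):
        for node, avail in free.items():
            if avail >= need:
                placement[wl] = node
                free[node] = avail - need
                break
        else:
            raise RuntimeError(f"no node can fit {wl}")
    return placement
```

Automating this decision cluster-wide, instead of assigning VMs to hosts by hand, is what lets the platform treat workloads as "cattle" rather than pets.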