This session discusses Walmart's success story: how they built a set of services on the mainframe to provide capabilities at large scale for their distributed teams, and the transformation required for mainframe teams to achieve this success.
Like many startups, Coursera began its data storage journey with MySQL, a familiar and industry-proven database. As Coursera's user base grew from several thousand to many millions, we found that MySQL provided limited availability and restricted our ability to scale easily. New product initiatives and requirements provided a perfect opportunity to revisit our choice of core workhorse database.
After evaluating several NoSQL databases, including MongoDB, DynamoDB and HBase, we elected to transition to Cassandra. Cassandra's relative maturity, masterless architecture (for availability), tunable consistency, and stable low-latency performance made it a clear winner for our needs.
Learn more about what it takes to transition from SQL to Cassandra in this talk.
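Cassandra's tunable consistency is the most concrete of those claims: every read and write can choose how many replicas must acknowledge it. Here is a minimal sketch using the open-source DataStax Python driver; the keyspace, table, and node addresses are hypothetical, not Coursera's actual schema:

```python
from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement

# Masterless: any contact point can coordinate a request.
cluster = Cluster(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
session = cluster.connect("learning")  # hypothetical keyspace

# Writes that must survive a node failure ask for a quorum of replicas.
insert = SimpleStatement(
    "INSERT INTO enrollments (user_id, course_id) VALUES (%s, %s)",
    consistency_level=ConsistencyLevel.QUORUM,
)
session.execute(insert, (42, "ml-101"))

# Latency-sensitive reads can trade consistency for speed, per statement.
read = SimpleStatement(
    "SELECT course_id FROM enrollments WHERE user_id = %s",
    consistency_level=ConsistencyLevel.ONE,
)
rows = session.execute(read, (42,))
print([r.course_id for r in rows])
```

The QUORUM/ONE split shown here is the classic trade: durable writes, fast reads.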
Delivering Insights from 20M+ Smart Homes with 500M+ Devices (Databricks)
We started out processing big data using AWS S3, EMR clusters, and Athena to serve Analytics data extracts to Tableau BI.
However, as our data and team sizes increased, Avro schemas from source data evolved, and we attempted to serve analytics data through web apps, we hit a number of limitations in the AWS EMR and Glue/Athena approach.
This is a story of how we scaled out our data processing and boosted team productivity to meet our current demand for insights from 20M+ Smart Homes and 500M+ devices across the globe, from numerous internal business teams and our 150+ CSP partners.
We will describe lessons learnt and best practices established as we enabled our teams with Databricks autoscaling job clusters and notebooks, and migrated our Avro/Parquet data to use the Metastore, SQL Endpoints and the SQLA Console, while charting the path to the Delta Lake…
Spark as part of a Hybrid RDBMS Architecture - John Leach, Co-Founder, Splice Machine (Data Con LA)
In this talk, we will discuss how we use Spark as part of a hybrid RDBMS architecture that includes Hadoop and HBase. The optimizer evaluates each query and sends OLTP traffic (including CRUD queries) to HBase and OLAP traffic to Spark. We will focus on the challenges of handling the tradeoffs inherent in an integrated architecture that simultaneously handles real-time and batch traffic. Lessons learned include:
- Embedding Spark into an RDBMS
- Running Spark on YARN and isolating OLTP traffic from OLAP traffic
- Accelerating the generation of Spark RDDs from HBase
- Customizing the Spark UI
The lessons learned can also be applied to other hybrid systems, such as Lambda architectures.
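For readers unfamiliar with this kind of dual routing, a self-contained sketch of the idea follows. This is illustrative only, not Splice Machine's actual optimizer; the threshold and plan fields are assumptions:

```python
from dataclasses import dataclass

OLTP_ROW_THRESHOLD = 10_000  # assumed tuning knob, not a real product setting

@dataclass
class QueryPlan:
    sql: str
    is_write: bool
    estimated_rows: int

def route(plan: QueryPlan) -> str:
    """Classify a plan: CRUD and small lookups go to HBase, big scans to Spark."""
    if plan.is_write or plan.estimated_rows < OLTP_ROW_THRESHOLD:
        return "hbase"   # low-latency OLTP path
    return "spark"       # OLAP path, isolated from OLTP traffic on YARN

assert route(QueryPlan("UPDATE t SET x = 1 WHERE id = 7", True, 1)) == "hbase"
assert route(QueryPlan("SELECT region, SUM(amt) FROM t GROUP BY region", False, 5_000_000)) == "spark"
```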
Bio:
John Leach is the CTO and Co-Founder of Splice Machine. With over 15 years of software experience under his belt, John’s expertise in analytics and BI drives his role as Chief Technology Officer. Prior to Splice Machine, John founded Incite Retail in June 2008 and led the company’s strategy and development efforts. At Incite Retail, he built custom Big Data systems (leveraging HBase and Hadoop) for Fortune 500 companies. Prior to Incite Retail, he ran the business intelligence practice at Blue Martini Software and built strategic partnerships with integration partners. John was a key subject matter expert for Blue Martini Software in many strategic implementations across the world. His focus at Blue Martini was helping clients incorporate decision support knowledge into their current business processes utilizing advanced algorithms and machine learning. John received dual bachelor’s degrees in biomedical and mechanical engineering from Washington University in Saint Louis. Leach is the organizer emeritus for the Saint Louis Hadoop Users Group and is active in the Washington University Elliot Society.
Using Apache Spark for Predicting Degrading and Failing Parts in Aviation (Databricks)
Throughout naval aviation, data lakes provide the raw material for generating insights into predictive maintenance and increasing readiness across many platforms. Successfully leveraging these data lakes can be technically challenging.
Dr. Elephant: Achieving Quicker, Easier, and Cost-Effective Big Data Analytic... (Spark Summit)
Is your job running slower than usual? Do you want to make sense of the thousands of Hadoop and Spark metrics? Do you want to monitor the performance of your flow, get alerts, and auto-tune jobs? These are common questions every Hadoop user asks, but there has been no single solution that addresses them. At LinkedIn we faced many such issues and built a simple self-serve tool for Hadoop users called Dr. Elephant. Dr. Elephant, which is already open sourced, is a performance monitoring and tuning tool for Hadoop and Spark. It improves developer productivity and cluster efficiency by making it easier to tune jobs. Since it was open sourced, it has been adopted by multiple organizations and followed with a lot of interest in the Hadoop and Spark community. In this talk, we will discuss Dr. Elephant and outline our efforts to expand its scope into a comprehensive monitoring, debugging, and tuning tool for Hadoop and Spark applications. We will talk about how Dr. Elephant performs exception analysis, gives clear and specific tuning suggestions, tracks metrics, and monitors their historical trends. Open source: https://github.com/linkedin/dr-elephant
Using Apache Cassandra, Apache Spark and Apache Kafka is a powerful combination to realize streaming analytics, from online recommendations to metrics applications. We will have a look at this setup in the context of popular architectures such as Lambda and Kappa and discuss requirements such as latency and processing guarantees in a practical setup. This talk includes a demo of a streaming analytics application using the Mesosphere DCOS as a deployment environment.
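As a concrete anchor for that setup, here is a minimal sketch of the Kafka-to-Spark-to-Cassandra path using Structured Streaming and the open-source spark-cassandra-connector; the topic, schema, keyspace, and table names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

# Requires the Kafka source and spark-cassandra-connector packages on the classpath.
spark = SparkSession.builder.appName("streaming-metrics").getOrCreate()

schema = StructType([
    StructField("user_id", StringType()),
    StructField("item_id", StringType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "kafka:9092")
    .option("subscribe", "clicks")            # hypothetical topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

def write_batch(df, epoch_id):
    # Micro-batch writes give at-least-once delivery; upserts keyed by the
    # Cassandra primary key make Kafka replays safe.
    (df.write.format("org.apache.spark.sql.cassandra")
       .options(keyspace="analytics", table="clicks")
       .mode("append")
       .save())

events.writeStream.foreachBatch(write_batch).start().awaitTermination()
```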
In this session we review the design of the current capabilities of a partially completed feature in Apache Geode - the ability to act as a backend for Redis client applications. We'll explore potential use cases and the future directions in which this capability might evolve.
Simplifying Disaster Recovery with Delta Lake (Databricks)
There is a need to develop a recovery process for Delta tables in a DR scenario. Cloud multi-region sync is asynchronous, and this type of replication does not guarantee the chronological order of files at the target (DR) region; in some cases, large files arrive later than small files.
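One plausible recovery tactic implied by that abstract (our sketch, not necessarily the speakers' method): treat the highest contiguous commit in the replicated _delta_log as the last safe version and read the table as of it. Paths are hypothetical:

```python
import os
import re

def last_complete_version(delta_log_dir: str) -> int:
    """Return the highest version N such that commits 0..N all arrived."""
    versions = sorted(
        int(m.group(1))
        for f in os.listdir(delta_log_dir)
        if (m := re.fullmatch(r"(\d+)\.json", f))
    )
    last = -1
    for v in versions:
        if v != last + 1:
            break  # gap: a later commit file landed before an earlier one
        last = v
    return last

v = last_complete_version("/mnt/dr/events/_delta_log")
# Then read the table pinned to that safe version, e.g. with Spark:
# df = spark.read.format("delta").option("versionAsOf", v).load("/mnt/dr/events")
```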
#GeodeSummit - Where Does Geode Fit in Modern System Architectures (PivotalOpenSourceHub)
In this talk, Eitan Suez explores the question: Where does Geode fit in an organization's system architecture? Geode is a unique and feature-rich product that perhaps hasn't seen as much adoption as it deserves. Today's apps are no longer the straightforward, database-backed web applications we used to build a few years ago. Applications have become more sophisticated, as they've had to meet the need to scale, to be reliable, fault-tolerant, and to integrate with other systems. In this talk, Eitan will suggest one particular fit for Geode in the context of a CQRS architecture, and welcomes you to attend, and to contribute by sharing how you've put Geode to use in your organization.
How to Enable Industrial Decarbonization with Node-RED and InfluxDB (InfluxData)
Graphite Energy’s thermal energy storage (TES) platform encourages clients to offset their traditional energy consumption with low-cost renewable energy sources. Their customers include manufacturers, mines, steelmakers and aluminum plants. IIoT data is collected about energy usage, fuel consumption, temperatures, solar panels, wind farms, process steam and air dryers. Discover how Graphite Energy uses InfluxDB to monitor their zero-emission energy solution.
In this webinar, Byron Ross will dive into:
Graphite Energy’s approach to reducing their clients’ carbon footprint
Their methodology to collecting sensor data used to make their operations more green
Why they chose a time series database over a data historian
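To make the InfluxDB side of that stack concrete, here is a minimal write sketch with the official influxdb-client Python package; the URL, token, org, bucket, and measurement names are hypothetical:

```python
from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS

client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="graphite-energy")
write_api = client.write_api(write_options=SYNCHRONOUS)

# One telemetry point from a thermal energy storage unit.
point = (
    Point("tes_unit")                 # hypothetical measurement name
    .tag("site", "plant-01")
    .field("steam_temp_c", 412.5)
    .field("charge_pct", 87.2)
)
write_api.write(bucket="iiot", record=point)
```

In a Node-RED flow, the same write would typically be done by the InfluxDB output node rather than hand-written code.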
From Idea to Model: Productionizing Data Pipelines with Apache Airflow (Databricks)
When supporting a data science team, data engineers are tasked with building a platform that keeps a wide range of stakeholders happy. Data scientists want rapid iteration, infrastructure engineers want monitoring and security controls, and product owners want their solutions deployed in time for quarterly reports.
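A minimal sketch of the kind of pipeline being productionized: a scheduled Airflow DAG whose tasks are placeholders. The DAG id, schedule, and task logic are assumptions, not the speakers' pipeline:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull features from the lake")   # placeholder task body

def train():
    print("fit and register the model")    # placeholder task body

with DAG(
    dag_id="idea_to_model",               # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",           # scheduling, retries and alerting
    catchup=False,                        # are what make this "production"
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    train_task = PythonOperator(task_id="train", python_callable=train)
    extract_task >> train_task            # dependency: extract before train
```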
The workshop covers the HBase data model, architecture and schema design principles.
Source code demo:
https://github.com/moisieienko-valerii/hbase-workshop
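A tiny taste of the data model the workshop covers, using the happybase Thrift client; the table, row key design, and column families here are illustrative, not taken from the workshop repo:

```python
import happybase

conn = happybase.Connection("localhost")  # requires the HBase Thrift server
table = conn.table("users")

# Row key design carries the query pattern: a composite key keeps rows that
# are scanned together lexicographically adjacent.
table.put(b"com.example|user42", {
    b"profile:name": b"Ada",              # "family:qualifier" sparse columns
    b"activity:last_login": b"2016-05-01",
})
row = table.row(b"com.example|user42", columns=[b"profile:name"])
print(row[b"profile:name"])
```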
Cassandra Summit 2014: Launching PlayStation 4 with Apache Cassandra (DataStax Academy)
Presenters: Alexander Filipchick and a fellow Staff Software Engineer at Sony Network Entertainment
Since the launch of the PlayStation 4, many of the PSN features have been delivered using Cassandra. We will be talking about our experience as we launched one of the most popular gaming consoles in the world on well over 300 nodes.
- Why we picked Cassandra
- Exactly what PSN features for PS4 are powered by Cassandra
- The infrastructure used to deploy our clusters
- How we monitor system health
- How we design, test and deploy
- Issues we faced and lessons learned along the way
Azure SQL Database is just SQL Server under the covers. However, there are some distinctive differences and new functionality. This session covers some of the new tools and methods available to help you make your Azure SQL Database run as fast as possible.
Keep your Metadata Repository Current with Event-Driven Updates using CDC and... (confluent)
The data science techniques and machine learning models that provide the greatest business value and insights require data that spans enterprise silos. To integrate this data, and ensure you’re joining on the right fields, you need a comprehensive, enterprise-wide metadata repository. More importantly, you need it to be always up to date. Nightly updates are simply not good enough when customers and users expect near-real-time responsiveness.
The challenge with keeping a metadata repository up to date lies not with cloud services or distributed storage frameworks, but rather with the relational database management systems (RDBMSs) that dot the enterprise landscape. At Comcast, we’ve found it relatively easy to feed our Apache Atlas metadata repo incrementally from Hadoop and AWS, using event-driven pushes to a dedicated Apache Kafka topic that Atlas listens to. Such pushes are not practical with RDBMSs, however, since the event-driven technique there is the database trigger. Triggers are so invasive and potentially detrimental to performance that your DB admin likely won’t allow one for detecting metadata changes.
Triggers are out. Pulling the complete current state of metadata from an RDBMS at regular intervals and calculating the deltas is too slow and unworkable. And, it turns out that out-of-the-box log-based change data capture (CDC) is also a dead end, because metadata changes are represented in transaction logs as SQL DDL strings, not as atomic insert/update/delete operations as they are for data.
So, how do you keep your metadata repository always up to date with the current state of your RDBMS metadata? Our group solved this challenge by creating an alternate method for CDC on RDBMS metadata based on database system tables. Our query-based CDC serves as a Kafka Connect source for our Apache Atlas sink, providing event-driven, continuous updates to RDBMS metadata in our repository, but does not suffer from the usual limitations/disadvantages of vanilla query-based CDC. If you’re facing a similar challenge, join us at this session to learn more about the obstacles you’ll likely face and how you can overcome them using the method we implemented.
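In that spirit, here is a hedged reconstruction of query-based CDC on metadata (our sketch, not Comcast's actual Kafka Connect source): snapshot the system catalog, diff against the previous snapshot, and publish the deltas to a Kafka topic. The topic and connection strings are hypothetical:

```python
import json

import psycopg2                      # any RDBMS with a queryable catalog works
from kafka import KafkaProducer      # kafka-python

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode(),
)

def snapshot(conn):
    """Current column-level metadata, keyed (schema, table, column) for diffing."""
    with conn.cursor() as cur:
        cur.execute("""
            SELECT table_schema, table_name, column_name, data_type
            FROM information_schema.columns
            WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
        """)
        return {(s, t, c): dt for s, t, c, dt in cur.fetchall()}

conn = psycopg2.connect("dbname=appdb")
previous = snapshot(conn)   # in production, persisted between poll cycles
current = snapshot(conn)

for key in current.keys() - previous.keys():
    producer.send("metadata-changes", {"op": "add", "column": key, "type": current[key]})
for key in previous.keys() - current.keys():
    producer.send("metadata-changes", {"op": "drop", "column": key})
producer.flush()
```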
#GeodeSummit Keynote: Creating the Future of Big Data Through "The Apache Way" (PivotalOpenSourceHub)
Keynote at Geode Summit 2016 by Dr. Justin Erenkrantz, Bloomberg LP: Creating the Future of Big Data Through "The Apache Way" and why this matters to the community.
LinkedIn serves traffic for its 467 million members from four data centers and multiple PoPs spread geographically around the world. Serving live traffic from many places at the same time has taken us from a disaster recovery model to a disaster avoidance model, where we can take an unhealthy data center or PoP out of rotation and redistribute its traffic to a healthy one within minutes, with virtually no visible impact on users. The geographical distribution of our infrastructure also allows us to optimize the end user's experience by geo-routing users to the best possible PoP and data center.
This talk provides details on how LinkedIn shifts traffic between its PoPs and data centers to deliver the best possible performance and availability for its members. We will also touch on the complexities of performance in APAC, how IPv6 is helping our members, and how LinkedIn stress tests data centers to verify its disaster recovery capabilities.
DOES SFO 2016 - Mark Imbriaco - Lessons From the Bleeding Edge (Gene Kim)
DevOps news is dominated by discussions about tools, and with good reason. It's not unusual for the amount of infrastructure-related code in a system to approach or even exceed the amount of code dedicated to the actual problem the system is solving, even in small systems. As our systems scale in size and complexity, we invest an ever-increasing amount of resources into building solutions to help manage our complex technical systems. And rightly so.
What's often overlooked, however, is the human component of our systems. All too often our approaches to tools, processes, and systems management attempt to remove humans rather than empower them.
I'll make the case that humans are not a source of entropy to be safeguarded against in our systems, but rather a fundamental source of resilience and even efficiency. We'll discuss ways that we can use this point of view to our advantage when constructing our systems to move faster without sacrificing safety. We'll look at things like tools and our interactions with them, team collaboration, and even organizational structure and policies.
We've had plenty of talks about building for web scale, cloud scale, and even planetary scale. Let's spend some time talking about designing for human scale.
DOES SFO 2016 - Greg Padak - Default to Open (Gene Kim)
Large enterprises have hierarchical organizations to define areas of responsibility and drive better accountability. Those structures often block cross-team interactions and knowledge sharing, slowing innovation and agility. We will discuss strategies that use open platforms to drive meaningful development outcomes through collaboration and productivity across the enterprise.
DOES SFO 2016 - Cornelia Davis - DevOps: Who Does What? (Gene Kim)
Within the IT organizational structures that have dominated the last several decades, roles and responsibilities are fairly standardized. But with the dramatic changes that DevOps practices and supporting toolsets bring, many are left feeling a bit off balance: it's no longer clear who is responsible for even things as "straightforward" as development or operations.
In this talk I will take traditional roles that are distributed across fairly standard IT structures and sort them into a new organizational context. What is the role of the Enterprise Architect? Who does capacity planning and how? How can change management step out of the way all while still satisfying the requirements of safe deployments? How do agile teams interface with personnel responsible for maintaining legacy systems? I’ll leave the audience with a blueprint for a new organizational structure.
DOES SFO 2016 - Daniel Perez - Doubling Down on ChatOps in the Enterprise (Gene Kim)
HPE's Research Development & Engineering team has been on a fast-tracked DevOps journey over the past couple of years.
During our DOES 2014 talk we shared our deployment of ElectricFlow as a highly available and centralized self-service solution that has enabled HPE developers to quickly onboard onto ElectricFlow for build/test/deployment pipelines in a repeatable and cost-effective way.
At DOES 2015 we expanded on our investments into a comprehensive monitoring, self-healing, and accelerated deployment strategy across all of our applications to further bridge our Dev and Ops gap with greater visibility into our environments and to accelerate our time-to-market with repeatable and fully automated deploys.
Join us this year as we continue in this journey with our biggest transformation yet: the proliferation of ChatOps within our organization. We will discuss the decisions that lead us to these investments, the key lessons we have learned, and share our various Hubot integrations and capabilities.
DOES SFO 2016 - Alexa Alley - Value Stream Mapping (Gene Kim)
Value Stream Mapping can streamline development processes and workflows. This talk will cover how Hearst has done internal Value Stream Mapping workshops to improve team collaboration and release times.
In this talk, I will discuss Value Stream Mapping and how it has helped transform internal processes for businesses within Hearst to adopt a DevOps culture. I’ll walk through the successes and learning experiences we’ve gained by holding VSM sessions at different businesses, in varying verticals at Hearst. We will review real examples of workflows, release times, benefits to the contributors and business, and how the collaboration has helped teams. While there are great successes, I will also share where we saw room for improvement and how we continually make changes to bring the most value to our teams. The most important value is how these have helped to start building a DevOps mindset in a company of over 25,000 employees.
DOES SFO 2016 - Michael Nygard - Tempo, Maneuverability, Initiative (Gene Kim)
Tempo. Most people are familiar with it in the musical sense. It's the speed, cadence, rhythm at which the music is played. It drives the music forward - and pulls it back. But there's more to tempo than a musical beat. In war, like in business, tempo - the speed at which you can transition from one task to the next - is a critical component of victory.
No single person nor department owns tempo. Somebody can’t just shout, “I now control the tempo,” and take charge. If you operate at a faster tempo than your cycle time allows, then you’ll get thrashing. The rate of tempo emerges organically as companies move around that action loop of sensing, deciding and acting.
Tempo emerges from the convergence of architecture, infrastructure, organization, and mindset. All these things have to align to achieve tempo. None of them can be changed in isolation.
In this talk, we will look at different models for transforming an organization to high tempo and high performance. We'll see how that can get derailed and what to do about it.
DOES SFO 2016 - Greg Maxey and Laurent Rochette - DSL at Scale (Gene Kim)
At last year's DOES conference, we introduced the new Domain-Specific Language (DSL) for ElectricFlow and painted a vision for how it could revolutionize application release automation (ARA) for very large enterprise implementations.
We are pleased to share with you our experiences and learnings from such a large-scale implementation at a financial services company that we've been working on this past year. This is a very large implementation: hundreds of 'platforms', each containing hundreds of application components, each targeting hundreds of 'device types'; that is, thousands of components distributed across tens of thousands of endpoints in data centers across the world.
Because of regulatory and quality concerns, complex multi-environment stage testing and promotion systems with clear separation of duties must be enforced. While ElectricFlow provided the core functionality to achieve these goals, a considerable amount of customization was required to support legacy applications, tools and processes. All of the custom work done by the Electric Cloud professional services teams was done in DSL, that is, source code first. Customizations are maintained in a source control system and applied to the various staging environments through automated script execution managed by ElectricFlow. While the ElectricFlow UI was not used to author content, it was used to verify the implementation and provide a convenient way for the client to monitor the progress of their application delivery. The result was a highly maintainable and scalable implementation that could be customized and adjusted at a moment's notice. Indeed, the project has been managed in a lean, agile manner with three-week sprints.
As organizations invest in DevOps to release more frequently, there’s a need to treat the database tier as an integral part of your automated delivery pipeline – to build, test and deploy database changes just like any other part of your application.
However, databases (particularly RDBMSs) are different from source code and pose unique challenges to Continuous Delivery, especially in the context of deployments. Often, code changes require updating or migrating the database before the application can be deployed. A deployment method that works for installing a small database or a green-field application may not be suitable for industrial-scale databases. Updating the database can be more demanding than updating the app layer: database changes are more difficult to test, and rollbacks are harder. Furthermore, for organizations that strive to minimize service interruption to end users, zero-downtime database updates are a laborious operation.
Your DB stores the most mission-critical and sensitive data of your organization (transaction data, business data, user information, etc.). As you update your database, you’d want to ensure data integrity, ACID, data retention, and have a solid rollback strategy - in case things go wrong …
This talk covers strategies for database deployments and rollbacks (a sketch of one such pattern follows the list):
• What are some patterns and best practices for reliably deploying databases as part of your CD pipeline?
• How do you safely rollback database code?
• How do you ensure data integrity?
• What are some best practices for handling advanced scenarios and backend processes, such as scheduled tasks, ETL routines, replication architectures, linked databases across distributed infrastructure, and more?
• How do you handle legacy databases alongside more modern data management solutions?
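As promised above, a sketch of one such pattern: versioned up/down migrations applied atomically, so a failed release can be rolled back. The table names and the migration are hypothetical, and the transactional-DDL trick shown is PostgreSQL-specific (engines without transactional DDL need expand/contract migrations instead):

```python
import psycopg2

MIGRATIONS = {
    "042_add_email": {
        "up":   "ALTER TABLE users ADD COLUMN email TEXT",
        "down": "ALTER TABLE users DROP COLUMN email",
    },
}

def apply(conn, name: str, direction: str = "up") -> None:
    """Apply or roll back one migration atomically; record it in a ledger table."""
    with conn:  # commit on success, automatic rollback on any exception
        with conn.cursor() as cur:
            cur.execute(MIGRATIONS[name][direction])
            if direction == "up":
                cur.execute("INSERT INTO schema_migrations (name) VALUES (%s)", (name,))
            else:
                cur.execute("DELETE FROM schema_migrations WHERE name = %s", (name,))

conn = psycopg2.connect("dbname=appdb")
apply(conn, "042_add_email")            # deploy step in the CD pipeline
# apply(conn, "042_add_email", "down")  # rollback step if the release fails
```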
DOES SFO 2016 - Marc Priolo - Are we there yet? (Gene Kim)
2 years ago at DOES14, I presented “Vision Versus Execution: Implementing Continuous Delivery”. I shared how we achieved a big Continuous Delivery win – increasing software test coverage and delivery velocity and efficiency.
Since then, we have been busy scaling DevOps, Continuous Delivery and Lean principles across teams and practices throughout Urban Science. This rollout included both a cultural aspect, as well as an implementation of a centralized, shared, self-service automation solution for our teams – enabling them to “opt-in” to an automated pipeline.
In this talk I will present anecdotes and learnings gathered through our experience over the past two years and discuss the challenges and the value of scaling DevOps across the organization.
DOES SFO 2016 - Kaimar Karu - ITIL. You keep using that word. I don't think i... (Gene Kim)
Let’s get this straight. ITIL is not about implementing dozens of processes, or about establishing a CAB to review every change request, or about the never-ending story of creating a CMDB. The ITIL framework has been designed to help IT organizations to move from being a black box technology provider – often viewed as a disposable cost centre – to becoming a service provider, and a true partner for the rest of the business. We know – we own the framework.
Unless your customer can achieve their objectives with the technology you run, and can get assistance when needed, no-one cares whether your architecture is built on a monolith, uses microservices, or can brag about being serverless. Agile as a mind-set covers the whole value chain, but common practices are limited to development only. DevOps as a philosophy covers the whole value chain, but common practices are limited to the deployment-focused intersection of development and operations only. Understanding the organisation's strategy, developing the product strategy, and dealing with customer issues are expected to be taken care of by someone else, as if by magic. Because of this, DevOps faces a risk of becoming the largest local optimisation exercise ever undertaken for way too many organisations.
In tens of thousands of companies around the world, ITIL has helped to develop an organizational capability that has provided them with a competitive advantage. More than three million people have been certified, and ten times as many trained over the years. Yet, we have all heard the horror stories, too. So what is it that separates a successful adoption of ITIL from an unsuccessful attempt at implementing the framework? What are the common problematic practices and anti-patterns we have seen in the wild, and what does the guidance in ITIL really say? How can you move from a broken approach to IT Service Management to one that delivers value? Can you still use ITIL in the DevOps world? Do you even need to? Or, perhaps, the question is whether DevOps can survive (in the enterprise) without embracing the service mind-set.
DOES SFO 2016 - Topo Pal - DevOps at Capital One (Gene Kim)
In my previous years' talks at the DevOps Enterprise Summit, I spoke about starting and scaling DevOps at Capital One and the importance of Open Source, Open Technology and Innovations in DevOps.
This year, I will present Capital One’s journey of maturing in DevOps and Continuous Delivery. My presentation will cover our current areas of focus: Delivery Pipeline, Flow and Measurements. I will also share some of the problems we faced and what we did to solve them.
DOES SFO 2016 - Avan Mathur - Planning for Huge Scale (Gene Kim)
Installing one CI server or configuring a deployment pipeline for a specific application might be easy enough. However, as enterprises look to scale their DevOps adoption and optimize their software delivery practices across the organization (to support additional teams, product lines, application releases, processes and infrastructure) -- software delivery pipeline(s) need to scale to support enterprise workloads.
For some enterprises, this means having a pipeline that can withstand the velocity and throughput of thousands of product releases, supporting tens of thousands of developers and distributed teams, hundreds of thousands of infrastructure nodes, multitudes of inter-dependent application components, or millions of builds and test-cases.
This scale poses unique challenges and implications for your pipeline design. This talk covers best practices for analyzing and (re)designing your software delivery pipeline – regardless of your chosen tool-set or technologies. Obtain tips and tools for ensuring your pipelines and DevOps infrastructure have the right architecture and feature-set to support your software production as it scales, while also ensuring manageability, governance, security, and compliance.
Learn best practices for how to:
1) Plan for scale: project the performance indicators/vectors you'd need to scale across.
2) Design your pipeline and supporting infrastructure and operations (such as data retention, artifact retrieval, monitoring, etc.).
3) Design your pipeline workflows and processes to allow reusability and standardization across the organization, while also enabling the flexibility to support the needs of specific teams/apps.
4) Design your pipeline in a way that enables fast rollout: easy onboarding of thousands of applications across hundreds of teams.
5) Incorporate security access controls, approval gates and compliance checks as part of your pipeline, and make them standard across all releases.
6) Ensure your architecture supports HA, DR and business continuity.
DOES SFO 2016 - Steve Mayner - Transformational Leadership (Gene Kim)
Adopting DevOps principles and practices frequently leads enterprises down a path to significant cultural and organizational change. This creates a real barrier for DevOps advocates to overcome, since leading researchers, sparked by John Kotter's claim of a 70% failure rate for organizational change, have confirmed through scientific study that these types of transformative efforts are more likely to fail than to succeed. Fortunately, all is not lost! The scientific community has also uncovered a powerful tool that consistently increases the success rate of transformational change. The secret weapon is leadership… but not just any style of leadership…
In this session, Steve Mayner will share the research he has uncovered in his own doctoral journey on the power of transformational leadership to drive successful organizational change. How enterprise leaders cast vision, encourage individual growth, demonstrate authenticity, and challenge followers to maximize their creative potential can have a greater influence on the success
DOES16 San Francisco - Heather Mickman - DevOps At Target: Year 3 (Gene Kim)
DevOps At Target: Year 3
Heather Mickman, Sr. Director Target Technology Services, Target
Description:
DevOps at Target: journey to microservices and cloud native architecture
DevOps Enterprise Summit San Francisco 2016
DOES SFO 2016 San Francisco - Julia Wester - Predictability: No Magic Required (Gene Kim)
Predictability: No Magic Required
Julia Wester, Improvement Coach, LeanKit
When you merge onto a freeway and are stuck in bumper-to-bumper traffic, you know right away that it's going to be a long trip. Similarly, you can predict the cycle time of your work before it is finished, without time-consuming, and often incorrect, estimation. Sound like magic? Fortunately for all of us, it's not.
This talk explains the basics of queueing theory; demonstrates how allocation models and pull policies affect the cycle time of work; discusses the effects of batch size and variability on queues; and teaches how to successfully monitor your workflow to get leading indicators of effectiveness. With this information, you'll be doing better forecasting, and achieving better outcomes, in no time!
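The core of that queueing math is Little's Law: average cycle time = WIP / throughput. A tiny worked example with illustrative numbers:

```python
# Little's Law: cycle_time = work_in_progress / throughput.
wip = 24            # items currently in progress on the board
throughput = 8      # items finished per week
print(f"Average cycle time: {wip / throughput:.1f} weeks")          # 3.0 weeks

# A pull policy that caps WIP shrinks cycle time without anyone working faster:
capped_wip = 12
print(f"With a WIP limit of {capped_wip}: {capped_wip / throughput:.1f} weeks")  # 1.5 weeks
```

This is why WIP limits, not heroics, are the lever for predictability.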
DOES SFO 2016 - Paula Thrasher & Kevin Stanley - Building Brilliant Teams (Gene Kim)
After an initial DevOps transformation as a company, we had to grapple with how to scale and grow the talent and workforce to build a NextGen DevOps-minded company of 18,000+ people. We have built a number of programs to expand awareness, encourage growth mindsets, and drive workforce development. We will share the different ways we are working to "Build Brilliant Teams" to drive our DevOps transformations.
DOES16 San Francisco - David Blank-Edelman - Lessons Learned from a Parallel ... (Gene Kim)
Lessons Learned from a Parallel Universe
David N. Blank-Edelman, Technical Evangelist, Apcera
Just within the last ten or so years, we have seen at least two separate communities evolve at the crossroads of development and operations. The first—DevOps—grew up very much in public; the second matured sequestered within the halls of “special” companies like Google and Facebook and is only now starting to gain visibility and traction in the wider world. The DevOps and Site Reliability Engineering (SRE) communities barely speak, yet both have common ancestors and much to offer each other. Let’s look at what they have in common, how they differ, and what are the key things we can learn from both.
DevOps Enterprise Summit San Francisco 2016
Data Capture Market of 2014 - Navigating Competitive Landscape (Dmitri Khanine)
This session provides a deeper understanding of the 2014 data capture market and makes it easy for you to navigate the world of competing vendors, products and prices. We will also take a deeper look at the strengths and competitive advantages of Oracle Capture and Oracle Forms Recognition.
SQL vs. NoSQL. It's always a hard choice. (Denis Reznik)
This will be an interesting and sometimes fun session with a small demo. It will answer some of your questions and force you to think about new ones. It will not be very technical, so it's fine to choose a more technical session from the schedule :) But if you decide to come, I can assure you that you will not be disappointed. We will do a thought experiment with one famous high-load public website, look at the advantages and disadvantages of SQL and NoSQL databases, and choose the best database engine for it.
Oracle, in the 2014 edition of its OpenWorld, rolled out a new public cloud database service with its DBaaS offerings, but this is just one piece of each company's technology architecture. Businesses still need to create a private cloud and discover the driver for creating it; whether that driver is measured service, consolidation or rapid provisioning, finding it is the initial building block. This presentation will give you insight into how a private cloud is architected, why the service catalog is its most important brick, and how to benefit from this upcoming era of databases.
Learn what you need to consider when moving from the world of relational databases to a NoSQL document store.
Hear from Developer Advocate Glynn Bird as he explains the key differences between relational databases and JSON document stores like Cloudant, as well as how to dodge the pitfalls of migrating from a relational database to NoSQL.
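One of those key differences is the modeling shift: a normalized parent/child join becomes a single self-contained document. A small illustrative sketch; the field names are hypothetical, not from the talk:

```python
# Two normalized "tables" as they might come out of a relational query.
order_row = {"order_id": 1001, "customer_id": 7, "placed": "2017-03-02"}
item_rows = [
    {"order_id": 1001, "sku": "A-1", "qty": 2},
    {"order_id": 1001, "sku": "B-9", "qty": 1},
]

# Denormalize: embed the child rows so one read serves the whole order.
document = {
    "_id": f"order:{order_row['order_id']}",
    "customer_id": order_row["customer_id"],
    "placed": order_row["placed"],
    "items": [{"sku": r["sku"], "qty": r["qty"]} for r in item_rows],
}
# In Cloudant/CouchDB this document would be saved via the HTTP API; the
# pitfall to dodge is embedding arrays that grow without bound.
```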
Add Redis to Postgres to Make Your Microservices Go Boom! (Dave Nielsen)
Slides for talk delivered at PostgresOpen 2018 in San Francisco https://postgresql.us/events/pgopen2018/schedule/session/538-add-redis-to-postgres-to-make-your-microservice-go-boom/
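In the spirit of the talk's title, a minimal cache-aside sketch: serve hot reads from Redis, fall back to Postgres, and cache with a TTL. Connection details and the query are hypothetical:

```python
import json

import psycopg2
import redis

cache = redis.Redis(host="localhost", port=6379)
db = psycopg2.connect("dbname=appdb")

def get_product(product_id: int) -> dict:
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)           # cache hit: no DB round trip
    with db.cursor() as cur:
        cur.execute("SELECT name, price FROM products WHERE id = %s", (product_id,))
        name, price = cur.fetchone()
    doc = {"id": product_id, "name": name, "price": float(price)}
    cache.setex(key, 300, json.dumps(doc))  # expire after 5 minutes
    return doc
```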
Previously, when starting a new project, we simply had to pick one of the SQL databases available at the time, but over the last five years the situation has changed dramatically, and the choice has become much harder. SQL or NoSQL? Cloud or on-premises? If SQL/NoSQL, which one exactly? Or perhaps use both?
In this talk we will give a general overview of the data storage solutions available today and work out criteria for choosing among them.
Jumpstart: Using Aggregation for Analytics
Speaker: Ruben Terceño, Senior Solutions Architect, MongoDB
Level: 200 (Intermediate)
Track: Jumpstart
The MongoDB aggregation framework allows you to perform real-time analytics on your live operational data set. It's an important tool to understand when considering analytics options for your application. In this session we will give you an overview of basic aggregation functionality. You should walk away with an understanding of when to use the aggregation framework for your needs and how to leverage different functions for different purposes.
This is a Jumpstart session, held before the keynotes, designed to give you an overview of MongoDB aggregation basics so you can dive into more advanced sessions later in the day.
What You Will Learn:
- Discover the Aggregation Framework
- Understand the sweet spot for MongoDB Analytics
- Have fun crushing numbers!
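A minimal sketch of the framework the session introduces, using PyMongo; the collection and field names are hypothetical:

```python
from pymongo import MongoClient

orders = MongoClient()["shop"]["orders"]

# Pipeline stages run in order: filter first, then summarize, then sort.
pipeline = [
    {"$match": {"status": "shipped"}},
    {"$group": {"_id": "$region",
                "revenue": {"$sum": "$total"},
                "orders": {"$sum": 1}}},
    {"$sort": {"revenue": -1}},
]
for row in orders.aggregate(pipeline):
    print(row["_id"], row["revenue"], row["orders"])
```

Putting $match first lets the server use indexes before the grouping stage, which is the usual sweet spot for real-time analytics on operational data.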
Enkitec E4 Barcelona: SQL and Data Integration Futures on Hadoop (Mark Rittman)
There are many options for providing SQL access over data in a Hadoop cluster, including proprietary vendor products such as Oracle Big Data SQL on the Oracle Big Data Appliance, along with open-source technologies such as Apache Hive, Cloudera Impala and Apache Drill. Customers are using these to provide reporting over their Hadoop and relational data platforms, and looking to add capabilities such as calculation engines, data integration and federation, along with in-memory caching, to create complete analytic platforms. In this session we'll look at the options that are available, compare database vendor solutions with their open-source alternatives, and see how emerging vendors are going beyond simple SQL-on-Hadoop products to offer complete "data fabric" solutions that bring together old-world and new-world technologies and allow seamless offloading of archive data and compute work to lower-cost Hadoop platforms.
In the world of big data, legacy modernization, siloed organizations, empowered customers, and mobile devices, making informed choices about your enterprise infrastructure has become more important than ever. The alternatives are abundant, and the successful Enterprise Architect must constantly discern which new technology is just a shiny object and which will add true business value.
DataStax C*ollege Credit: What and Why NoSQL? (DataStax)
In the first of our bi-weekly C*ollege Credit series, Aaron Morton, DataStax MVP for Apache Cassandra and Cassandra committer, and Robin Schumacher, VP of Product Management at DataStax, will take a look back at the history of NoSQL databases and provide a foundation of knowledge for people looking to get started with NoSQL, or just wanting to learn more about this growing trend. You will learn how to know whether NoSQL is right for your application, and how to pick a NoSQL database. This webinar is C* 101 level.
Big Data for Oracle Devs - Towards Spark, Real-Time and Predictive Analytics (Mark Rittman)
This is a session for Oracle DBAs and developers that looks at cutting-edge big data technologies like Spark, Kafka, etc., and shows through demos how Hadoop is now a real-time platform for fast analytics, data integration and predictive modeling.
Similar to DOES SFO 2016 - Rich Jackson & Rosalind Radcliffe - The Mainframe DevOps Team Saves the Day
DOES SFO 2016 - Steve Brodie - The Future of DevOps in the Enterprise (Gene Kim)
DevOps adoption is growing rapidly, especially in the enterprise. What started as a "keeping up with the unicorns" grassroots movement within more forward-thinking companies has matured, and large, complex enterprises are now often at the forefront of DevOps innovation.
DOES SFO 2016 - Aimee Bechtle - Utilizing Distributed Dojos to Transform a Wo... (Gene Kim)
Aimee Bechtle of Capital One's Card Technology Advanced Engineering team will share how they have utilized Distributed Dojos to transform into a workforce skilled in DevOpsSec, public cloud and automation. Their Distributed Dojo strategy was formed when they needed to quickly and efficiently meet the challenges of a large cloud migration but were limited by local resources. Reaching out to a prominent retail chain, they learned how to draw from their engineering talent to form short-term, highly focused delivery teams. These teams now work cohesively across multiple locations to solve the challenges introduced when migrating such a large-scale, complex infrastructure to the cloud. They will explain how, within weeks, several Dojo teams were formed and releasing automation that not only supported Card Technology's DevOpsSec and cloud mission, but provided associates with new skills that could be proliferated throughout the company.
DOES SFO 2016 - Ray Krueger - Speed as a Prime Directive (Gene Kim)
Speed as a Prime Directive
Ray Krueger, Vice President of Engineering, Hyatt Hotels Corporation
Hyatt is transforming into a technology company that delivers digital experiences in the Hospitality industry. We're applying Continuous Delivery in order to achieve our goals faster. In the process, we are simplifying and abstracting legacy environments and building a hospitality technology platform.
DOES SFO 2016 - Kevina Finn-Braun & J. Paul Reed - Beyond the Retrospective: ... (Gene Kim)
At DOES15, we presented the work we'd done at Salesforce to take their SRE teams to the "blameless cloud." We worked with various roles in the SRE teams so they could start asking the right questions about failure, and through the postmortem and retrospective process, begin to make lasting changes in _how_ Salesforce worked with and remediated identified failures.
But DevOps espouses less siloed thinking and more shared responsibilities, so we found postmortems within the SRE organization weren't enough. As Salesforce was moving toward a model of "service ownership," teams along the entire software delivery value stream needed to start to understand their roadblocks to remediation and what aspects of the complex system they worked in were impeding their ability to "own their service."
We'll discuss the second phase of our work in helping these operations _and product_ teams gain a deeper understanding of service ownership, and why just "DevOps'ing it up" wasn't quite enough on its own to help. Plus, we'll introduce an expanded model from last year's talk that incorporates human factors and complexity theory. These additions helped prime the teams to more effectively grapple with the challenges facing them on the road to true service ownership.
DOES SFO 2016 - Andy Cooper & Brandon Holcomb - When IT Closes the Deal (Gene Kim)
Equifax powers the financial future of individuals and organizations around the world. Using the combined strength of unique trusted data, technology and innovative analytics, Equifax has grown from a consumer credit company into a leading provider of insights and knowledge that helps its customers make informed decisions.
Delivering on that trust requires both business and technical operations excellence. Faced with the growing challenges of the modern marketplace, the Equifax IT organization embarked on a top-to-bottom cultural and technical transformation. This presentation will outline how the Equifax IT team has taken steps towards transforming itself into a nimble, efficient and internally-capable organization. Topics will include key management lessons learned, budget realignment, creating partnerships across organizational boundaries and strategic projects to focus the organization's transformation efforts. The early results? IT is no longer viewed as a liability to the business; instead, IT is now an asset – a strategic partner that is actively helping to close deals.
DOES SFO 2016 - Courtney Kissler - Inspire and Nurture the Human SpiritGene Kim
Joining another enterprise retailer and discovering similarities and differences with how DevOps is being adopted has been an extremely interesting experience. I will share what I’ve learned so far and how the Point of Service team is practicing lean techniques, optimizing delivery of value and measuring outcomes to enable continuous improvement.
DOES SFO 2016 - Matthew Barr - Enterprise Git - the hard bits Gene Kim
Source code: Just put it in git, right? Enterprise scale? Github!
But what about when you have a *lot* of source code? Thousands of repositories? No problem! Github Enterprise or Bitbucket Server to the rescue!
Now: Add PCI & SOX. Confidential information. Separation of concerns. Audit. SSO. Centralized SSH key management. DR. Geographic diversity.
This is the part where you roll up your sleeves and start doing the real work.
This talk starts where the vendors stop: discussing workflows to keep work moving, security and audit protections to ensure code integrity, and automation to connect to other enterprise systems.
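As one small, concrete flavor of those protections, here is a minimal sketch of a server-side pre-receive hook that rejects pushes touching audit-controlled paths. The branch name and path list are assumptions for illustration, not anything prescribed in the talk:

```python
#!/usr/bin/env python3
# Hypothetical pre-receive hook: block pushes that modify audit-controlled
# paths on a protected branch. Git feeds "oldrev newrev refname" lines on
# stdin; a nonzero exit rejects the whole push.
import subprocess
import sys

PROTECTED_REF = "refs/heads/main"               # assumed policy branch
RESTRICTED_PATHS = ("secrets/", "compliance/")  # hypothetical controlled dirs
ZERO_SHA = "0" * 40                             # Git's marker for a new branch

def changed_files(old: str, new: str) -> list[str]:
    out = subprocess.check_output(
        ["git", "diff", "--name-only", old, new], text=True
    )
    return out.splitlines()

for line in sys.stdin:
    old, new, ref = line.split()
    if ref != PROTECTED_REF or old == ZERO_SHA:
        continue
    for path in changed_files(old, new):
        if path.startswith(RESTRICTED_PATHS):
            print(f"rejected: {path} is audit-controlled; use the change process")
            sys.exit(1)
sys.exit(0)
```

The same shape extends naturally to the other "hard bits": checking commit signatures, enforcing reviewer metadata, or notifying downstream audit systems.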
DOES SFO 2016 - Sam Guckenheimer & Ed Blankenship "Moving to One Engineering ...Gene Kim
Microsoft has been on a transformation, both cultural and technical, consolidating its engineering systems into One Engineering System. Along the way we've had many learnings that we'll share from soup to nuts: adopting Git at scale, realigning our talent competencies, reorganizing, becoming data-driven, and delivering continuously through lots of automation and cloud adoption.
DOES16 San Francisco - Opal Perry - Technology Transformation: How Team Value...Gene Kim
Technology Transformation: How Team Values Boost Customer Value
Opal Perry, Divisional CIO, Claims, Allstate Insurance
At Allstate, the largest publicly held personal lines property and casualty insurer in America, we constantly innovate for the good of our customers. It’s part of who we are and the legacy we’ve been building since 1931. Recently, we set about recasting the organization's technical and engineering discipline to make it core to the company, and moving technology up the value chain. But technology is just one piece of the transformation. Opal will discuss how an explicit focus on culture and values, together with new ways of working, empower product teams and bring valuable technology to customers with greater speed and agility.
DOES16 San Francisco - Dominica DeGrandis - Time Theft: How Hidden and Unplan...Gene Kim
Time Theft: How Hidden and Unplanned Work Commit the Perfect Crime
Dominica DeGrandis, Director, Training & Coaching, LeanKit
Invisible work competes with known work. Invisible work blindsides people, leaving teams unaware of mutually critical information, until it’s too late.
Married to this problem is the question: how does one plan for, or allocate capacity for, the invisible? It’s tough to analyze something you can’t see. Incognito work doesn’t show up in metrics. Hidden work stalls and blocks important priorities and masks dependencies. Risk accumulates from work started late and delivered late.
The solution is to put conditions in place that allow unplanned work to be seen and measured – particularly high-risk work involving far-reaching decisions. This talk shows you how to do just that.
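As a toy illustration of "seen and measured" (assuming work items exported from a board carry an origin tag, which is my assumption, not the speaker's), even a few lines of Python make the share of unplanned work visible:

```python
# Toy illustration: once every work item is tagged with its origin,
# the proportion of unplanned work stops being invisible.
from collections import Counter

work_items = [  # hypothetical items exported from a kanban board
    {"id": 101, "origin": "planned"},
    {"id": 102, "origin": "unplanned"},
    {"id": 103, "origin": "unplanned"},
    {"id": 104, "origin": "planned"},
    {"id": 105, "origin": "unplanned"},
]

counts = Counter(item["origin"] for item in work_items)
total = sum(counts.values())
for origin, n in sorted(counts.items()):
    print(f"{origin}: {n} ({n / total:.0%})")
```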
DevOps Enterprise Summit San Francisco 2016
DOES16 San Francisco - Marc Ng - SAP’s DevOps Journey: From Building an App t...Gene Kim
SAP’s DevOps Journey: From Building an App to Building a Cloud
Marc Ng, Cloud Infrastructure Engineering & Automation, SAP
SAP has been using a DevOps & Continuous Delivery approach for building its web and mobile apps for several years, and is now building and running a global cloud at the scale needed to support the digital transformation needs of its customers. This talk recaps the story of how SAP originally adopted DevOps practices before moving on to describe how the Cloud Infrastructure Services team is building and operating its 3rd generation cloud automation system using microservices, containers and open-source software.
DevOps Enterprise Summit San Francisco 2016
DOES16 San Francisco - Charles Betz - Influencing Higher Education to Create ...Gene Kim
Influencing Higher Education to Create the Future DevOps Workforce
Charles Betz, Coordinator, Minnesota State Digital Curricula Initiative
"Where will we find the talent?"
The feedback loops are slow for higher education, and institutions are only now beginning to respond to the opportunities of DevOps. How can we accelerate this process?
This fast-paced talk will cover both macro- and micro-scale efforts. Over the summer, 11 faculty from Minnesota teaching colleges worked with industry thought leaders to draft a report, “Digital Curricula: Toward next-generation IT education.” The report (including a survey of the current digital workforce) compiled hundreds of learning objectives from leading digital and DevOps practices, for instructors and commercial trainers around the world to use in course development.
This report (free and sponsored by the Advance-IT Center of Excellence in the Minnesota State University System) is being distributed this October to hundreds of computing and IT faculty across the 6th-largest education system in the U.S. and will be presented here for the first time to an industry audience.
As a worked example at the course level, the University of St. Thomas offers a survey course on IT delivery, using a “flipped model” with recorded lectures and experiential labs. An open source, 8-node, software-defined virtual cluster based on open technologies is used to illustrate continuous delivery, infrastructure automation, and Agile concepts for the course’s 12 open source lab sessions, as well as collaborative topics such as product management, work management, and operations. Come hear discussion of the motivations, teaching philosophy, technical practices, and results of this pioneering course.
DevOps Enterprise Summit San Francisco 2016
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas struggle to keep up with the competition. Fostering a culture of innovation, however, takes real work: vision, leadership and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at every stage.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities, spread across more than 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerabilities and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
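To make the DBOM idea concrete, here is a minimal sketch of capturing a deployment bill of materials as JSON. The record format, service name, and artifact content are illustrative assumptions, not the speakers' or any vendor's actual schema:

```python
# Minimal DBOM sketch: pin what was deployed, where, and when, with each
# artifact identified by a content hash so the record is tamper-evident.
import hashlib
import json
from datetime import datetime, timezone

def artifact_record(name: str, content: bytes) -> dict:
    # Record each deployed artifact by name and SHA-256 of its content.
    return {"name": name, "sha256": hashlib.sha256(content).hexdigest()}

dbom = {
    "deployment": "payments-service",  # hypothetical service
    "environment": "production",
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "artifacts": [artifact_record("app.jar", b"example build output")],
}
print(json.dumps(dbom, indent=2))
```

Captured at deploy time and stored immutably, a record like this gives auditors and responders a precise answer to "what exactly is running in production?"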
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
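To make the "double key" idea concrete, here is a conceptual sketch in Python: the content key is wrapped under two independent keys, so neither holder can recover it alone. This illustrates the concept only; it is not Microsoft Purview's actual DKE implementation:

```python
# Conceptual sketch of double-key wrapping using the `cryptography` package.
from cryptography.fernet import Fernet

customer_key = Fernet(Fernet.generate_key())  # held only by the customer
service_key = Fernet(Fernet.generate_key())   # held by the cloud service

data_key = Fernet.generate_key()              # per-document content key
ciphertext = Fernet(data_key).encrypt(b"confidential document")

# Wrap the content key under both keys: neither party can unwrap it alone.
wrapped = service_key.encrypt(customer_key.encrypt(data_key))

# Recovery requires both parties, unwrapping in reverse order.
recovered_key = customer_key.decrypt(service_key.decrypt(wrapped))
assert Fernet(recovered_key).decrypt(ciphertext) == b"confidential document"
```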
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Epistemic Interaction - tuning interfaces to provide information for AI supportAlan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -...DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
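As a taste of the notebook workflow, here is a minimal sketch assuming the pypowsybl binding is installed (e.g. via pip): loading a bundled test network and running an AC power flow takes only a few lines.

```python
# Load a bundled sample network and run an AC power flow on it.
import pypowsybl as pp

network = pp.network.create_ieee14()   # bundled IEEE 14-bus test network
results = pp.loadflow.run_ac(network)  # run an AC power flow

print(results[0].status)                      # convergence status
print(network.get_buses()[["v_mag"]].head())  # solved bus voltage magnitudes
```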
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
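For a sense of how light such a notification can be, here is a minimal sketch using Slack's standard incoming-webhook API. The webhook URL and message text are placeholders, not Sidekick Solutions' actual integration:

```python
# Post a notification to a Slack channel via an incoming webhook.
import json
import urllib.request

WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

def notify(text: str) -> None:
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

notify("New approval request in Bonterra Impact Management")
```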
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
When stars align: studies in data quality, knowledge graphs, and machine lear...
DOES SFO 2016 - Rich Jackson & Rosalind Radcliffe - The Mainframe DevOps Team Saves the Day
1. The Mainframe DevOps Team Saves the Day
Rich Jackson – Principal Systems Engineer – Walmart
Rosalind Radcliffe – Distinguished Engineer – IBM
DevOps Enterprise Summit – San Francisco, CA – Nov. 9, 2016
2. Who is Walmart: Key Facts
• Established in 1962
• 2.3 million associates
• Serving 260 million customers a week
• 11,500+ retail units under 63 banners in 28 countries
• eCommerce sites in 11 countries
• $482+ billion in sales
3. Who we are
Rich Jackson – Principal Systems Engineer at Walmart Technology
Rosalind Radcliffe – Distinguished Engineer and Chief Architect for DevOps at IBM
4. Background: Inventory Management Innovation
• Inventory – it’s a big deal
• Small Batches
• Out of Stock vs. Carrying Inventory
• Retail Link – Share the information
• Differentiator – …and a game changer
• Overhaul for the 2010s
5. The Problem: Caching
• Web server layer and cache layer
• Session State
• Appliances – the appeal
• Timeline and other solutions
• “In the middle of difficulty lies opportunity”
6. The Solution: Caching Service
• Skunkworks project
• Minimize Developer burden
• Stash and retrieve data… no biggie
• Non-functional requirements were the focus
• z/OS, CICS, & VSAM
• Assembler & COBOL
7. The Solution: Resistance
• Preconceived Notions
• If you can break it, don’t use it
• Can you handle 100 TPS?
• 500? 1000? 2000? 4000?
• Ugh…. Okaaay
8. The Results: Success
• Successful go-live
• Dev team is now a big advocate
• Still in production today
• ~21 Billion requests with no disruption
10. More Services
• Object Stores – Key/Value object store with a rich feature set and an array of DBMS-like capabilities
• ID Management – Creating web service IDs; reset and resume RACF IDs
• Queues – Create new MQ queues and queue remote definitions
• InfoSec/Crypto – Various cryptographic functions over HTTP
11. Git Some Services
• Enterprise Cache – Key/Value store for transient object caching
• FAM – Key/Value object store with a rich feature set and an array of DBMS-like capabilities
• zUID – Unique ID generator over HTTP
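The services on these last two slides are all fronted by HTTP, which keeps consumers thin. As a purely hypothetical sketch (the real endpoints are Walmart-internal and not published in the deck), a client of the enterprise cache and the zUID generator might look like this:

```python
# Hypothetical thin client for HTTP-fronted mainframe services; the base URLs
# and paths are placeholders illustrating the model the deck describes.
import urllib.request

CACHE_BASE = "https://cache.example.internal"  # placeholder endpoint
ZUID_BASE = "https://zuid.example.internal"    # placeholder endpoint

def cache_put(key: str, value: bytes) -> None:
    # Stash a value under a key in the enterprise cache.
    req = urllib.request.Request(f"{CACHE_BASE}/cache/{key}", data=value, method="PUT")
    urllib.request.urlopen(req)

def cache_get(key: str) -> bytes:
    # Retrieve a previously stashed value.
    with urllib.request.urlopen(f"{CACHE_BASE}/cache/{key}") as resp:
        return resp.read()

def new_zuid() -> str:
    # Ask the zUID service for a fresh unique identifier.
    with urllib.request.urlopen(f"{ZUID_BASE}/zuid") as resp:
        return resp.read().decode()
```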