This document discusses the future of IT and cloud computing. It describes how cloud computing is the latest in a series of disruptive innovations in IT, following mainframes, client-server systems, and the internet. The document outlines how cloud computing enables new capabilities like automated deployment, massive scale through resource pooling, and access from anywhere through broad networks. It also discusses how cloud supply chains have become more complex and how security must be managed throughout these extended ecosystems. Finally, it summarizes that cloud computing is driving the industrialization of IT and that automated testing and quality control are essential to improve productivity.
6. Your job
Let's assume that you work with IT: as an IT manager, systems or solutions architect, QA, or any other IT role. Or you are involved in procuring IT, or as an auditor, or on the legal side of IT.
8. How did we get here?
Each of these steps was a disruptive innovation:
• 1970-80s: Mainframe
• 1990s: Client/Server
• 2000s: Internet
• 2010s: Cloud, Social, Mobile
9. Disruptive innovations
Characteristics
• Not as good (initially)
• Much cheaper
• Addresses ‘over-served’ customers
• Rapidly improving
• Eventually drives the original out of the market
Examples
• Wikipedia
• PC
• Internet
• Cloud computing
https://en.wikipedia.org/wiki/Disruptive_innovation
14. IT is getting more complicated
• Moore’s law
• More technology
• More components
• More programming languages
• More interfaces and devices
• More pervasive IT
• More threats
• More brainpower required
• More productivity required
24. CCM in a simple supply chain
IaaS → SaaS → Consumers (“pass through compliance”)
25. Examples of the CSA GRC Stack components
• CCM AAC-02: Independent reviews and assessments shall be performed at least annually […]
• CAIQ AAC-02.3: Do you conduct regular application penetration tests of your cloud infrastructure as prescribed by industry best practices and guidance?
• CloudAudit: http:/…/cloudaudit/org/cloudsecurityalliance/guidance/AAC-02.3
• CTP: "It is 11 pm, when was the last pentest on this application done?"
26. The Cloud, what is it?
Cloud computing is a service-based delivery model (outsourcing) for IT, with 5 essential characteristics leading to new business value (and new risk).
28. 5 essential characteristics lead to new value and new risk
• Resource pooling. Multiple customers
• On-demand self-service. Unilateral provisioning, programmable infrastructure
• Broad network access. Network and client
• Rapid elasticity. Speedy provisioning and deprovisioning
• Measured service. Pay per use
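Rapid elasticity and measured service become concrete once scaling is expressed as code rather than as a purchase order. A minimal, illustrative Python sketch; the capacity numbers are invented for illustration and not taken from any provider:

```python
import math

def desired_instances(requests_per_sec, capacity_per_instance=100, minimum=1):
    """Rapid elasticity as code: size the fleet for the current load.
    Scaling back down is what makes measured service (pay per use)
    cheaper than owning peak capacity year-round."""
    return max(minimum, math.ceil(requests_per_sec / capacity_per_instance))

print(desired_instances(950))   # peak traffic: 10 instances
print(desired_instances(0))     # quiet night: scale down to the minimum, 1
```

In a real deployment this function would feed an autoscaling API; the point here is only that provisioning decisions become unilateral and programmable.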
32. Feature velocity through continuous delivery
Number of deployments (source: “The Phoenix Project”, 2012)

Company            | Deploy Frequency | Deploy Lead Time
Amazon             | 23,000/day       | Minutes
Google             | 5,500/day        | Minutes
Netflix            | 500/day          | Minutes
Twitter            | 3/week           | Minutes
Typical enterprise | 1 per 9 months   | Months

At higher deploy frequency, reliability increases
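The size of the gap in the table above is easy to underestimate. A quick back-of-the-envelope calculation in Python, treating "1 per 9 months" as roughly one deploy every 270 days:

```python
amazon_per_day = 23_000
enterprise_per_day = 1 / 270   # one deploy per ~9 months (~270 days)

ratio = amazon_per_day / enterprise_per_day
print(f"Amazon deploys roughly {ratio:,.0f} times more often")  # ~6,210,000x
```

A difference of six or seven orders of magnitude is not a tuning problem; it is a different way of working.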
33. From code to production
• Inputs: source code, external libraries, and a trusted base OS
• Source code goes into a repository (e.g., GitHub)
• A build and test server (e.g., Jenkins, CircleCI) runs static and dynamic testing, including security testing
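The flow above (code plus libraries into a repository, then an automated build and test server, then production) can be sketched as a chain of gates. This plain-Python illustration invents the stage names and the toy checks; it is not tied to Jenkins or CircleCI:

```python
def run_pipeline(source):
    """Run each gate in order; any failure stops the deploy.
    The stages mirror the slide: build, static analysis,
    dynamic tests, security tests."""
    stages = [
        ("build", lambda s: "syntax error" not in s),
        ("static analysis", lambda s: "TODO: remove password" not in s),
        ("dynamic tests", lambda s: s.strip() != ""),
        ("security tests", lambda s: "eval(" not in s),
    ]
    for name, check in stages:
        if not check(source):
            return f"failed at {name}"
    return "deployed"

print(run_pipeline("print('hello')"))    # deployed
print(run_pipeline("eval(user_input)"))  # failed at security tests
```

The design point is that every deploy passes the same automated gates, which is what makes the high deploy frequencies in the previous slide safe.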
34. Old School Auditor shock
• Testing in production?
• No patching of servers?
• No weekly CAB meetings?
35. Lean production & the three ways of DevOps
1. Systems thinking: reduce Muda
2. Rapid feedback loops: Jidoka
3. Continuous improvement: Kaizen, Chaos engineering
36. Elements of Cloud Security
Infrastructure
• Controls instead of ownership
• Hyper-segregation and automated deployment
Data
• Encryption
• Key management at scale
Application
• Ecosystem includes Security as a Service
• DevSecOps and architect for failure
User
• Federated IDM
• Account and role segregation
Governance
• Contracts, SLA, exit plan
• Continuous monitoring and logging
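"Key management at scale" in the Data bullet usually means envelope encryption: every object gets its own random data key, and only those small data keys are wrapped under a master key, so the master key never touches bulk data. A structural sketch in Python; the XOR "cipher" is a deliberate placeholder for illustration only, a real system would use an authenticated cipher such as AES-GCM, typically via a cloud KMS:

```python
import os

def xor_bytes(data, key):
    """Placeholder cipher for illustration only. A real system would
    use an authenticated cipher such as AES-GCM."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def envelope_encrypt(plaintext, master_key):
    """Envelope encryption: a fresh data key per object; only the
    small data key is wrapped under the master key. This is how key
    management scales to millions of objects."""
    data_key = os.urandom(16)
    ciphertext = xor_bytes(plaintext, data_key)
    wrapped_key = xor_bytes(data_key, master_key)
    return wrapped_key, ciphertext

def envelope_decrypt(wrapped_key, ciphertext, master_key):
    data_key = xor_bytes(wrapped_key, master_key)
    return xor_bytes(ciphertext, data_key)

master = os.urandom(16)
wrapped, ct = envelope_encrypt(b"customer record", master)
assert envelope_decrypt(wrapped, ct, master) == b"customer record"
```

Rotating the master key then only requires re-wrapping the small data keys, not re-encrypting every stored object.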
37. CCSK Course Structure
1. Intro to cloud computing
• NIST definitions
• Essential characteristics
• Service models
• Deployment models
2. Infrastructure security for cloud
• Securing base infrastructure
• Management plane security
• Securing virtual hosts & networks
• IaaS, PaaS, SaaS security
3. Managing cloud security & risk
• Risk & governance
• Legal & compliance
• Audit
4. Data security for cloud
• Data governance
• Cloud data architectures
• Data security & encryption
• CASB and data loss prevention
• BC/DR
5. Securing cloud applications, users, & related technologies
• Application security
• Identity & access management
• Related technologies
6. Cloud security operations
• What to look for in a cloud provider
• Security as a Service
• Incident response
38. Summary
• Cloud computing is just part of the industrialization of IT
• Automated testing and quality control are essential to move to the next level of productivity
My name is Peter van Eijk, and I am one of the world’s most experienced independent cloud computing trainers.
Let me talk to you about why cloud computing is the future of IT, and why it is the future of your job in IT. IT has always been a pretty dynamic environment, but cloud computing may well be the biggest change in the industry so far.
So what do you think? Is cloud computing the next wave in information technology? Is it a load of marketing hype, or is it a jump in at the deep end? There is truth in all of these.
Personally, I think that cloud computing is primarily a state of mind, a way of thinking. And for some reason or another, this picture is funnier in Amsterdam.
Let’s assume that you work with IT. As an IT manager, systems or solutions architect, or any other IT role. Or you are involved in procuring IT or as an auditor or on the legal side of IT.
I will give you a new perspective on how to organize IT, and that view will allow you to figure out which part of your job will disappear, and which part of your job will grow.
So, cloud computing is a revolutionary change. Or is it? Maybe cloud computing is following the same rules as other maturing industries.
To understand the future, we must first understand the past.
The first commercial applications of IT were on the mainframe, a single computer filling a large room. There was one department running it, and it was serving an entire company. It had lots of people managing it.
This is the kind of hardware that I wrote my first computer program on in 1973.
Then we move to client server, in the eighties, where basically every department had a computer. And each computer had its own system manager. So ownership changed a lot here, as well as the way in which decisions were made on computing.
Then came the PC, and a little later: the internet. One of the things that happened there was that companies no longer owned all the computers that users worked on. We moved stuff to customers that worked on their own computers. People at home owned them. Once again a shift in the way ownership and responsibility for it was organized.
Then we get the breakthrough of cloud computing. Where companies no longer even own the servers that they work on.
One way of looking at this whole shift is looking at who runs it, who pays the electricity bill? Is it a central department? Is it each business unit? Is it the user? Or is it totally outsourced to another company?
In a few minutes we will look at how the amount of people changes that we need to run one computer. Just remember that it took a whole team to manage a single mainframe computer.
Now each of these steps: mainframe, client/server, PC, cloud, is a disruptive change.
It is a big step change, it is not a gradual change. It is a change that really brings us to a different level of understanding and management.
And disruptive change is not a trendy marketing word, it is a solid business innovation concept developed by Harvard Business School Professor Clayton Christensen.
According to Christensen, and I am simplifying here, a disruptive innovation is something that is not as good, initially, as the thing it replaces. But it is much cheaper, so it appeals to people who were “overserved” by the original product, or who could not afford it at all.
And because of this, there is a good business model, and the new provider can afford to innovate further.
Which means it can be rapidly improving. And when it is rapidly improving, it will eventually overtake the original and replace it and drive it out of the market.
Let’s look at some examples here.
Like Wikipedia, to start with.
What was Wikipedia to begin with? It was nothing more than a handful of articles, nowhere near the completeness of the Encyclopaedia Britannica.
But it was good enough for a lot of people, and it was definitely a lot cheaper than the Encyclopaedia Britannica.
Fast forward a couple of years, and which is the one that survives? Is it Wikipedia or is it the Encyclopaedia Britannica?
The PC is another example. The original PC was a really simple machine. It had only 64 kilobytes of memory.
I remember that when I saw one, I was thinking: you cannot take this seriously. This is not a real computer. It can’t even multitask. 64 kilobytes? Are you joking? My bootloader is bigger than that.
But it has innovated and innovated and innovated, and we have now come to the point where your average server is a souped-up PC that is more powerful than the mainframe it replaces.
That is what we call a disruptive innovation.
Of course, then we have the internet replacing leased lines at a fraction of the cost.
And the next thing up: cloud computing.
In the beginning, it did not sound like much, but it has evolved and evolved, and right now we have to understand that cloud computing can actually be better, cheaper, and safer than the style of computing it replaces. A few years ago for example, I was measuring the uptime of cloud services, and back then they were already better than most corporate IT.
Disruptive innovation leads to tipping points (Malcolm Gladwell).
Each of these disruptive innovations has something you might call a tipping point. For a long time the world is on one side, and suddenly, like a seesaw, it tips to the other side.
Once it passes the threshold, most of it suddenly moves to the other side.
You go like, hey, what happened?
And if you are one of the people whose job is involved in that change, you can complain about it. You can say, it is a change for the worse, the mainframe was such a good machine.
You can start complaining about it, you can keep complaining about it, or you can try to find out what you have learned in your last job, and carry that over to the next generation of technology. I think you can do that, but that is a different story.
So the end result of this, for IT, is that it is not your daddy’s datacenter anymore.
You know, the big fortress on the hill, protected by a large moat and big walls. It is no longer there. This is no longer the model for IT.
Digital data is no longer confined to the walls of the data center.
It is the whole world that is being connected, and your systems are basically plugged into that giant worldwide grid of interconnected machines.
Now, what is making these developments happen?
The biggest driver for information technology is Moore’s law.
There we see that every two years, the number of components on a chip doubles.
As a result, the capacity in terms of processing and storage, increases by more than an order of magnitude every decade.
In my lifetime I have seen the price/performance ratio of computing, storage and bandwidth improve by a factor of 1 million.
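As a quick sanity check on those numbers: doubling every two years means five doublings per decade, a factor of 32, which is indeed more than an order of magnitude. Over roughly four decades that compounds to about a factor of one million, matching the improvement just mentioned. A minimal sketch of the arithmetic:

```python
# Moore's law: components (and price/performance) double every two years.
doublings_per_decade = 10 / 2            # five doublings in ten years
factor_per_decade = 2 ** doublings_per_decade
print(factor_per_decade)                 # 32.0: more than an order of magnitude

# Over roughly 40 years (say, 1973 to the cloud era):
factor_40_years = 2 ** (40 / 2)
print(int(factor_40_years))              # 1048576: about a factor of a million
```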
That drop in price goes hand in hand with an increase in the amount of IT stuff that we have to manage. And as you can imagine, that increase is also orders of magnitude.
And what does that mean for the number of people that have to manage this? Is that increasing by orders of magnitude as well?
No, it is not, because we don’t have that many people.
The productivity of managing that IT really has to go up.
And it actually does go up.
We started out with 10 people managing one computer, a little later it was one person managing one computer, then one person managing 10 computers. And now we see for specific functions people managing thousands or even tens of thousands of computers.
So that productivity has to go up.
At the same time, we need a lot more skills to run that stuff.
We have more technology, more components, more programming languages, more interfaces, more different devices, more existing software.
And our IT is having more of an impact on business and society. From the back office, it expanded into the front office, and from the front office, it moved to *being* the product for some companies. Think Netflix, Uber, Airbnb. But even ordinary companies cannot go back to manual order taking or production control.
And the threat landscape, in terms of the number of people trying to hack and abuse that technology, is also expanding rapidly. Not to mention *their* skill level.
That means that we need a lot more brain power to manage all that IT. A lot more brainpower.
And it is no longer feasible to run that IT entirely in your own shop.
We need a way to organize IT that does not force us to deploy a hundred skilled people for a one-hundred-person company.
Now let’s look at how that problem has been addressed historically.
Henry Ford allegedly invented, or at least made big, the assembly line method of automobile production, where a lot of people worked on very specialized jobs along that assembly line. And that made productivity go up massively.
In IT it is no different. We have specialists for just about everything: network engineers, server and storage admins, database administrators, front-end and back-end software developers, and so on. But the parallels go much deeper.
Ford, River Rouge plant
In this picture you can see how Ford’s assembly line was laid out at his River Rouge factory, built out between 1917 and 1928. It is still there.
The River Rouge plant was the first full assembly line cranking out cars. At one end of the factory, via the River Rouge, ships would come in with iron ore; at the other end, Model T Fords would come out. In between were the assembly line and everything needed to produce the car. The Ford company had its own steel mill, its own electricity plant, its own glass factory.
Everything was there on the River Rouge plant. At the time it was the largest industrial estate in the world.
But after the assembly line was introduced, other changes started happening.
Fast forward a couple of decades, and this plant is no longer exclusively owned by the Ford Motor Company. What happened was that the division of labor was extended beyond the company: components of the whole factory were outsourced.
The Ford company no longer owns the steel mill; it is outsourced to another company with more steel mills. It no longer owns the electricity plant; that is now part of a large electricity company.
Again we see specialization, this time beyond the borders of a specific company.
The assembly line has now become part of a supply chain of companies, each of which delivers to the next company downstream in the flow of goods.
Specialization of labor above the level of an individual company. And currently only twenty percent of the value of the car is produced by the car company, the rest is outsourced.
The same has happened with Apple.
The first 50 Apple I computers were soldered together in the kitchen of Steve Jobs’s parents.
The original Apple I computer was developed around 1976 by Steve Jobs and Steve Wozniak. The first order, from an electronics shop, was for 50 of them. They were soldered together on the kitchen table in Steve Jobs’s parents’ house.
Again, fast forward a few decades, and this is what the iPhone supply chain looks like now. It is produced around the entire world. Apple no longer manufactures these; it is all outsourced. The only things Apple does are design and marketing.
They are produced in Global supply chains.
But this also creates its own new problems.
In a supply chain we can no longer solve production glitches by having one department head talk to another department head.
We need to make the control over that supply chain more explicit.
I ran into an example of that last year, when I was delivering a course in Tunisia. At night in the hotel, I got chatting with a German guy.
Turns out he was there to audit the supply chain of a big German car manufacturer.
A company in Tunisia was producing a part of the electrical system of the car. Not even the whole electrical system.
The guy was out there to audit their process; he did not even look much at the product, only at the process. If the process is good, the product will be good.
And he was coming back every couple of months or so to audit that part of the supply chain.
He also explained that if this company stopped delivering, the assembly line in Germany would stop in 48 hours. So they had strict delivery agreements, and fines when they failed.
The upshot of all this is that today’s car is not only cheaper, comparatively, it is also more fuel efficient, more functional, safer, it is better in all dimensions. You can even get it in another color than black.
And this is how specialization of labor in a supply chain works.
And with cloud computing, it is a similar thing.
Cloud companies basically create an IT services supply chain.
And here is a diagram that shows what that might look like, simplified.
As a cloud customer, you consume services from a Software as a Service provider.
And a lot of these SaaS providers actually have their infrastructure sourced upstream.
Of course, the reality is a little more complicated than this. It looks more like this picture.
There you have a ton of SaaS providers; the average company uses a few hundred. Similarly, there are a lot of platform providers upstream. And then it is all hosted on Amazon.
No, just kidding, there is also hosting on Microsoft, Google and a few others. And some SaaS providers actually host it themselves.
So there you have it, the IT services supply chain.
The new model for delivering IT services. Better, cheaper, safer (potentially).
More complicated, more moving parts, more people to manage it. Different ways to control it.
Now once we have this, what happens?
Once we have this, new things become possible. New things that might include your future job role.
Once we have programmable infrastructure, we can do automated integration, automated testing, automated deployment. We can do DevOps.
Once we have flexible capacity at scale, we can do Big Data.
Once we have that stuff out there, on the network, it makes more sense to develop social and mobile applications. For individuals as well as for companies.
And as a consequence, it has never been cheaper, or easier, to start a software company. Or set up any company’s IT, for that matter.
I reckon that, if you put your mind to it, even with a team of just 2 or 3 people, you can start a company, including its development pipeline, in less than a day, just by procuring the right cloud services.
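To make “programmable infrastructure” a bit more concrete: the core idea is that you declare in code which servers should be running, and software reconciles reality towards that declaration. Here is a toy sketch in Python, under the assumption of a purely illustrative model; no real cloud API is used, and all names are made up. Real tools in this space include Terraform and the Kubernetes controllers.

```python
# Toy model of programmable infrastructure: declare a desired state,
# and let code work out what to start and what to terminate.

def reconcile(desired, running):
    """Return (instances to start, instances to terminate)."""
    to_start = desired - running
    to_stop = running - desired
    return to_start, to_stop

desired = {"web-1", "web-2", "db-1"}      # declared in code / config
running = {"web-1", "db-1", "old-batch"}  # what the cloud reports right now

start, stop = reconcile(desired, running)
print(sorted(start))  # ['web-2']
print(sorted(stop))   # ['old-batch']
```

The point is that capacity decisions become a computation over data, which is exactly what makes automated deployment and autoscaling possible.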
Let’s focus on one of these. The one that is really driving this IT transformation, from the business perspective, is the DevOps revolution.
Back in the old days, a new version of the software was released once a year. It took a while to implement. It had to be installed on all the machines, databases migrated and so on. Very complicated.
With DevOps, you automate much of the software development and deployment pipeline. This is called continuous integration, or even continuous delivery. And it allows companies such as Amazon, Netflix, and banks to deploy new services, or bits of services, literally thousands of times a day.
And that creates, for their business, enormous agility. Enormous feature velocity. Which allows them to innovate quicker and better, and adapt quicker to the market.
And as you remember from Darwin, it is not the strongest of species that survive, but the ones that can adapt to changing circumstances quicker. It is the same for companies.
Now if you are in IT, or related to IT, what are you going to do with this?
What should you do? This is the new world, this is inevitable. It is already there.
This is the new way that IT will be run. Of course, it will take another 10 years for this to reach 95% saturation, but there is only one way, one direction, that this is going.
We still have mainframes, but there is not a lot of work in mainframes.
We still have minicomputers, but there is not a lot of work in minicomputers anymore.
We still have PCs, but there is not so much work in managing these anymore. In fact we are often leaving that to the users.
Cloud computing is where the new action is.
If you want to succeed in that world, you have to know the nuts and bolts: how it is pieced together.
You have to know how to control suppliers, and everybody in that ecosystem upstream from you. And demonstrate to the people downstream that you are in control.
You have to know how to control all kinds of risks.
And you need to know where you fit in that ecosystem. Don’t be stuck.
And once you get it, there is a great new world out there.
Check source code in to a repository (e.g. GitHub).
The repository triggers the build server (e.g. Jenkins, or a service such as Codeship).
The build server runs the build (e.g. ‘make’), pulling in external components and patches.
The build server runs tests (unit, integration, performance, security).
Tests that fail are reported back to the developer (possibly also via GitHub).
The build server then pushes a runnable image (VM or container) to an image repository.
‘The other side of the wall’ picks up the images.
An autoscaler decides to instantiate more instances and orchestrates the workflow around that (notifying the load balancer, applying security policies, and so on). Example ecosystems here are Mesos and Kubernetes.
Once an image boots up (this is almost always a first boot), it updates itself with patches, runs a health check and security scan, and reports for duty to the orchestrator.
The load balancer can have additional intelligence for directing traffic to the various instances (and versions).
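The check-in, build, test, and push steps above can be sketched as a minimal pipeline simulation. This is a hedged illustration only: all function names are hypothetical stand-ins for real tools (the repository playing the role of GitHub, the build server the role of Jenkins), not an actual CI system.

```python
# Minimal sketch of the CI flow described above: check-in -> build ->
# test -> push image, or notify the developer when tests fail.

def run_build(commit):
    # The build server compiles the code and bundles external components.
    return {"commit": commit, "artifact": f"app-{commit}.img"}

def run_tests(build):
    # Unit, integration, performance, and security tests would run here;
    # a real build server collects real results. We simulate "all green".
    failures = []
    return failures

def pipeline(commit):
    """One pass through the pipeline for a single check-in."""
    build = run_build(commit)        # triggered by the repository
    failures = run_tests(build)
    if failures:
        return ("notify-developer", failures)            # failing tests go back
    return ("push-to-image-repo", build["artifact"])     # ready for deployment

status, result = pipeline("a1b2c3")
print(status, result)  # push-to-image-repo app-a1b2c3.img
```

The essential property is that no human sits between the check-in and the deployable image; that is what makes thousands of deployments a day possible.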