This document outlines steps for evolving an application architecture from a "HelloWorld" starting point to production-ready. It covers: 1) starting with a basic UI; 2) factors to consider, such as the UI, directory server, and documents; 3) implementing business intelligence from the start; 4) preparing for higher volumes; and 5) using Scrum methodologies. The overall goal is a structured approach to application architecture design.
Creating Complete Test Environments in the Cloud: Skytap & Parasoft Webinar (Skytap Cloud)
By utilizing virtualization technology in the SDLC, specifically service virtualization and virtual dev/test labs, companies can increase test coverage in less time and ultimately produce better software faster.
Download this complimentary webinar from Skytap and Parasoft now and learn how to combine service virtualization with cloud-based dev/test environments.
Solving Today's Problems with Oracle Integration Cloud (Heba Fouad)
Oracle Integration Cloud Service Features and Highlights
Integrating SaaS and on-premises applications with ICS
Integrating SaaS and SaaS applications with ICS
Solving Cloud integration problems with ICS
presented by Heba Fouad
Fusion Middleware Specialist
●Overall introduction of Ichiba
Introduction
●Redis Cluster in Rakuten Ichiba
How we use Redis Cluster in Rakuten Ichiba
●R Framework
The challenge of updating a legacy system whose code is shared between multiple teams, using an in-house library on the Rakuten Ichiba frontend.
●Rakuten Catalog Platform- Classification Approach for 280,000,000 Ichiba items -
1. Taxonomy strategy (analysis and adoption)
2. Rakuten Catalog Platform classification flow: Ichiba item data -> Taxonomy (genre/tag/attribute management and development) -> Catalog (product master) -> Data governance system -> Data processing unit -> Auto classification (item information/images)
●How to reconstruct a million-user app
Describes why we decided to rewrite our app, what difficulties we faced, and how we created the new structure to ensure it is flexible, stable, and maintainable.
https://tech.rakuten.co.jp/
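The auto-classification step in the pipeline above can be sketched as a rule-based pass, the simplest possible baseline. The genre names and keyword rules below are invented for illustration; they are not the actual Ichiba taxonomy or classifier:

```python
# Toy auto-classification pass: map raw item titles to taxonomy genres
# with keyword rules. GENRE_RULES and classify_item are illustrative
# names, not part of the Rakuten Catalog Platform.

GENRE_RULES = {
    "electronics": ["laptop", "camera", "headphone"],
    "fashion": ["shirt", "sneaker", "dress"],
    "food": ["chocolate", "coffee", "rice"],
}

def classify_item(title: str) -> str:
    """Return the first genre whose keywords appear in the item title."""
    lowered = title.lower()
    for genre, keywords in GENRE_RULES.items():
        if any(kw in lowered for kw in keywords):
            return genre
    return "unclassified"   # would fall through to manual review in practice

if __name__ == "__main__":
    print(classify_item("Wireless Headphone X200"))  # electronics
```

A production system classifying 280 million items would replace the keyword rules with a trained model over item text and images, but the interface (item in, genre out) stays the same.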
A presentation by Andrei Yurkevich of Altoros at the Cloud Foundry Summit 2015. It describes the metrics needed to quantify the value that the Cloud Foundry PaaS brings to an organization.
Java SE is ideal for building lightweight microservices and those services are increasingly being deployed to the cloud. Cloud platforms are attractive deployment targets due to their high availability, affordability, ease of management, and access to services like object storage, messaging, and databases. And when well architected, Cloud Java apps exhibit a number of qualities like portability, updatability, configurability, composability, and scalability.
As organizations invest in DevOps to release more frequently, there’s a need to treat the database tier as an integral part of your automated delivery pipeline – to build, test and deploy database changes just like any other part of your application.
However, databases (particularly RDBMS) are different from source code and pose unique challenges to Continuous Delivery, especially in the context of deployments. Often, code changes require updating or migrating the database before the application can be deployed. A deployment method that works for installing a small database or a green-field application may not be suitable for industrial-scale databases. Updating the database can be more demanding than updating the app layer: database changes are more difficult to test, and rollbacks are harder. Furthermore, for organizations that strive to minimize service interruption to end users, zero-downtime database updates are a laborious operation.
Your DB stores your organization's most mission-critical and sensitive data (transaction data, business data, user information, etc.). As you update your database, you want to ensure data integrity, ACID guarantees, and data retention, and to have a solid rollback strategy in case things go wrong.
This talk covers strategies for database deployments and rollbacks:
• What are some patterns and best practices for reliably deploying databases as part of your CD pipeline?
• How do you safely rollback database code?
• How do you ensure data integrity?
• What are some best practices for handling advanced scenarios and backend processes, such as scheduled tasks, ETL routines, replication architectures, linked databases across distributed infrastructure, and more?
• How do you handle legacy databases alongside more modern data management solutions?
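One common pattern behind the first two bullets is versioned, reversible migrations: every schema change ships with an "up" and a "down" script, and the pipeline records which versions have been applied. A minimal sketch, using SQLite for demonstration; the migration names and table layout are illustrative, not tied to any particular tool:

```python
# Versioned, reversible migrations (the pattern behind tools like
# Flyway/Liquibase): apply pending "up" scripts in order, record each
# version, and roll back by replaying "down" scripts in reverse.
import sqlite3

# (version, up_sql, down_sql) triples; contents are illustrative.
MIGRATIONS = [
    ("001", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)",
            "DROP TABLE users"),
    ("002", "CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)",
            "DROP TABLE orders"),
]

def migrate(conn):
    """Apply all pending migrations, recording each applied version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT)")
    applied = {v for (v,) in conn.execute("SELECT version FROM schema_migrations")}
    for version, up, _down in MIGRATIONS:
        if version not in applied:
            conn.execute(up)
            conn.execute("INSERT INTO schema_migrations VALUES (?)", (version,))
    conn.commit()

def rollback(conn, steps=1):
    """Undo the last `steps` migrations using their down scripts."""
    applied = [v for (v,) in conn.execute(
        "SELECT version FROM schema_migrations ORDER BY version")]
    for version in reversed(applied[-steps:]):
        _, _, down = next(m for m in MIGRATIONS if m[0] == version)
        conn.execute(down)
        conn.execute("DELETE FROM schema_migrations WHERE version = ?", (version,))
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    migrate(conn)
    rollback(conn, steps=1)
```

Note that destructive changes (dropping a column holding real data) cannot be rolled back this simply; that is exactly why the talk treats database rollbacks as harder than application rollbacks.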
This presentation covers both the Cloud Foundry Elastic Runtime (known by many as just "Cloud Foundry") as well as the Operations Manager (known by many as BOSH). For each, the main components are covered with interactions between them.
Skytap & Parasoft Webinar: New Year's Resolution - Accelerate Your SDLC (Skytap Cloud)
In this webinar, co-hosted by Parasoft and Skytap, find out how to get your software lifecycle in shape for the New Year. You'll learn strategies for helping DevOps and Test collaborate in ways that make your SDLC leaner and more scalable.
Depending on their size and complexity, content management systems such as Sitecore can require various workflows and tools for DevOps management. The choice in processes largely depends upon the scale and depth of your DevOps projects.
Deploying DevOps strategies on Microsoft Azure makes it easy to convert your network, virtual machines, databases, and more from infrastructure into code, enabling you to increase speed and reduce risk.
We discussed the benefits of Sitecore DevOps on Microsoft Azure, including using Microsoft Azure and Visual Studio Team Services (VSTS) to:
-Automate the build-out of Sitecore environments
-Automate code and content deployment
-Use Azure Resource Manager templates, PowerShell, and VSTS to provision Sitecore environments
-Automate Sitecore installations
-Move your Sitecore databases into Azure SQL
Strangling the Monolith With a Data-Driven Approach: A Case Study (VMware Tanzu)
SpringOne Platform 2017
David Julia, Pivotal; Simon P Duffy, Pivotal
"The scene: A complex procedure cost estimation system with hundreds of unknown business rules hidden in a monolithic application. A rewrite is started. If our system gives an incorrect result, the company is financially on the hook. A QA team demanding month-long feature freezes for testing. A looming deadline to cut over to the new system with severe financial penalties for missing the date. Tension is high. The business is nervous, and the team isn’t confident that it can replace the system without introducing costly bugs. Does that powder-keg of a project sound familiar?
Enter Project X: At a pivotal moment in the project, the team changed their approach. They’d implement a unique, data-driven variation of the strangler pattern. They’d run their system in production alongside the legacy system, while collecting data on their system’s accuracy, falling back to the legacy system when answers differed. True to Lean Software development, they would amplify learning and use data to drive their product decisions.
The end result: An outstanding success. Happy stakeholders, business buy-in to release at will, a vastly reduced QA budget, reusable microservices, and one heck of a Concourse continuous delivery pipeline. We achieved all of this, while providing a system that was provably better than the legacy subsystem we replaced.
This talk will appeal to engineers, managers, and product managers.
Join us for a 30 minute session where we review this case study and learn how you too can:
Build statistically significant confidence in your system with data-driven testing
Strangle the Monolith safely
Take a Lean approach to legacy rewrites
Validate your system’s accuracy when you don’t know the legacy business rules
Leverage Continuous Delivery in a Legacy Environment
Get Business and QA buy-in for Continuous Delivery
Articulate the business value of data-driven product decisions"
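The data-driven fallback at the heart of this strangler variation can be sketched in a few lines: call both systems, record disagreements, and keep the legacy answer authoritative until the data shows the new system is correct. Function and variable names are illustrative, not from the actual project:

```python
# Data-driven strangler fallback: run the new system alongside legacy,
# log every disagreement, and serve the legacy answer whenever they differ.

def estimate_with_fallback(request, legacy_fn, new_fn, mismatches):
    """Serve the new system's answer only when it agrees with legacy."""
    legacy_answer = legacy_fn(request)
    try:
        new_answer = new_fn(request)
    except Exception:
        new_answer = None          # a crash in the new system is a mismatch
    if new_answer != legacy_answer:
        mismatches.append((request, legacy_answer, new_answer))
        return legacy_answer       # fall back to the legacy result
    return new_answer

def accuracy(total_requests, mismatches):
    """Fraction of requests where the new system matched legacy."""
    return 1.0 - len(mismatches) / total_requests

if __name__ == "__main__":
    legacy = lambda r: r * 2
    new = lambda r: r * 2 if r != 3 else 7   # one deliberate bug
    seen = []
    results = [estimate_with_fallback(r, legacy, new, seen) for r in range(10)]
    print(results[3])                        # 6 (legacy answer served)
    print(round(accuracy(10, seen), 2))      # 0.9
```

The accumulated mismatch log is what lets the team build statistically significant confidence: once the measured agreement rate is high enough over real production traffic, the legacy fallback can be switched off.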
Software Engineering as the Next Level Up from Programming (Oracle Groundbrea...) by Lucas Jellema
Software engineering is programming with the added dimension of time: programs that can evolve and scale, and be maintained and operated by multiple people over a longer period of time. What does it take to do software engineering in a professional manner, beyond mere programming? As programmers, our main goal is to make IT work: to translate functional specifications into executable code. And sure, that is the least we can do. But we have more responsibility than this. We have to produce software that is robust and will reliably handle expected and unexpected cases. Software that is scalable and can handle expected and somewhat unexpected load gracefully, with minimal operating costs and in the greenest way possible. Software that is observable and manageable, and that can evolve with changing and new functional requirements and with changing technology. Software that will be legacy in the original, positive meaning of the word: that does not depend on the one big brain in our team or on the guy who has been around for three decades. Software that we know is good and can comfortably be modified in a controlled and productive way. We have to grow from excellent programmers into professional software engineers.
This session talks about what it takes to create our code with honor. It discusses automation at every level in the build, rollout, and monitoring of infrastructure (as code), platform, and application, using CI/CD pipelines and DevOps procedures and tools. It covers testing, before and during development as well as after each change anywhere in the system, for both functional and non-functional aspects; test-driven development, regression testing, and smoke testing are among the concepts discussed. The term "clean code" refers to code that is readable, testable, and maintainable. Through code analysis, peer reviews, and refactoring we constantly refine our software to be collectively adaptable.
The session demonstrates the concepts discussed with code samples in the context of cloud native programming. As software engineers, we have an obligation to society, to our peers and to ourselves to not only write software that does the job, but to create code that is good. Ours is a great and meaningful line of work, especially if we raise our game professionally to code with honor.
A Single Platform to Run All The Things - Kubernetes for the Enterprise - London (VMware Tanzu)
A Single Platform to Run All The Things - Kubernetes for the Enterprise - London
Ed Hoppitt
EMEA Lead Applications Transformation, VMware
28th March 2018
How to Successfully Load Test Over a Million Concurrent Users: STPCon Demo (Apica)
Does your company attract millions of visitors, users or even subscribers to your site or application? Whether you answered yes or no, it’s still a great idea to know what it takes to test 2+ million concurrent users, fast. In this presentation, you’ll get a first-hand, live walk-through of Apica Load Test doing a mega test of 2 million concurrent users.
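The mechanics of ramping up concurrent virtual users can be illustrated with a toy driver. A real test of two million users requires distributed load generators (which is what dedicated tooling provides), so this sketch only shows the ramp pattern, with a sleep standing in for the actual HTTP request:

```python
# Toy concurrency ramp in the spirit of a load test: launch virtual "users"
# in batches until the target concurrency has been exercised.
import asyncio

async def virtual_user(user_id, results):
    await asyncio.sleep(0)            # placeholder for an HTTP request
    results.append(user_id)

async def ramp(total_users, step):
    """Start `step` users at a time until `total_users` have run."""
    results = []
    for start in range(0, total_users, step):
        batch = [virtual_user(i, results)
                 for i in range(start, min(start + step, total_users))]
        await asyncio.gather(*batch)  # one ramp step at a time
    return results

if __name__ == "__main__":
    print(len(asyncio.run(ramp(100, 25))))  # 100
```

Stepwise ramps like this matter in practice because they reveal the load level at which response times start to degrade, rather than hitting the system with full concurrency at once.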
By talking about Microsoft's journey to Cloud cadence, this talk goes through all the DevOps practices such as Infrastructure as Code, CI/CD, Release Management and Hypothesis Driven Development.
It also introduces the impact of Docker and PaaS in DevOps.
Beyond DevOps: How Netflix Bridges the Gap (Josh Evans)
Operating a massively scalable, constantly changing, distributed global service is a daunting task. We innovate at breakneck speed to attract new customers and stay ahead of the competition. Simultaneously improving service quality and enabling rapid, continuous change seems impossible on the surface.
At Netflix, Operations Engineering is a centralized organization whose charter is to accomplish just that by applying high-leverage software engineering practices like continuous delivery, real-time analytics, and automation to solve operational problems. It's well established that many traditional IT Operations teams struggle to bridge the gap with software engineering, and Operations Engineering is no exception. And while DevOps as a construct seeks to address this gap, it doesn't go far enough: it does not explain how to bridge the gap, or even why it's important to do so.
In this talk we’ll use Netflix Operations Engineering as a case study to address these questions. We'll explore common challenges faced by operational teams and strategies to overcome them.
Powering Business Transformation with Oracle Exadata: a Capgemini Case Study (Capgemini)
What is the best way to get thousands of users from dozens of different countries who are used to local autonomy to buy into a global, centrally organized shared services system?
For Capgemini, Oracle Exadata Database Machine was the answer. The addition of Oracle Exadata to Capgemini’s global business intelligence financial system led to a fourfold decrease in reporting times and a 100 percent increase in reporting volumes and performance, thereby demonstrating how global shared services and processes can lead to dramatic increases in efficiency and automation.
Read this presentation to find out how you can use some of the best practices and lessons learned from Capgemini’s implementation of Oracle Exadata to enable transformation in your own organization.
https://www.capgemini.com/oracle/oracle-engineered-systems
DOES SFO 2016 - Avan Mathur - Planning for Huge Scale (Gene Kim)
Installing one CI server or configuring a deployment pipeline for a specific application might be easy enough. However, as enterprises look to scale their DevOps adoption and optimize their software delivery practices across the organization (to support additional teams, product lines, application releases, processes and infrastructure) -- software delivery pipeline(s) need to scale to support enterprise workloads.
For some enterprises, this means having a pipeline that can withstand the velocity and throughput of thousands of product releases, supporting tens of thousands of developers and distributed teams, hundreds of thousands of infrastructure nodes, multitudes of inter-dependent application components, or millions of builds and test-cases.
This scale poses unique challenges and implications for your pipeline design. This talk covers best practices for analyzing and (re)designing your software delivery pipeline – regardless of your chosen tool-set or technologies. Obtain tips and tools for ensuring your pipelines and DevOps infrastructure have the right architecture and feature-set to support your software production as it scales, while also ensuring manageability, governance, security, and compliance.
Learn best practices for how to:
1) Plan for scale: project the performance indicators/vectors you will need to scale across.
2) Design your pipeline and its supporting infrastructure and operations (data retention, artifact retrieval, monitoring, etc.).
3) Design your pipeline workflows and processes to allow reusability and standardization across the organization, while also enabling flexibility to support the needs of specific teams/apps.
4) Design your pipeline in a way that enables fast rollout and easy onboarding of thousands of applications across hundreds of teams.
5) Incorporate security access controls, approval gates, and compliance checks as part of your pipeline, standardized across all releases.
6) Ensure your architecture supports HA, DR, and business continuity.
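Point 1, planning for scale, usually starts with back-of-the-envelope projections of what the pipeline must absorb. A tiny sketch; all input figures are illustrative, not recommendations:

```python
# Back-of-the-envelope capacity projection for a delivery pipeline:
# how much artifact storage does a given build rate and retention
# window imply? Inputs below are illustrative placeholders.

def artifact_storage_gb(builds_per_day, artifact_mb, retention_days):
    """Project artifact storage (GB) for a build rate and retention window."""
    return builds_per_day * artifact_mb * retention_days / 1024

if __name__ == "__main__":
    # 5,000 builds/day, 50 MB average artifact, 30-day retention
    print(round(artifact_storage_gb(5000, 50, 30)))  # 7324
```

Running the same arithmetic across the other vectors named in the talk (builds, test cases, infrastructure nodes) gives early warning of which dimension of the pipeline will hit a ceiling first.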
Service-Level Objective for Serverless Applications (alekn)
Deploying commercial applications that meet their expected business needs is challenging due to the differences between how business goals are specified and how the system is evaluated. Furthermore, business goals are dynamic, requiring the deployment to change constantly over time. Such difficulties make it costly to maintain application quality, as the underlying infrastructure is not always fast enough to keep up with business changes. Nowadays, serverless opens a new approach to building applications. By abstracting out the deployment details, serverless applications can be implemented with minimal deployment effort. Serverless also reduces maintenance cost with auto-scaling and pay-as-you-go pricing. Such abilities lead us to believe that, by adopting serverless, we can build applications that meet and quickly adapt to business goals.
However, simply writing applications with serverless is not sufficient. Due to best-effort invocation mechanisms and the lack of application-structure awareness, serverless performance is highly variable and often fails to support applications with rigorous quality-of-service requirements. In this study, we aim to mitigate these limitations by coupling serverless deployment with business needs. In particular, we define a Serverless Service-Level Objective (SLO) interface that allows developers to describe their application structure and business goals in terms of software-level objectives. We implement an SLO enforcer, which uses this information in combination with system performance metrics to decide on a proper serverless deployment and resource allocation for meeting business goals. The Serverless SLO leverages a blueprint model, which allows developers to describe their application's architecture and runtime characteristics, to map the application description to serverless function deployments on top of Knative. We deploy our proposed system on kind, a tool for running Kubernetes clusters in local Docker containers, and evaluate it with different system configurations. Evaluation results show that SLO definition and enforcement help serverless applications use resources in accordance with business goals.
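The enforcement loop described above (compare observed performance metrics against the declared objective, then adjust the deployment) can be sketched as a single scaling decision. The SLO dictionary shape and function name are illustrative assumptions, not the study's actual Knative-based interface:

```python
# Minimal sketch of an SLO enforcement decision: compare an observed
# latency percentile against the declared objective and adjust the
# replica count within the allowed bounds.

def decide_replicas(current_replicas, observed_p95_ms, slo):
    """Scale up on SLO violation; scale down when latency is well under target."""
    target = slo["p95_latency_ms"]
    if observed_p95_ms > target:
        return min(current_replicas + 1, slo["max_replicas"])
    if observed_p95_ms < 0.5 * target:
        return max(current_replicas - 1, slo["min_replicas"])
    return current_replicas

if __name__ == "__main__":
    slo = {"p95_latency_ms": 200, "min_replicas": 1, "max_replicas": 10}
    print(decide_replicas(2, 250, slo))  # 3 (violation: scale up)
    print(decide_replicas(5, 50, slo))   # 4 (comfortable: scale down)
    print(decide_replicas(3, 150, slo))  # 3 (within band: hold)
```

A real enforcer would run this decision on a loop fed by live metrics and apply the result through the platform's autoscaling API rather than returning a number, but the objective-versus-observation comparison is the core of the idea.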
SAP on AWS: Big Businesses, Big Workloads, Big Time featuring Ingram-Micro - ...Amazon Web Services
In order to increase business agility and reduce costs, a large number of enterprise customers are moving their entire SAP landscapes, including their production environments, to AWS. Some examples of enterprise customers running their core businesses on AWS are BP, Kellogg’s, Brooks Brothers, AIG, and Ingram-Micro. In this session, hear how customers are running mission-critical workloads on AWS, and understand how we guide Fortune 50 companies as they rapidly adopt emerging technologies and accelerate greater innovation on AWS.
FaaS or not to FaaS. Visible and invisible benefits of the Serverless paradig...Vadym Kazulkin
When we talk about prices, we often talk only about Lambda costs. In our applications, however, we rarely use only Lambda. Usually we have other building blocks like API Gateway, data sources like SNS, SQS or Kinesis, and we store our data either in S3 or in serverless databases like DynamoDB or, more recently, Aurora Serverless. All of these AWS services have their own pricing models to look out for. In this talk, we will draw a complete picture of the total cost of ownership of serverless applications and present a decision-making checklist for determining whether to rely on the serverless paradigm in your project. In doing so, we look at cost aspects as well as other aspects such as understanding the application lifecycle, software architecture, platform limitations, organizational knowledge, and platform and tooling maturity. We will also discuss current challenges in adopting serverless, such as the lack of low-latency ephemeral storage, insufficient network performance and missing security features.
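The pay-per-use arithmetic behind the Lambda part of such a TCO calculation can be sketched as follows. The default rates are illustrative only (check current regional pricing), and the free tier is ignored:

```python
def lambda_monthly_cost(invocations: int,
                        avg_duration_ms: float,
                        memory_mb: int,
                        price_per_million_requests: float = 0.20,
                        price_per_gb_second: float = 0.0000166667) -> float:
    """Estimate monthly AWS Lambda compute cost: a per-request charge plus a
    charge per GB-second of allocated memory multiplied by execution time."""
    request_cost = invocations / 1_000_000 * price_per_million_requests
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    return request_cost + gb_seconds * price_per_gb_second
```

A full TCO picture would add the same exercise for API Gateway, DynamoDB, S3 and the other building blocks named above, each with its own pricing dimensions.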
Camunda Product Update – The present and the future of Process Automationcamunda services GmbH
Hear about the latest innovations in process automation from Camunda. Find out how our engineering team is delivering solutions for our customers’ biggest challenges from CTO Daniel Meyer.
Pivoting to Cloud: How an MSP Brokers Cloud Services RightScale
Many Managed Services Providers (MSPs) are looking to shift their cloud services offerings to encompass public and private cloud options. Learn how one MSP, Offis, uses RightScale to broker services across a variety of cloud providers as well as virtualized environments in order to serve the diverse needs of its customers.
Mail is received as a commodity from the cloud, as is Collaboration. However, in many client meetings we often hear the question: where are we heading with the hundreds of Notes applications? Which strategy is most effective and cost-efficient at the same time? Is cloud a practical answer? With a sound and proven methodology, Notes applications can be transformed into valuable web applications in the cloud. It turns out that the time for cloud platforms has come. A side view of large customer projects that are already transforming their Notes applications to the cloud, for example to IBM SoftLayer, is helpful. This track helps you understand the strategies being implemented and the costs and risks involved.
Mobile User Experience:Auto Drive through Performance MetricsAndreas Grabner
Believe it or not: 85% of mobile apps are removed after first use! In this presentation, given at the APM Meetup in Singapore in April 2015, I talked about the challenges, best practices and, especially, the metrics that help avoid this situation.
Key Points of the Presentation
The two key trends, the "Internet of Things" and "DevOps", play a big role in our lives when we talk about user experience, especially mobile user experience. In this presentation I tell you which metrics to use to make sure you deliver your ideas faster to your mobile end users while also ensuring the right quality and user experience, so that your users stay loyal and don't delete the mobile app after first use.
Scaling your application efficiently is key to achieving a good rate of return, and performance monitoring is an important tool to ensure you scale as expected.
Performance monitoring of single Node.js applications is relatively straightforward, with a variety of techniques and tooling options available to a developer. In this presentation, we will follow the journey of applying these techniques when scaling up to a clustered Node.js deployment in the cloud. We will show how to use freely available monitoring tooling and open-source solutions like appmetrics, Elasticsearch and Kibana to provide real-time monitoring and performance tracking for enterprise solutions. Come and learn how to keep on top of how your application is performing and find out about problems before they occur.
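A minimal stand-in for the kind of latency tracking such tooling performs, sketched in Python rather than Node.js for brevity (the class and its window size are illustrative, not part of any of the tools named above):

```python
from collections import deque

class LatencyMonitor:
    """Sliding-window latency tracker: the simplest version of what agents
    like appmetrics collect before shipping to Elasticsearch/Kibana."""
    def __init__(self, window: int = 100):
        # keep only the most recent `window` samples
        self.samples = deque(maxlen=window)

    def record(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)

    def p95(self) -> float:
        """95th-percentile latency over the current window."""
        ordered = sorted(self.samples)
        index = max(0, int(len(ordered) * 0.95) - 1)
        return ordered[index]
```

In a clustered deployment, each worker would run its own monitor and a dashboard would aggregate the per-worker percentiles.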
Realize 2022 MINO 7 year of implementation v0.1.pptxjakobkuhn
This presentation shows how MINO (an automotive Line Builder in China) has successfully implemented the use of Virtual Commissioning and also developed according workflows and training methods
The annual review session by the AMIS team on their findings, interpretations and opinions regarding news, trends, announcements and roadmaps around Oracle's product portfolio.
Challenges for IoT in Industrial Automation Lifecycle (>15 years)
Robust, highly available
Well supported
Closed
Diversity
Incremental changes
Small budgets
High data intensity
Security
IoT trackrecord (“we don’t want our competitor to know”)
USP IoT (“we already have that”)
Maintenance staff
Our technology has become smart and fast enough to make predictions and come up with recommendations in near real time. Machine Learning is the art of deriving models from our Big Data collections, harvesting historic patterns and trends, and applying those models to new data in order to respond rapidly and adequately to that data. This presentation will explain and demonstrate, in simple, straightforward terms and using easy-to-understand practical examples, what Machine Learning really is and how it can be useful in our world of applications, integrations and databases. Hadoop and Spark, real-time and streaming analytics, Watson and Cloud Datalab, Jupyter Notebooks, Oracle Machine Learning CS and the Citizen Data Scientists will all make their appearance, as will SQL.
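As a toy illustration of "deriving a model from historic data and applying it to new data", here is ordinary least squares in plain Python. It is purely illustrative and not tied to any of the products named above:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b: the simplest possible
    model derived from historic observations."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

def predict(model, x):
    """Apply the fitted model to a new data point."""
    a, b = model
    return a * x + b
```

Real Machine Learning pipelines differ in scale and model class, not in this basic fit-then-apply shape.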
The annual review session by the AMIS team on their findings, interpretations and opinions regarding news, trends, announcements and roadmaps around Oracle's product portfolio. This presentation discusses architecture trends, container technology, disruptive movements such as IoT, Blockchain, Intelligent Bots and Machine Learning, Modern User Experience, Enterprise Integration, Autonomous Systems in general and Autonomous Database in particular, Security, Cloud, Networking, Java, High PaaS & Low PaaS, DevOps, Microservices, Hybrid Cloud. This Oracle OpenWorld - more than any in recent history - rocked the foundations of the Oracle platform and opened up some real new roads ahead. This presentation leads you through the most relevant announcements and new directions.
Bridging the gap between Administrative and Operational IT
Vision, Architecture and project experience. This slide deck shows our vision on this market for industrial enterprise IoT.
More from Getting value from IoT, Integration and Data Analytics (19)
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Transcript: Selling digital books in 2024: Insights from industry leaders - T...BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti...Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Kubernetes & AI - Beauty and the Beast !?! @KCD Istanbul 2024Tobias Schneck
As AI technology pushes into IT, I wondered, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations point of view. Is it possible to apply our lovely cloud-native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and guide you on a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply the technology to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial for, or limiting to, your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
Neuro-symbolic is not enough, we need neuro-*semantic*Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this illustrated with link prediction over knowledge graphs, but the argument is general.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimizing testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
UiPath Test Automation using UiPath Test Suite series, part 3DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
2. Luc Gorissen
Previous employers:
- KPN Research
- CMG Wireless Data Solutions
- OraVision
- Oracle
Focus:
- BPM and SOA Suite
luc.gorissen@amis.nl
+31 6 3622 4226
@LucGorissen
No, no, no
LinkedIn
3. 3
What to expect?
• Step-by-step
• Real-life experience
Application Architecture
4. 4
Topics
Higher volumes
Scrum
Starting point:
<HelloWorld/>
Always consider:
• UI
• Directory Server
• DMS / RM
• ‘Real’ functionality
Switch on the Light:
1 2 3
4 5
Closing
Final
ACM Solution
Architecture
6
BI
BAM
Designing an ACM solution Application Architecture in steps
6. 6
User Interface
ACM/BPM Platform - Case Control
SOAP API
Custom API
REST API
2. Always consider
ADF / JET / company / ACM/BPM WorkSpace
Custom API
Service Bus
More powerful API on ACM/BPM platform
Pick right technology combination – for different user groups
7. 7
User Interface
2. Always consider
ACM/BPM Platform - Case Control
SOAP API
Custom API
REST API
ADF / JET / company / ACM/BPM WorkSpace
Custom API
Service Bus
Application Architecture
8. 8
User Interface
ACM/BPM Platform - Case Control
SOAP API
Custom API
REST API
2. Always consider
ADF / JET / company / ACM/BPM WorkSpace
Custom API
Service Bus
Cache API session context when using the API
Enduser
Cache
Enduser
9. 9
User Interface
ACM/BPM Platform - Case Control
SOAP API
Custom API
REST API
2. Always consider
ADF / JET / company / ACM/BPM WorkSpace
Custom API
Service Bus
System user & cache API session context when using the API
Enduser
SSL
Systemuser
Cache
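The "cache API session context" advice on the slides above can be sketched like this; `login_fn` and the TTL are assumptions, since the actual ACM/BPM session API is not shown on the slides:

```python
import time

class SessionCache:
    """Hypothetical cache for an API session context: re-authenticating the
    system user on every end-user request is expensive, so hold the session
    until it expires and re-login only then."""
    def __init__(self, login_fn, ttl_seconds: float = 1800.0):
        self._login = login_fn        # performs the expensive authentication
        self._ttl = ttl_seconds
        self._session = None
        self._expires_at = 0.0

    def get(self):
        now = time.monotonic()
        if self._session is None or now >= self._expires_at:
            self._session = self._login()
            self._expires_at = now + self._ttl
        return self._session
```

The same pattern applies whether the platform exposes a SOAP, REST or custom API: the system user logs in once, and end-user requests reuse the cached context over the Service Bus.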
13. 13
Object Registries
2. Always consider
ACM/BPM Platform
-
Case Control
Regulations Customers Licenses
‘Back-end’ system data?
Stand-alone application?
Referential integrity
Management screens
Only steering data in case/process
15. 15
And …
Document Generation / Digital Document Signing
SOA Services
Select and integrate
On same platform?
Make that Service Architecture Layering model
On same platform?
Delete the ‘default’ partition
2. Always consider
17. 17
Business Intelligence
Business Activity Monitoring
So, our Scrum development team worked to get a 'minimum viable set'.
It's not complete, but let's start using it.
BI
BAM
But … what’s happening?
3. Switch on the Light
18. 18
Business Intelligence
Business Activity Monitoring
3. Switch on the Light
Install BAM from the start into your domain
No BI on dehydration store
Business will request all-or-nothing: don’t wait for that!
19. 19
3. Switch on the light
Application Architecture
3. Switch on the Light
20. 20
4. Higher volumes
Improve Exception Handling
Clean consoles
Move non-steering case data into Case Registry
Save Case History (in Case Registry?)
Search function for Case (History)
4. Higher volumes
23. 23
Scrum
Scrum team with 2-weekly delivery cycles…
5. Scrum
CustomerComplaint 15.08
CustomerComplaint 15.12
CustomerComplaint 16.18
CustomerComplaint 15.20
CustomerComplaint 15.10
Case Versions Case Instances
24. 24
Scrum
Scrum teams with 2-weekly delivery cycles…
5. Scrum
High # versions – slow start-up time
High # versions – confused end users
High # versions – confused application management
High # versions – confused developers
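The version proliferation shown on the previous slide can be illustrated with a small helper that picks the newest case definition. The 'Name YY.WW' version format is inferred from the slide's examples:

```python
def latest_version(deployed: list[str]) -> str:
    """Pick the newest 'Name YY.WW' case definition. Every older version that
    still has running case instances must stay deployed, which is exactly
    what confuses users and slows start-up."""
    def key(name: str):
        year, week = name.rsplit(" ", 1)[1].split(".")
        return int(year), int(week)
    return max(deployed, key=key)
```

Note that a plain string sort would get this wrong for versions spanning single- and double-digit weeks, which is one more way a high version count confuses tooling.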
28. 28
Re-start case
New Case
Old Case
abortCase
transformCaseInfo
Re-start case
getCaseInfo
initializeCase
updateStatusCase
Re-start of cases is not easy
5. Scrum
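The re-start flow on this slide can be sketched against a hypothetical case-management API. The `api` object and the call order below are one plausible sequencing of the operations named on the slide, not the platform's actual interface:

```python
def restart_case(case_id: str, api) -> str:
    """Sketch of re-starting a case: read the old case, abort it, transform
    its data for the new case version, initialize the replacement, and mark
    the old case as superseded."""
    info = api.getCaseInfo(case_id)          # capture state before aborting
    api.abortCase(case_id)                   # old case must not run on
    new_info = api.transformCaseInfo(info)   # map data to the new version
    new_id = api.initializeCase(new_info)    # start the replacement case
    api.updateStatusCase(case_id, "RESTARTED_AS", new_id)
    return new_id
```

Even in this simplified form, the number of steps and the data transformation in the middle show why, as the slide says, re-starting cases is not easy.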