DevOps is not a one-trick pony. It involves a lot of changes to culture and attitudes. But the cultural changes only happen when you have the technology to enable it all. Oracle provides a comprehensive set of tools and products for traditional IT and cloud environments to help you deliver on your DevOps goals.
AMIS 25: DevOps Best Practice for Oracle SOA and BPM - Matt Wright
DevOps and Cloud are transforming the software release process, which spans multiple teams across development and operations (including testing and infrastructure management), into a collaborative process in which all teams work together to deliver solutions into production faster.
This session details how to implement a continuous delivery process for Oracle SOA/BPM projects, both on-premise and in the cloud, which transforms the release process into an automated, reliable, high-quality delivery pipeline that delivers projects faster, with less risk and at lower cost.
It details the processes and best practices that need to be established, and how to use tools to automate and govern the build, deployment and configuration of code from the first environment through to production.
1. Learn how DevOps and Continuous Delivery can streamline the delivery of integration/BPM projects into production.
2. Learn how DevOps plus the Cloud service can accelerate the implementation of on-premise Oracle SOA.
3. Learn best practices for implementing DevOps and Continuous Delivery for Oracle SOA projects in the cloud and on-premise.
4. How to use tools to automate and govern the build, deployment and configuration of code from dev through to production.
5. How to leverage the Cloud for Dev and Test, and the benefits this provides.
Using Puppet to Leverage DevOps in Large Enterprise Oracle Environments - Bert Hajee
DevOps in large companies is difficult. When you add Oracle and WebLogic to the equation, it becomes even more difficult. This presentation tells the story of IT manager John and how he used Puppet and the Puppet modules from Enterprise Modules to get started with DevOps in his organization. The change was staggering: where a new release previously took more than a year, they were now able to implement changes within days or even hours.
Agile Development and DevOps in the Oracle Cloud - jeckels
A broad overview of how Oracle is delivering on the latest generation of development tools and frameworks to help modern enterprises succeed. From Oracle Open World 2016, all rights reserved.
Provisioning Oracle Fusion Middleware Environments with Chef and Puppet - Edwin Biemond
This session presents case studies and experiences involving automated provisioning of Oracle Fusion Middleware environments with the popular DevOps tools Chef and Puppet. In addition, it discusses experiences in orchestrating multinode environments with these tools, together with others such as MCollective and some custom-built tooling. The presentation also covers issues such as installing, creating domains, patching, configuring resources such as JDBC, and deploying applications. It also spends a little time on how this provisioning can contribute to building an environment for cloud-based automated acceptance testing.
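The abstract above describes the declarative, idempotent model that tools like Chef and Puppet apply to middleware provisioning: you describe the desired state (a domain exists, a JDBC resource is configured) and the tool only acts when the current state differs. The following is a generic illustration of that idea only; the resource names are hypothetical and this is not Puppet's or Chef's actual implementation.

```python
# Sketch of declarative, idempotent convergence: describe desired state,
# compute only the actions needed to reach it, and do nothing when the
# system already matches. Resource names below are purely illustrative.
def converge(desired, current):
    """Return the actions needed to move `current` to `desired`, applying them."""
    actions = []
    for resource, state in desired.items():
        if current.get(resource) != state:
            actions.append(f"set {resource} -> {state}")
            current[resource] = state  # record the new state
    return actions

current = {"jdbc/OrdersDS": "absent"}
desired = {"jdbc/OrdersDS": "present", "domain/soa_domain": "present"}
print(converge(desired, current))  # two actions on the first run
print(converge(desired, current))  # empty list: already converged, nothing to do
```

The second call returning an empty list is the key property: re-running the same manifest against an already-provisioned environment is safe.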
Tim Krupinski, a Solution Architect at SageLogix, Inc., offers his experience in using tools like Puppet to facilitate a hybrid cloud approach with Oracle Infrastructure as a Service
What is DevOps? A lot of people think it means a lot of different things. We tend to think it has two complementary aspects: culture and technology changes. Culture is what creates DevOps; technology enables it. Thanks, Kelly Goetsch, for the slide work.
AMIS 25: Moving Integration to the Cloud - Matt Wright
The growth of cloud has fueled the need to integrate Cloud applications with each other and with applications that reside on premise.
Traditional integration platforms are evolving into offerings called integration platform as a service (iPaaS), primarily targeted at cloud-to-cloud integrations. During the transition to the cloud, organizations will be required to adopt a hybrid approach to their integration platform, but as more on-premise applications move to the cloud, users should plan for a rebalancing of the center of gravity of their integration platform.
This session is designed to educate the audience about what it means to move integration to the cloud, and uses customer case studies to provide insight into how organizations are doing this today, including:
1. Understand why you would move integrations to the cloud and which integrations are prime candidates for iPaaS
2. Understand some of the common implementation challenges and what you can do now to simplify future cloud migrations
3. Understand some of the critical deployment and operational monitoring considerations in moving integrations to the cloud
4. A six-step roadmap for moving integration to the cloud
Keynote: Software Kept Eating the World (Pivotal Cloud Platform Roadshow) - VMware Tanzu
Software is transforming our world at an ever-quickening pace. In the modern world, real-time information drives decision-making in enterprises that were not traditionally considered technology companies. If you recognize that software is a competitive advantage, delivering software rapidly and reliably takes that advantage to the next level.
Driving Enterprise Architecture Redesign: Cloud-Native Platforms, APIs, and D... - Chris Haddad
High performance architecture is rapidly changing due to three fundamental drivers:
Cloud-Native Platforms - change the way we think about operational infrastructure
DevOps - changes application lifecycle practices
APIs - change how we integrate and evolve infrastructure and applications, especially Mobile apps
In this session, Chris will illustrate:
Why you should consider Cloud-Native architecture components in your Enterprise Architecture
What impact DevOps has on App and API design guidelines
How an API-centric focus revises Enterprise Architecture
A session in the DevNet Zone at Cisco Live, Berlin. At the moment, this is the DoE: DevOps of Everything. DevOps is about culture first, but many people take shortcuts to tools and workflow. They forget the essence of DevOps, which is about people, and not only from Dev to Ops. In this session, we will show you how we are currently building a DevOps culture with a focus on continuous improvement.
Building and Deploying Cloud Native Applications - Manish Kapur
This deck provides an overview of Oracle's Cloud Native Application Development offerings. It covers developing and deploying cloud native applications like microservices and serverless functions using Continuous Integration and Delivery pipelines. This will be followed by a workshop where you will get hands-on experience of how to build and deploy simple Java and Node.js microservices using CI/CD pipelines and Kubernetes in Oracle Cloud.
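A microservice deployed to Kubernetes, as described above, conventionally exposes a health endpoint for liveness and readiness probes. As a minimal, stdlib-only sketch (not from the deck, and the `/health` path is just the common convention):

```python
# A tiny "microservice" exposing /health, the kind of endpoint a
# Kubernetes liveness/readiness probe would poll. Stdlib only.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    print(resp.status, resp.read().decode())
server.shutdown()
```

In a real deployment the same endpoint would be referenced from the pod spec's `livenessProbe`/`readinessProbe` configuration.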
Product brochure for JIRA - JIRA lets you prioritise, assign, track, report and audit your 'issues,' whatever they may be — from software bugs and help-desk tickets to project tasks and change requests.
Learn about various cloud integration strategies, and how API Gateways fit into the schema of things. Learn about cloud integration development lifecycles and cloud integration strategies.
Keynote slides from Systems Management Forum 2013 (http://ac.nikkeibp.co.jp/nc/smf2013s/). Rakuten has been running an internal PaaS since April 2012, and a wide range of services are developed and operated on top of it. The talk gave a concrete introduction to how DevOps is practiced on Rakuten's PaaS.
How can you and your team adopt DevOps? Is it as simple as taking the blue or the red pill? During this session we will share how we changed the culture within Microsoft Corp., the challenges we faced, and how we addressed them. Transparency and continuous learning are just a few of the examples we will use to illustrate this transformation. We will discuss what it means to be a manager in the DevOps era, and how to get there, with some concrete examples.
https://tech.rakuten.co.jp/
●Overall introduction of Ichiba
Introduction
●Redis Cluster in Rakuten Ichiba
How we use Redis Cluster in Rakuten Ichiba
●R Framework
The challenge of updating a legacy system sharing code between multiple teams, using an in-house developed library for the Rakuten Ichiba Frontend side.
●Rakuten Catalog Platform - Classification Approach for 280,000,000 Ichiba Items
1. Taxonomy strategy (analysis, adoption)
2. Rakuten Catalog Platform classification: Ichiba item data -> Taxonomy (genre/tag/attribute management and development) -> Catalog (product master) -> data governance system -> data processing unit -> auto classification (item information / images)
●How to reconstruct a million-user app
Describes why we decided to rewrite our app, what difficulties we faced, and how we created the new structure to ensure it is flexible, stable and maintainable.
https://tech.rakuten.co.jp/
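The auto-classification stage in the Rakuten Catalog Platform item above maps item data onto a genre taxonomy. As a purely illustrative sketch of that kind of step (the taxonomy and matching rule here are invented, not Rakuten's actual system):

```python
# Toy rule-based classifier: assign an item to the first genre whose
# keywords appear in its title. Real catalog classification uses far
# richer signals (attributes, images, ML models); this only shows the shape.
TAXONOMY = {
    "electronics": {"tv", "camera", "laptop"},
    "kitchen": {"pan", "kettle", "knife"},
}

def classify(item_title):
    """Return the first matching genre for the title, or None."""
    words = set(item_title.lower().split())
    for genre, keywords in TAXONOMY.items():
        if words & keywords:  # any keyword present in the title
            return genre
    return None

print(classify("4K TV 55 inch"))   # electronics
print(classify("Cast iron pan"))   # kitchen
print(classify("wool sweater"))    # None: no genre matched
```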
DevOps: A Culture Transformation, More than Technology - CA Technologies
DevOps is not a new technology or a product. It's an approach or culture of SW development that seeks stability and performance at the same time as it speeds software deliveries to the business. We will discuss this cultural shift, where development teams have to accept the feedback of operations teams and the operations team should be ready to accept frequent updates to the SW it is running.
To learn more about DevOps solutions from CA Technologies, please visit: http://bit.ly/1wbjjqX
Open-source API server based on a Node.js API framework, built on a supported Node.js platform with tooling and DevOps. Use cases are an omni-channel API server, Mobile Backend as a Service (mBaaS) or a next-generation Enterprise Service Bus. Key functionality includes built-in enterprise connectors, ORM, offline sync, mobile and JS SDKs, isomorphic JavaScript and a graphical API creation tool.
Implementing SharePoint on Azure, Lessons Learnt from a Real World Project - K.Mohamed Faizal
Infrastructure as a Service (IaaS) and the features that can be leveraged for hosting a SharePoint 2013 farm. Learn how to set it up, and things to consider when you set up VPN, storage, cloud services, and load-balanced endpoints. The speaker will share his real-world experience and tips and tricks.
Openstack Summit Tokyo 2015 - Building a private cloud to efficiently handle ... - Pierre GRANDIN
What do you do when your usual setup or turnkey solution isn’t suited for your workload?
Most of the documentation and user feedback you can find about OpenStack is written for the use case of running a public-facing cloud serving several external customers. When you want to host a single tenant with a single application, the problem is completely different: you don't want publicly exposed APIs. You want to ensure optimal resource allocation to maximize your application's performance. You want to leverage the fact that you own the infrastructure layer to optimize your instance placement strategy, to get the best latency, and to avoid creating SPOFs, using affinity (or anti-affinity) rules.
This talk will focus on what we learned during a two-year journey; from getting OpenStack up and running reliably, to investigating performance bottlenecks, to maximizing the performance of our private cloud.
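The anti-affinity rules mentioned in the abstract above boil down to one constraint: never place two instances of the same application on the same hypervisor, so a single host failure cannot take out every replica. A minimal sketch of that placement logic (purely illustrative; OpenStack implements this via server groups in the scheduler, not like this):

```python
# Toy anti-affinity placement: pick a host that does not already run
# an instance of the given application.
def place_instance(app, hosts, placements):
    """Pick a host for `app` honoring anti-affinity.

    hosts: ordered list of hypervisor names
    placements: dict mapping host -> set of apps already placed there
    Returns the chosen host, or None if the rule cannot be satisfied.
    """
    for host in hosts:
        if app not in placements.get(host, set()):
            placements.setdefault(host, set()).add(app)
            return host
    return None  # every host already runs this app

placements = {}
hosts = ["hv1", "hv2", "hv3"]
print(place_instance("web", hosts, placements))  # hv1
print(place_instance("web", hosts, placements))  # hv2
print(place_instance("web", hosts, placements))  # hv3
print(place_instance("web", hosts, placements))  # None: rule not satisfiable
```

A fourth replica is refused rather than doubled up, which is exactly the trade-off an anti-affinity policy makes: availability guarantees over raw capacity.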
Use case for the financial industry using Mule ESB. This is a unique project and use case that shows that, using a lightweight ESB like Mule, it is easy to adapt and scale out on utility hardware. Beyond just scaling out, it is easy to migrate from legacy batch-based applications to workflow-enabled, active-active applications.
An overview of project Skyfall. A globally distributed fault tolerant event consumption framework used by AddThis.com to consume billions of events per day.
Modern serverless computing platforms are on everyone's lips and provide a programming model in which users no longer need to think about administering servers, storage, networking, virtual machines, high availability or scalability, and can instead concentrate on writing their own code. The code models the business requirements as small, modular function packages (functions). Functions are the heart of a serverless computing platform: they read from the (often standard) input, perform their computations, and produce an output. Function results that need to be kept are stored in a permanent data store, such as the Autonomous Database. The Autonomous Database has the three properties required for a modern application development approach: it is self-driving, self-repairing and self-securing.
The Good, the Bad and the Ugly of Migrating Hundreds of Legacy Applications ... - Josef Adersberger
Running applications on Kubernetes can provide a lot of benefits: more dev speed, lower ops costs, and higher elasticity & resiliency in production. Kubernetes is the place to be for cloud native apps. But what do you do if you have no shiny new cloud native apps, but a whole bunch of JEE legacy systems? No chance to leverage the advantages of Kubernetes? Yes you can!
We’re facing the challenge of migrating hundreds of JEE legacy applications of a major German insurance company onto a Kubernetes cluster within one year. We're now close to the finish line and it worked pretty well so far.
The talk will be about the lessons we've learned - the best practices and pitfalls we've discovered along our way. We'll provide our answers to life, the universe and a cloud native journey like:
- What technical constraints of Kubernetes can be obstacles for applications and how to tackle these?
- How to architect a landscape of hundreds of containerized applications with their surrounding infrastructure like DBs, MQs and IAM, and heavy requirements on security?
- How to industrialize and govern the migration process?
- How to leverage the possibilities of a cloud native platform like Kubernetes without challenging the tight timeline?
Migrating Hundreds of Legacy Applications to Kubernetes - The Good, the Bad, ... - QAware GmbH
CloudNativeCon North America 2017, Austin (Texas, USA): Talk by Josef Adersberger (@adersberger, CTO at QAware)
Oracle Drivers configuration for High Availability - Ludovico Caldara
... is it a developer's job?
UCP, GridLink, TAF, AC, TAC, FAN… Configuring Oracle drivers for application high availability is not an easy job. Developers often care about the minimal working configuration, while the DBAs are busy with operations. In this session I will try to demystify application servers' connectivity to the database and give a direction toward the highest availability, using Real Application Clusters and new Oracle features like TAC and CMAN TDM.
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R... - Miguel Araújo
MySQL Webinar, presented on the 25th of April, 2024.
Summary:
MySQL solutions enable the deployment of diverse Database Architectures tailored to specific needs, including High Availability, Disaster Recovery, and Read Scale-Out.
With MySQL Shell's AdminAPI, administrators can seamlessly set up, manage, and monitor these solutions, ensuring efficiency and ease of use in their administration. MySQL Router, on the other hand, provides transparent routing of application traffic to the backend servers in these architectures, requiring minimal configuration.
Completely built in-house and supported by Oracle, these solutions have been adopted by enterprises of all sizes for their business-critical applications.
In this presentation, we'll delve into various database architecture solutions to help you choose the right one based on your business requirements, focusing on technical details and the latest features to maximize the potential of these solutions.
Continuous Localisation On A Massive Scale - Gary Lefman
A crisis is brewing. There are hundreds of products to localise, and this number is growing rapidly. How does a small and efficient localisation team cope with increasing demand while keeping costs low and improving quality? In this session, Gary describes how an engineering team achieved this by taking automation to the next level with continuous localisation. You’ll get a technical deep dive into tools and techniques used to achieve near instantaneous translation in the hybrid cloud. This is a journey where you will learn about the challenges endured and overcome in the ballsy determination to localise an unlimited number of products.
In my presentation, I will summarize the applied and practical aspects of creating sustainable software products. What does "green" software mean for users and developers? I want to explain how creating "green" software can be driven by multiple organizational layers, and how building "green" software products can help an organization increase overall software product efficiency.
This presentation introduces the OWASP Top 10:2021.
It explains how to look at the data related to OWASP Top 10:2021, and provides detailed explanations of items with distinctive data. It also introduces the OWASP projects related to each item.
The Metaverse and AI: how can decision-makers harness the Metaverse for their... - Jen Stirrup
The Metaverse was popularized in science fiction, and now it is coming closer to being part of our daily lives through social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024 - Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview, including the concepts of Customer Key and Double Key Encryption.
Observability Concepts EVERY Developer Should Know - DeveloperWeek Europe - Paige Cruz
Monitoring and observability aren't traditionally found in software curriculums, and many of us cobble this knowledge together from whatever vendor or ecosystem we were first introduced to and whatever is part of our current company's observability stack.
While the dev and ops silo continues to crumble, many organizations still relegate monitoring and observability to the purview of ops, infra and SRE teams. This is a mistake: achieving a highly observable system requires collaboration up and down the stack.
I, a former op, would like to extend an invitation to all application developers to join the observability party, and will share these foundational concepts to build on:
Smart TV Buyer Insights Survey 2024 - 91mobiles
91mobiles recently conducted a Smart TV Buyer Insights Survey in which we asked over 3,000 respondents about the TV they own, aspects they look at on a new TV, and their TV buying preferences.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed in releasing software to market, along with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface of their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a PASSION for technology and making things work, along with a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Gopinath Rebala
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Slide 23: Jennifer (Real Time Monitoring). Diagram of four application instances on middleware reporting to the Jennifer server, which collects process time, method profiling, system status, IO status, and VM status.
Slide 28: Rule Based System Configuration. The two construction parameters (application name and number) drive the rule-based assignment of machines, IP zones, ports, and directories across the Exalogic components (Jennifer, WebLogic, OTD, Coherence).
Slide 33: Main Feature for Nonstop Operation. An OTD listener for rakuten.co.jp on port 8001 forwards to one of two origin server pools (10.10.1.1:7003 / 10.10.1.2:7003 and 10.10.2.1:7003 / 10.10.2.2:7003).
Slide 35: Dual Domains for Application. Domain A (port 8001) and domain B (port 9001) each run managed servers; the managed servers form origin server pool B (10.10.1.1:7003, 10.10.1.2:7003) and origin server pool A (10.10.2.1:7003, 10.10.2.2:7003).
Slide 36: Basic Concept. OTD's default-route points at the current service version (1.0) while the next release (1.1) is tested through the test-route proxy; switching the routes cuts over in about 20 seconds, after which the new version serves traffic and the previous version (0.9 in the diagram) can remain for rollback.
Slide 55: Rollback to Previous Version. If the standby side has already been updated, the previous version is released from the release history directory to the standby domain, and then the service and standby domains are switched.
Slide 56: Normal Operation. Configuration is modified in the operation directory (${applicationName}/operation/root) and released to the service domain and the standby domain.
Slide 57: Agenda. Introduction; Middleware Architecture; Life Cycle; Cost Effective Operation; Exalogic Operation On OZ Manager.
Slide 59: Operation Documents (Legacy System). Release documents were maintained separately by system, by material, and by time, which was a reason for the high operating cost.
Slide 60: Operation Documents (Exalogic). A single operation recipe covers construction, release, switching, and operation for all applications.
Slides 68-72: How is Rakuten Ichiba? Lots of teams, applications, instances, and releases. More than 30 teams ("team reliant operations": differences in rules, judgment, and procedures); more than 150 applications ("scattered apps & tools": differences in language, architecture, and monitoring); more than 3,000 instances ("excessive alert mails": 10,000 mails during an incident, of which only 700 are receivable, so important mails get buried); more than 1,000 releases a year ("more trouble risks": bugs occur and human error happens).
Slide 73: What we needed. Standardize basic operations; integrate and portalize management tools; collect logs and pack alert mails; simplify and clarify for fast detection.
Slide 77: Plug-in configuration. A plug-in manager drives checker, log-collect, application, and other plug-ins, each exposing operations such as get status, start, stop, update, and get logs or results.
Slide 79: Data relation. Diagram relating applications, hosts, clusters, components, and control groups: for example, application rpage has clusters rpageA and rpageB, with instances rpageA01/rpageA02 on node1 and rpageB01/rpageB02 on node2, and a control group with filters and checkers (error applog, inst-chk, cpu%) linked to Jennifer and Exalogic.
Slide 80: Data relation, authentication and authorization. Users (e.g. Ryu, Kweon-san, userX) belong to user groups (admin, Mall, EC Core) that control access to menus, clusters (exalogic), applications (orderapi, rpage), and hosts.
Slide 81: Monitoring for Exalogic (type / target / checkers).
- Cluster: WebLogic service A/B (application log, performance log, alert log); Coherence A/B (application log, stdout log)
- Application / system: Oracle EM (admin server process, client agent process); NodeManager (process for each version); OTD (process for each OTD configuration); WebLogic instances (OTD origin online check); Coherence instances (Coherence instance check)
- Host (each node): ping check, CPU usage, load average, filesystem usage, burst process
Good Afternoon!
Thank you for attending our session. My name is Kweon. Just call me David.
We introduced the Exalogic system on Japan Ichiba in 2013.
In this session, I would like to talk about the Ichiba architecture on Exalogic.
Let me introduce this presentation. It consists of five chapters: chapters 1 through 4 cover the Ichiba architecture on Exalogic, and chapter 5 covers Exalogic operation on OZ Manager.
First, I will present the Exalogic architecture, and then Watanabe-san will present Exalogic operation on OZ Manager.
So, let's start our presentation. The first chapter is an introduction to Exalogic.
Rakuten Ichiba is the No. 1 e-commerce website in Japan (at least, I think so). This is the top page of Japan Ichiba. It looks like a single page.
But in fact, our page is composed of many services.
And if you could see the back side of our system, you would find that many services are intricately connected to other services.
To construct a service, we have to prepare many components. For example, we need to build servers, databases, middleware, and business applications, and each component must be connected to the others over the network. Once a service is started, we have to maintain each of these components.
Most service companies maintain massive systems to provide many services, and many engineers are needed to operate such massive systems.
Everything was prepared perfectly. But sometimes we notice that the systems cannot be operated efficiently.
The reason is that most services have different peak times. At any given time, both low-load and high-load services exist, but a high-load service cannot use the resources of a low-load service, because each service runs on a different system.
If only we could use the servers more effectively...
And if the system could be operated by four or five people, that would be a great thing.
So, we decided to introduce the Exalogic system on Japan Ichiba.
Exalogic is an engineered system manufactured by Oracle, optimized for Fusion Middleware products such as WebLogic and Coherence.
This is the specification of Exalogic. As you can see, it is a very powerful system, and it includes an operating system, management software, and Oracle Traffic Director (OTD). Oracle Traffic Director is very important software for Japan Ichiba, so please remember OTD; I will explain it later.
Our plan is to migrate Japan Ichiba from the legacy system to Exalogic. The target is about 50 services operated on 400 servers. By introducing Exalogic, we expect to lower operating costs and realize a nonstop system.
The second chapter is about the middleware architecture on Exalogic.
Oracle Traffic Director sits on the front side, forwarding requests to application servers such as WebLogic. As everyone knows, WebLogic Server is used as the web application server; requests are actually processed there. Optionally, Coherence can sit between WebLogic and the database, where it is used as a cache server for business objects and session data. All servers also use the Jennifer server for real-time monitoring. If you don't know Jennifer, don't worry; I will explain it later.
OTD is load-balancer software. It holds the access information for each service, and all requests on Exalogic are processed only via OTD.
WebLogic Server is part of the Fusion Middleware family. Our applications are deployed on WebLogic Server, and we support both 11g and 12c on Exalogic.
Coherence is a Java-based data grid that supports in-memory access. Typically, a Coherence server holds data access objects, and objects stored in Coherence can be written to the database later; this function is called write-behind. By using Coherence's write-behind function, we can improve application performance dramatically. We support 12c for Coherence on Exalogic.
Jennifer is the real-time monitoring system we introduced to Japan Ichiba in 2012, and this is its dashboard. With the Jennifer server connected to the application servers, we can always monitor all requests processed on all servers.
From the next slide, I will explain the system design on Exalogic.
We considered three keywords while designing the system architecture on Exalogic: automated, standardized, and nonstop. Ichiba's system scale is expected to grow in the future, and our goal is to make this big system manageable by four or five people. Automation and standardization are essential requirements for reducing operations work, and at the same time we have to guarantee a nonstop system. All of the system design on Exalogic is based on these three keywords.
The Exalogic system provides a 60-terabyte ZFS appliance, so we decided to use ZFS storage for storing every application and its data. We created two shares, one for applications and one for data, and mounted them on every compute node. If we need an additional compute node, we can start instances on it simply by mounting the shares, because all installations, data, and operation tools live on the shares. We use the InfiniBand network for communication between the compute nodes and the ZFS appliance; it is very fast, like a local disk.
All servers share resources such as the middleware installation, domain installation, libraries, and configuration. Servers also need private resources, such as log files, application data, and management data for each instance. These resources need separate paths; if they were not separated, they would become a bottleneck. We use the instance-name variables weblogic.Name and tangosol.coherence.member to separate these resources.
We must also ensure that server resources such as port numbers and IP addresses are never duplicated, because Exalogic shares server resources across all applications. So we designed a rule-based scheme for allocating non-duplicated resources and created an operation tool for automatic construction. To construct an application, we only need to decide two parameters: the application name and a number that identifies the application. Once our construction tool knows these two parameters, the application is constructed automatically.
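The rule-based allocation can be sketched as follows. The concrete rules below (port blocks derived from the application number, directories derived from the name) are illustrative assumptions; the talk only states that name and number determine everything else.

```python
# Illustrative sketch of rule-based resource allocation: every port and
# directory is derived from just two parameters, the application name and
# its number. The specific offsets here are assumptions, not Rakuten's
# actual rules.

def construction_plan(app_name: str, app_number: int) -> dict:
    base = 7000 + app_number * 10          # assumed port block per application
    return {
        "app_name": app_name,
        "domain_a_port": base + 3,          # managed server port, domain A
        "domain_b_port": base + 4,          # managed server port, domain B
        "otd_listener": 8000 + app_number,  # OTD service listener
        "otd_test_listener": 9000 + app_number,  # OTD test listener
        "app_dir": f"/u01/apps/{app_name}",
        "log_dir": f"/u01/logs/{app_name}",
    }

plan = construction_plan("rpage", 1)
```

Because every value is a pure function of the two inputs, no two applications can collide on ports or directories, which is exactly the property the construction tool needs.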
One of the most important things for operating application servers on Exalogic is the floating IP. Each WebLogic domain uses a different floating IP zone, and the floating IP address of each managed server is predefined in the domain's config.xml. A floating IP address can be moved to another compute node online. As a result, if a managed server has a problem, we can simply migrate it to another compute node, with one command, in two to three minutes.
Legacy applications had one problem: most of them had different execution environments. Many applications used different middleware versions, environment variables, and customized scripts, so there was an education cost in learning each legacy application. Accordingly, we felt the need to standardize the environment, and we created a standardized script set for all applications running on Exalogic. This script set can control WebLogic and Coherence servers remotely, and when a Jennifer server is installed, monitoring starts automatically. We also prepared an optional configuration file for customizing each application. As a result, all our applications share the same script set except for the optional configuration file, and we can control all application servers remotely using the same commands.
As everyone knows, NodeManager is also used to control WebLogic servers remotely, and if a WebLogic instance is started by NodeManager, the standardized script set would not be used. But NodeManager can be customized by editing nodemanager.properties, so we combined NodeManager with our script set. Now either can be used to control WebLogic Server, and the same environment is maintained.
Our main focus is constructing a nonstop system. Until now, our legacy system sometimes had to stop for system maintenance, and the most important thing is to remove that downtime.
We need to know a few OTD functions to make a nonstop system happen. To process requests, OTD needs a configuration containing a port number and application server information. The port number is called a listener, and the application server information is called an origin server pool; OTD forwards requests to the application servers based on this configuration. For each service, we need to associate an access port number with application servers, and a listener's origin server pool can be changed dynamically.
So, we defined two listeners and two origin server pools on OTD for each service.
And we constructed two WebLogic domains for each application, each connected to one origin server pool. As a result, a listener can connect to either the A or the B domain by switching its origin server pool.
Now we can release and test an application on the standby (test) domain in advance, and nonstop maintenance becomes possible by switching the listener's origin server pool. The good point is that we can test the application in the production environment before the service release. When we decided to introduce this design, we didn't know the concept already existed: it is called blue-green deployment.
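As a minimal illustration of the switching idea (this is a conceptual model, not OTD's actual configuration API, which is driven through its admin interface), a listener that flips between two origin server pools can be sketched like this:

```python
# Minimal model of blue-green switching: a listener forwards to one of two
# origin server pools; switching swaps the pools without stopping service.

class Listener:
    def __init__(self, port, service_pool, standby_pool):
        self.port = port
        self.service_pool = service_pool    # currently serving (e.g. domain A)
        self.standby_pool = standby_pool    # release/test target (e.g. domain B)

    def route(self, request):
        # every request goes to the pool currently in service
        # (first server shown for brevity)
        return f"{request} -> {self.service_pool[0]}"

    def switch(self):
        # the ~20-second cutover: standby becomes service and vice versa
        self.service_pool, self.standby_pool = self.standby_pool, self.service_pool

listener = Listener(8001,
                    service_pool=["10.10.1.1:7003", "10.10.1.2:7003"],  # domain A
                    standby_pool=["10.10.2.1:7003", "10.10.2.2:7003"])  # domain B

before = listener.route("GET /")
listener.switch()
after = listener.route("GET /")
```

Because the switch is just a pointer swap on the routing side, in-flight releases never touch the serving domain, and rolling back is the same operation run again.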
The effect of using blue-green deployment was very significant. For our first migrated application, in April 2014, release time was reduced from 6 hours to 20 seconds.
But if an application is sessionful, session data would be lost when switching. To solve this problem, we constructed a Coherence cluster that includes both WebLogic domains. Session data is stored in the Coherence cluster, and each WebLogic domain can access it using Coherence*Web. By sharing session data through the Coherence cluster, we solved the problem.
So now we know that nonstop operation requires two WebLogic domains per application, plus Jennifer servers to monitor them, which means we need four installations to construct a service. In the legacy environment, construction took one or two weeks.
We wanted to cut construction time on Exalogic, so we created a tool for automatic construction. The tool constructs an application from templates, and the only required information is the application name and number.
Now, with this automation, constructing a service takes only 5 minutes.
I have explained the concepts behind the system design on Exalogic. The next chapter is the application life cycle on Exalogic.
All applications on Exalogic have the same life cycle, which has four phases: release, testing, switching, and operation.
The first phase is the application release.
The rule for application releases is that the release material must keep the same directory structure as the production environment. By keeping this rule, we can release any application. If you use Maven, I recommend the assembly plugin: when building an application, Maven's assembly plugin can lay files out in a predefined directory structure.
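That directory-structure rule is easy to check mechanically before a release. A minimal sketch (the function name and the idea of validating parent directories are illustrative assumptions, not the actual release tooling):

```python
# Sketch: verify that release material mirrors the production layout.
# A file in the release is acceptable if its parent directory already
# exists in production; the files themselves may be new or modified.
from pathlib import Path

def layout_violations(release_root: Path, production_root: Path) -> list:
    """Return relative paths of release files whose directory is missing in production."""
    bad = []
    for f in sorted(release_root.rglob("*")):
        if f.is_file():
            rel = f.relative_to(release_root)
            if not (production_root / rel.parent).is_dir():
                bad.append(str(rel))
    return bad
```

Running a check like this before moving material into the release history directory catches layout mistakes while they are still cheap to fix.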
After the release material is prepared, it is moved to the release history directory, and we can release the files to the standby domain. The release task is executed automatically by the release recipe, and only new or modified resources are updated. A recipe is an executable release document; I will explain recipes in chapter 4.
This is the workflow of the release recipe. If the application is already installed, we always remove the old application first; then, after restarting the WebLogic domain, the updated application is installed into a clean WebLogic domain.
The next phase is testing.
I explained that we can access the standby domain through the test listener. Once an application has been released, we can always access it from the internal network.
It is a big benefit to be able to test the application in production before service-in. In the test phase, the application can be updated many times over, so we can fix application problems before switching.
When testing is finished, the upgraded service can be started by switching the application.
The service can be switched without stopping. This video was recorded during a domain switch; watching it should make the switching concept clear. The top movie is the dashboard of the current service domain, and the bottom movie is the standby domain. When we switch domains, all requests move to the standby domain; after switching, the standby domain becomes the new service domain, and the old service domain becomes the standby domain.
After switching an application, the release material is renamed to include the switch date, and that directory is linked to the operation directory. The release material and the production environment then point to the same place, and the linked directory is used for operating the production environment.
From here, we need to operate the production application.
When the application causes a problem, it needs to be rolled back to the previous version. If the standby domain has not been updated yet, we can return to the previous version simply by switching the application; in this case, the automated recipe changes the application in just a few seconds. But if the standby domain has already been updated, we need to deploy the previous version to it. For this situation, we store the release history, so we can deploy the previous version's release material to the standby domain. As a result, we can switch back to the previous application in 5 minutes with one command. The service does not stop in any case.
Normal operation means updating the configuration of the production application in real time. For normal operation, we use the released material in the operation directory: first we update the configuration files there, and then those configuration files are propagated from the operation directory to the production application. The operation directory therefore always holds the latest state.
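The "propagate only what changed" behavior of normal operation (and of the release recipe, which updates only new or modified resources) can be sketched like this. Comparing by content hash is an assumption; the talk does not say how the recipe detects changes.

```python
# Sketch: copy a file from the operation directory to production only when
# it is new or its content has changed, and report what was updated.
import hashlib
import shutil
from pathlib import Path

def _digest(p: Path) -> str:
    return hashlib.sha256(p.read_bytes()).hexdigest()

def sync_changed(operation_dir: Path, production_dir: Path) -> list:
    """Copy new/modified files; return the relative paths that were updated."""
    updated = []
    for src in sorted(operation_dir.rglob("*")):
        if not src.is_file():
            continue
        rel = src.relative_to(operation_dir)
        dst = production_dir / rel
        if not dst.exists() or _digest(dst) != _digest(src):
            dst.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(src, dst)
            updated.append(str(rel))
    return updated
```

A second run over an unchanged operation directory copies nothing, which keeps repeated maintenance operations cheap and idempotent.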
My last chapter is about cost-effective operation.
WLST is a command-based scripting tool for WebLogic. We use WLST commands to control the WebLogic domains on Exalogic. It is a good solution for automated operation, because the operation recipes can use WLST commands.
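For example, checking a managed server's state and restarting it from a recipe might look like the WLST sketch below. The host, credentials, and server name are placeholders, and this runs only inside the WLST shell (wlst.sh), not in a plain Python interpreter:

```python
# WLST (Jython) sketch; placeholder host, credentials, and server name.
connect('weblogic', 'password', 't3://adminhost:7001')  # connect to the admin server

domainRuntime()                                  # switch to the runtime MBean tree
cd('/ServerLifeCycleRuntimes/ManagedServer_1')
print(cmo.getState())                            # e.g. RUNNING

shutdown('ManagedServer_1', 'Server')            # stop via NodeManager
start('ManagedServer_1', 'Server')               # start it again

disconnect()
```

Wrapping a sequence like this in a recipe is what makes the operation repeatable across all the standardized domains.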
A release document is a list of OS commands for a release operation. On the legacy system, we performed releases by manually cutting and pasting commands from the release document, and we had to update the release document for every maintenance because most applications had different environments. This was one reason for the high operating cost.
But for applications on Exalogic, operations are standardized and can be covered by one release document, which we call an "operation recipe". Most Exalogic operations are performed by these recipes, and we use only three of them: construction, release, and switching.
For more effective operation, we built a trigger system, an auto-execution tool for operation recipes. The trigger system can run a recipe either step by step or fully automatically, and a trigger can be registered as a Jenkins job, so we can execute operating tasks from a web browser.
This is a demo video of creating an application using a trigger. In this demo, we create two WebLogic domains and two Jennifer servers. To create the application, we input a few arguments: the application name, number, WebLogic version, and our construction recipe. Then the recipe starts creating the application: the WebLogic and Jennifer servers are created from templates. As you can see, after installing the servers, the recipe uses WLST and shell commands to update the server configuration, and every configuration step is printed to the console. Jennifer needs many ports for monitoring, and we have to make sure the ports do not conflict across servers; our recipe handles this automatically. By using this trigger system, we reduced operating time dramatically. This demo plays at 8x speed, so the construction actually takes about 5 minutes. When the construction recipe finishes, the servers are already started, so if we have release material we can deploy the application right away.
The next chapter is about Exalogic operation on OZ Manager. Watanabe-san will give that presentation.