In an increasingly competitive marketplace, speed and business agility are paramount, and integration between customer-facing systems and back-end applications is more crucial than ever.
At this event, you'll learn how open source software built by communities, like Apache Camel, Docker, Kubernetes, OpenShift Origin, and Fabric8, can help organizations integrate services and establish effective continuous integration and delivery (CI/CD) pipelines.
I work at Red Hat, the world's leading provider of open source software solutions and the company ranked #23 Best Place to Work in 2014 by Glassdoor.com. I'm part of the Solution Engineering Team, responsible for developing innovative IT solutions that drive business value focusing on DevOps and Platform as a Service.
For the past 20 years, Red Hat's open source software development model has produced high-performing, cost-effective solutions. Our model mirrors the highly interconnected world we live in—where ideas and information can be shared worldwide in seconds. Today, more than 90% of Fortune 500 companies rely on Red Hat. We offer the only fully open technology stack, from operating system to middleware, storage to cloud and virtualization solutions. We also provide a variety of services, including award-winning support, consulting, and training.
In case you missed our Red Hat Essentials Training at our offices, find attached the slides that were presented. They are designed to be the 'helicopter' view of Red Hat, open source, and the market opportunity.
Red Hat Linux Certified Professional step-by-step guide – Tech Arkit (Ravi Kumar)
Introduction to course outline and certification
Managing files & directories
Basic Commands ls, cp, mkdir, cat, rm and rmdir
Getting help from the command line (whatis, whereis, man, help, info, --help and pinfo)
Editing and viewing text files (nano, vi and vim)
User administration: creating, modifying and deleting users
Controlling services & daemons
Listing processes
Prioritizing processes
Analyzing & storing logs
Syslog Server & Client configuration
Compressing files & directories (tar and zip)
Copying files & directories to remote servers
Yum & RPM
Search files and directories
File & Directory links (Soft Links and Hard Links)
Managing physical storage
Logical Volume Manager
Access Control List (ACL)
Scheduling of future Linux tasks
SELinux
NFS Server and Client configuration
Firewall
Securing NFS using Kerberos
LDAP client configuration
Setting up LDAP users' home directories
Accessing network storage using Samba (CIFS)
Samba Multiuser Access
Using Virtualized systems
Creating virtual Machines
Automated installation of Red Hat Linux
Automated Installation using Kickstart
Linux Booting Process
Root password Recovery
Fixing Partition Errors – Entering Emergency Mode
Using Regular Expressions with grep
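The last outline topic, regular expressions with grep, can be sketched with a couple of commands. The log lines below are invented for the example:

```shell
# Create a few sample "log" lines to search (made up for this illustration).
printf 'sshd[812]: Failed password for root\nsshd[812]: Accepted password for alice\ncron[99]: job started\n' > /tmp/demo.log

# -E enables extended regular expressions: alternation matches either word.
grep -E 'Failed|Accepted' /tmp/demo.log

# Anchors and character classes: a lowercase name followed by a numeric PID.
grep -E '^[a-z]+\[[0-9]+\]:' /tmp/demo.log
```

The first command prints the two authentication lines; the second matches all three, since every line starts with a process name and PID.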
Understand and use essential tools for handling files, directories, command-line environments, and documentation
Operate running systems, including booting into different run levels, identifying processes, starting and stopping virtual machines, and controlling services
Configure local storage using partitions and logical volumes
Create and configure file systems and file system attributes, such as permissions, encryption, access control lists, and network file systems
Deploy, configure, and maintain systems, including software installation, update, and core services
Manage users and groups, including use of a centralized directory for authentication
Manage security, including basic firewall and SELinux configuration
Configuring static routes, packet filtering, and network address translation
Setting kernel runtime parameters
Configuring an Internet Small Computer System Interface (iSCSI) initiator
Producing and delivering reports on system utilization
Using shell scripting to automate system maintenance tasks
Configuring system logging, including remote logging
Configuring a system to provide networking services, including HTTP/HTTPS, File Transfer Protocol (FTP), network file system (NFS), server message block (SMB), Simple Mail Transfer Protocol (SMTP), secure shell (SSH) and Network Time Protocol (NTP)
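The shell-scripting objective above can be illustrated with a minimal maintenance script. The directory, file names, and 7-day retention period are illustrative, not taken from the course:

```shell
#!/bin/sh
# Minimal sketch of an automated maintenance task: compress log files
# that have not been modified for more than 7 days.
LOGDIR=/tmp/demo-logs
mkdir -p "$LOGDIR"
# Create one stale file and one fresh file so the run has something to act on.
touch -d '10 days ago' "$LOGDIR/old.log" 2>/dev/null || touch -t 202401010000 "$LOGDIR/old.log"
touch "$LOGDIR/fresh.log"
# -mtime +7 selects files last modified more than 7 days ago.
find "$LOGDIR" -name '*.log' -mtime +7 -exec gzip -f {} \;
ls "$LOGDIR"
```

Dropped into /etc/cron.daily (or a systemd timer), a script like this ties together the scheduling, compression, and storage topics from the outline.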
Linux VDI with OpenStack – How to Deliver Linux Virtual Desktops on Demand – Leostream
It’s no secret that Linux has a loyal fan-base across the development community and industries such as government, engineering, and oil & gas. But, when it comes to VDI, the operating system often gets the short end of the stick.
How can you lower IT costs when applications run on a Linux operating system? How can you handle a mixture of Windows and Linux in a hosted environment? And, how do you ensure a seamless end-user experience, while maximizing resource usage and minimizing downtime?
The truth is, Linux VDI doesn’t have to be hard. You can create a virtual Linux environment that provides an efficient way to access hosted resources on centrally managed servers. By combining the Leostream Connection Broker with a high-performance protocol, managing a hosted Linux environment can be as simple, seamless, and powerful as a hosted Windows environment.
[May 2015 Seminar] Network Bottlenecks Multiply with NFV – Don't Forget Performance ... – OpenStack Korea Community
Key features of the 6WIND solution:
- Delivers best-in-class packet-processing performance on Linux, both bare metal and virtualized.
- Provides an optimized, high-performance L2/L3/L4 network protocol stack for a range of multiprocessors (Intel, Cavium, Broadcom, EZchip/Tilera, etc.).
- Operates transparently with the Linux OS, hypervisors, OVS, OpenFlow, OpenStack, and more.
- Reduces cost by shortening development time.
Depending on their needs, customers can license either the source code (product name: 6WINDGate) or binary solutions, and use them to upgrade the performance of telecom, network, security, and cloud solutions or to develop new high-performance solutions.
Cloud providers can use the virtual switch acceleration solution (Virtual Accelerator) to increase the number of virtual machines each server can run and to give each virtual machine more network bandwidth. This supports higher-quality services and stronger competitiveness, reducing TCO and maximizing ROI.
Some cloud providers license the source code (6WINDGate) and develop the solutions their own services need in-house.
- Source code solution (6WINDGate)
  - Consists of about 76 source code modules, modularized by function; customers can license only the modules they need.
  - Can be used to improve the performance of telecom, network, security, and cloud solutions or to develop new high-performance solutions.
- Binary solutions, built on 6WINDGate and DPDK:
  - Virtual Accelerator: a networking performance-acceleration solution for the KVM hypervisor in virtualized environments, with far higher throughput than Linux-based OVS, plus fast-path IP forwarding, VRF, filtering, NAT, VXLAN, GRE, and other features.
  - Turbo Router: a high-performance software router (vRouter) for Linux bare-metal and virtualized environments.
  - Turbo IPsec Gateway: a high-performance software IPsec gateway (vIPsec GW) for Linux bare-metal and virtualized environments; it includes Turbo Router.
A session in the DevNet Zone at Cisco Live, Berlin. Big data and the Internet of Things (IoT) are two of the hottest categories in information technology today, yet there are significant challenges when trying to create an end-to-end solution. The worlds of "IT" and “IoT" differ in terms of programming interfaces, protocols, security frameworks, and application lifecycle management. In this talk we will describe proven ways to overcome challenges when deploying a complete “device to datacenter” system, including how to stream IoT telemetry into big data repositories; how to perform real-time analytics on machine data; and how to close the loop with reliable, secure command and control back out to remote control systems and other devices.
The evolution in storage.
Why an open source initiative like Ceph found its way into the enterprise storage world. Traditional storage solutions are expensive, and you will probably need a forklift to get them into your datacenter. Meanwhile, demand for storage capacity keeps growing: new technologies like IoT, video for marketing and surveillance now in 4K, expanding user data driven by BYOD adoption, and increasing backup requirements.
This demand created the opportunity for Ceph, a Scale-out Software Defined Storage solution, driven by one of the best open source communities worldwide. Standardize on Industry Standard Servers and grow your storage estate at YOUR rate.
In this session we will introduce you to the enterprise adoption of Ceph, give you a technical deep dive into Ceph, and show how erasure coding improves your level of data protection.
[OpenStack Day in Korea 2015] Keynote 5 - The evolution of OpenStack Networking – OpenStack Korea Community
OpenStack Day in Korea 2015 - Keynote 5
The evolution of OpenStack Networking
Guido Appenzeller - Chief Technology Strategy Officer, Networking & Security, VMware
Hyper-C is OpenStack on Windows Server 2016, based on Nano Server, Hyper-V, Storage Spaces Direct (S2D) and Open vSwitch for Windows. Bare metal deployment features Cloudbase Solutions Juju charms and MAAS.
From the Amazon Web Services Singapore & Malaysia Summits 2015 Track 2 Breakout, 'Containerized Cloud Computing'. Presented by Sivaram Shunmugam, Manager, Infrastructure Practice - Red Hat.
Developing Enterprise Applications for the Cloud, from Monolith to Microservices – David Currie
Presented at IBM InterConnect 2015. Is your next enterprise application ready for the cloud? Do you know how to build the kind of low-latency, highly available, highly scalable, omni-channel, micro-service modern-day application that customers expect? This introductory presentation will cover what it takes to build such an application using the multiple language runtimes and composable services offered on the IBM Bluemix cloud.
Introduction to PaaS and demos on Cloud Foundry from a DevOps point of view.
Presented at the Singapore DevOps meetup of Sept 2012:
http://www.meetup.com/devops-singapore/events/80016202/
Web Performance Optimisation at times.co.uk – Stephen Thair
Optimizing dynamic websites like www.thetimes.co.uk and www.thesundaytimes.co.uk isn't an easy task!
Speeding up a site requires a "war plan" and having a clear vision, dedicated team, appropriate tools and most importantly speed comparison data with similar sites.
Mehdi Ali, Optimisation Manager for the Times websites, will show us how this strategy was applied for The Times and Sunday Times sites with great results.
Configuration Management - The Operations Manager's View – Stephen Thair
A presentation from the BCS Configuration Management Special Interest Group conference 2009. It gives the other side of the story, from an Operations Manager's perspective.
Welcome to WebAsha Technologies, a prominent name among the country's Linux training providers.
Our approach to training and development is designed to ensure that our trainees gain up-to-date skills for today's widest range of industrial and service sectors.
The WebAsha training team includes professionals with more than six years' experience in their respective fields. All training sessions are based strictly on our clients' requirements.
We design and deliver the best quality training to meet the changing and growing needs of professionals.
Containers: Don't Skeu Them Up. Use Microservices Instead. – Gordon Haff
from LinuxCon Japan 2016
Skeuomorphism usually means retaining existing design cues in something new that doesn't actually need them. But the basic idea is far broader. For example, containers aren't legacy virtualization with a new spin. They're part and parcel of a new platform for cloud apps including containerized operating systems like Project Atomic, container packaging systems like Docker, container orchestration like Kubernetes and Mesos, DevOps continuous integration and deployment practices, microservices architectures, "cattle" workloads, software-defined everything, management across hybrid infrastructures, and pervasive open source.
In this session, Red Hat's Gordon Haff and William Henry will discuss how containers can be most effectively deployed together with these new technologies and approaches -- including the resource management of large clusters with diverse workloads -- rather than mimicking legacy server virtualization workflows and architectures.
Pivotal CF, the most advanced enterprise PaaS platform in the world. This presentation explains how PCF helps developers and operators boost their operational agility and enhance their IT capabilities.
ipsr solutions ltd. is a complete IT service provider based at Kottayam, Kerala with branches at Trivandrum, Kochi, Thrissur, Kozhikode and Bangalore. We have also established a 100% subsidiary in the United Kingdom. We provide Training in Red Hat, Cisco, Microsoft, software Courses.
DevOps: Enabled Through a Recasting of Operational Roles – Cornelia Davis
Delivered at CF Summit Berlin, 2 Nov 2015.
One thing that everyone agrees on is that “Devops” is about reducing the friction between dev and ops. While it might not be immediately apparent, CF enables a separation of “operations” into two roles: platform ops and application ops. Platform ops is responsible for maintaining a secure platform with sufficient functionality and capacity so that application developers and application operators can perform their work. And application operators are responsible for keeping business applications up and running, so that consumers receive superior service, 24x7x365. By moving further up the stack, app operators can be far closer to the line of business owners, getting them speaking the same language. In this session we demonstrate how Cloud Foundry enables this, we talk about customers who are taking advantage of it, and we cover the tools available for each of the roles.
How do we measure our progress on the journey towards continuous integration? What are other people doing?
This presentation provides a measuring stick for CD maturity and a simple pattern for reviewing your current situation and deciding what to work on next.
This was a talk I did in Dublin at an event called Redefining the Enterprise OS Breakfast Briefing - How to meet next-generation IT demands for Linux Containers, Docker, Performance & Systems Management
http://techxperts.eu/events/redefining-the-enterprise-os-breakfast-briefing/
Watch the recorded version of this Webinar here:
Curious about Continuous Integration? Tune in!
Continuous Integration (CI), which is a big part of continuous delivery, is the concept of continuously building and testing software using an automated process. We have learned that utilizing CI could help us catch bugs earlier, enable better visibility, reduce repetitive processes, enable the development team to produce deployable products at a moment's notice, and reduce risk overall.
These slides will identify the various levels of continuous integration and delivery with regards to a release maturity of the development team or parent organization.
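The CI idea described above can be reduced to a very small sketch: on every commit, a job builds the code, runs the tests, and fails fast. The workspace path and the tiny stand-in test below are hypothetical, not from the slides:

```shell
#!/bin/sh
# Illustrative sketch of what a CI job automates on every commit.
set -e                                  # any failing stage stops the pipeline
WORKSPACE=/tmp/ci-demo
mkdir -p "$WORKSPACE"
cd "$WORKSPACE"
# Stand-in "test suite": a script that exits nonzero on failure.
printf 'assert 1 + 1 == 2\nprint("tests passed")\n' > test_smoke.py
python3 test_smoke.py                   # test stage; nonzero exit marks the build red
echo "BUILD SUCCESSFUL"
```

A real CI server (Jenkins, GitLab CI, etc.) adds the trigger-on-commit, history, and reporting around this loop, which is how it catches bugs earlier and keeps the product deployable.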
Using Apache Camel for microservices and integration, then deploying and managing on Docker and Kubernetes. When we need to make changes to our app, we can use Fabric8 continuous delivery, built on top of Kubernetes and OpenShift.
Choosing PaaS: Cisco and Open Source Options: An Overview – Cisco DevNet
A session in the DevNet Zone at Cisco Live, Berlin. Confused by all the open source PaaS options out there? What criteria should you use to evaluate them? We seek to answer these questions in a systematic manner and will explore top technologies such as Mesos, Apprenda, Cloud Foundry and Kubernetes along with Cisco's Project Shipped and open source Mantl. The aim of this session will be to shed light on which platforms add value to your needs, applications and workloads.
This is the slide deck for the DFW Azure User Group meetup of 18 July 2017, presented by Doug Vanderweide and discussing Azure's services that support a microservices architecture.
Highlights the services in Azure that provide microservices, including App Service, Logic Apps, Functions, Azure SQL Database, Service Bus, containers, Traffic Manager, etc.
A presentation on why or why not microservices, why a platform is important, discovering how to break down a monolith and some of the challenges you'll face (data, transactions, boundaries, etc). Last section is on Istio and service mesh introductions. Follow on twitter @christianposta for updates and more details
IBM Think Session 8598: Domino and JavaScript Development MasterClass – Paul Withers
Session from IBM Think 2018. Note: the architecture used is an extreme case of what's possible (and it could go further), rather than a real-world expectation
Real world #microservices with Apache Camel, Fabric8, and OpenShift – Christian Posta
What are, or aren't, microservices?
There's a lot of hype and buzz, but microservices emerged organically vs how some of the other distributed architectural styles were "handed down to us", so I believe there's some good things once you cut through the hype. In this talk I discussed what are and are NOT microservices, introduced some concepts, and discussed some concrete open-source libraries and frameworks that can help you develop and manage microservice style deployments.
Real-world #microservices with Apache Camel, Fabric8, and OpenShift – Christian Posta
What are and aren't microservices?
Microservices is a validation of the open-source approach to integration and service implementation and a rebuff of the committee-driven SOA approach. In this
The challenge of application distribution - Introduction to Docker (2014 Dec ...) – Sébastien Portebois
Live recording with the demos: https://www.youtube.com/watch?v=0XRcmJEiZOM
Contents
- The application distribution challenge
- The current solutions
- Introduction to Docker, Containers, and the Matrix from Hell
- Why people care: Separation of Concerns
- Technical Discussion
- Ecosystem, momentum
- How to build Docker images
- How to make containers talk to each other, how to handle data persistence
- Demo 1: isolation
- Demo 2: real case - installing Go Math! Academy, tail -f containers, unit tests
Custom Healthcare Software for Managing Chronic Conditions and Remote Patient... – Mind IT Systems
Healthcare providers often struggle with the complexities of chronic conditions and remote patient monitoring, as each patient requires personalized care and ongoing monitoring. Off-the-shelf solutions may not meet these diverse needs, leading to inefficiencies and gaps in care. This is where custom healthcare software offers a tailored solution, ensuring improved care and effectiveness.
We describe the deployment and use of Globus Compute for remote computation. This content is aimed at researchers who wish to compute on remote resources using a unified programming interface, as well as system administrators who will deploy and operate Globus Compute services on their research computing infrastructure.
Quarkus Hidden and Forbidden ExtensionsMax Andersen
Quarkus has a vast extension ecosystem and is known for its subsonic and subatomic feature set. Some of these features are not as well known, and some extensions are less talked about, but that does not make them less interesting - quite the opposite.
Come join this talk to see some tips and tricks for using Quarkus and some of the lesser known features, extensions and development techniques.
Accelerate Enterprise Software Engineering with Platformless – WSO2
Key takeaways:
Challenges of building platforms and the benefits of platformless.
Key principles of platformless, including API-first, cloud-native middleware, platform engineering, and developer experience.
How Choreo enables the platformless experience.
How key concepts like application architecture, domain-driven design, zero trust, and cell-based architecture are inherently a part of Choreo.
Demo of an end-to-end app built and deployed on Choreo.
AI Pilot Review: The World's First Virtual Assistant Marketing Suite – Google
AI Pilot Review: Key Features
✅Deploy AI expert bots in Any Niche With Just A Click
✅With one keyword, generate complete funnels, websites, landing pages, and more.
✅More than 85 AI features are included in the AI pilot.
✅No setup or configuration; use your voice (like Siri) to do whatever you want.
✅You Can Use AI Pilot To Create your version of AI Pilot And Charge People For It…
✅ZERO Manual Work With AI Pilot. Never write, Design, Or Code Again.
✅ZERO Limits On Features Or Usages
✅Use Our AI-powered Traffic To Get Hundreds Of Customers
✅No Complicated Setup: Get Up And Running In 2 Minutes
✅99.99% Up-Time Guaranteed
✅30 Days Money-Back Guarantee
✅ZERO Upfront Cost
Enhancing Research Orchestration Capabilities at ORNL.pdfGlobus
Cross-facility research orchestration comes with ever-changing constraints regarding the availability and suitability of various compute and data resources. In short, a flexible data and processing fabric is needed to enable the dynamic redirection of data and compute tasks throughout the lifecycle of an experiment. In this talk, we illustrate how we easily leveraged Globus services to instrument the ACE research testbed at the Oak Ridge Leadership Computing Facility with flexible data and task orchestration capabilities.
Innovating Inference - Remote Triggering of Large Language Models on HPC Clus...Globus
Large Language Models (LLMs) are currently the center of attention in the tech world, particularly for their potential to advance research. In this presentation, we'll explore a straightforward and effective method for quickly initiating inference runs on supercomputers using the vLLM tool with Globus Compute, specifically on the Polaris system at ALCF. We'll begin by briefly discussing the popularity and applications of LLMs in various fields. Following this, we will introduce the vLLM tool, and explain how it integrates with Globus Compute to efficiently manage LLM operations on Polaris. Attendees will learn the practical aspects of setting up and remotely triggering LLMs from local machines, focusing on ease of use and efficiency. This talk is ideal for researchers and practitioners looking to leverage the power of LLMs in their work, offering a clear guide to harnessing supercomputing resources for quick and effective LLM inference.
Gamify Your Mind; The Secret Sauce to Delivering Success, Continuously Improv... – Shahin Sheidaei
Games are powerful teaching tools, fostering hands-on engagement and fun. But they require careful consideration to succeed. Join me to explore factors in running and selecting games, ensuring they serve as effective teaching tools. Learn to maintain focus on learning objectives while playing, and how to measure the ROI of gaming in education. Discover strategies for pitching gaming to leadership. This session offers insights, tips, and examples for coaches, team leads, and enterprise leaders seeking to teach from simple to complex concepts.
Into the Box Keynote Day 2: Unveiling amazing updates and announcements for modern CFML developers! Get ready for exciting releases and updates on Ortus tools and products. Stay tuned for cutting-edge innovations designed to boost your productivity.
Providing Globus Services to Users of JASMIN for Environmental Data AnalysisGlobus
JASMIN is the UK’s high-performance data analysis platform for environmental science, operated by STFC on behalf of the UK Natural Environment Research Council (NERC). In addition to its role in hosting the CEDA Archive (NERC’s long-term repository for climate, atmospheric science & Earth observation data in the UK), JASMIN provides a collaborative platform to a community of around 2,000 scientists in the UK and beyond, providing nearly 400 environmental science projects with working space, compute resources and tools to facilitate their work. High-performance data transfer into and out of JASMIN has always been a key feature, with many scientists bringing model outputs from supercomputers elsewhere in the UK, to analyse against observational or other model data in the CEDA Archive. A growing number of JASMIN users are now realising the benefits of using the Globus service to provide reliable and efficient data movement and other tasks in this and other contexts. Further use cases involve long-distance (intercontinental) transfers to and from JASMIN, and collecting results from a mobile atmospheric radar system, pushing data to JASMIN via a lightweight Globus deployment. We provide details of how Globus fits into our current infrastructure, our experience of the recent migration to GCSv5.4, and of our interest in developing use of the wider ecosystem of Globus services for the benefit of our user community.
Enterprise Resource Planning System includes various modules that reduce any business's workload. Additionally, it organizes the workflows, which drives towards enhancing productivity. Here are a detailed explanation of the ERP modules. Going through the points will help you understand how the software is changing the work dynamics.
To know more details here: https://blogs.nyggs.com/nyggs/enterprise-resource-planning-erp-system-modules/
How Recreation Management Software Can Streamline Your Operations.pptxwottaspaceseo
Recreation management software streamlines operations by automating key tasks such as scheduling, registration, and payment processing, reducing manual workload and errors. It provides centralized management of facilities, classes, and events, ensuring efficient resource allocation and facility usage. The software offers user-friendly online portals for easy access to bookings and program information, enhancing customer experience. Real-time reporting and data analytics deliver insights into attendance and preferences, aiding in strategic decision-making. Additionally, effective communication tools keep participants and staff informed with timely updates. Overall, recreation management software enhances efficiency, improves service delivery, and boosts customer satisfaction.
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I ...Juraj Vysvader
In 2015, I used to write extensions for Joomla, WordPress, phpBB3, etc and I didn't get rich from it but it did have 63K downloads (powered possible tens of thousands of websites).
SOCRadar Research Team: Latest Activities of IntelBrokerSOCRadar
The European Union Agency for Law Enforcement Cooperation (Europol) has suffered an alleged data breach after a notorious threat actor claimed to have exfiltrated data from its systems. Infamous data leaker IntelBroker posted on the even more infamous BreachForums hacking forum, saying that Europol suffered a data breach this month.
The alleged breach affected Europol agencies CCSE, EC3, Europol Platform for Experts, Law Enforcement Forum, and SIRIUS. Infiltration of these entities can disrupt ongoing investigations and compromise sensitive intelligence shared among international law enforcement agencies.
However, this is neither the first nor the last activity of IntekBroker. We have compiled for you what happened in the last few days. To track such hacker activities on dark web sources like hacker forums, private Telegram channels, and other hidden platforms where cyber threats often originate, you can check SOCRadar’s Dark Web News.
Stay Informed on Threat Actors’ Activity on the Dark Web with SOCRadar!
Climate Science Flows: Enabling Petabyte-Scale Climate Analysis with the Eart...Globus
The Earth System Grid Federation (ESGF) is a global network of data servers that archives and distributes the planet’s largest collection of Earth system model output for thousands of climate and environmental scientists worldwide. Many of these petabyte-scale data archives are located in proximity to large high-performance computing (HPC) or cloud computing resources, but the primary workflow for data users consists of transferring data, and applying computations on a different system. As a part of the ESGF 2.0 US project (funded by the United States Department of Energy Office of Science), we developed pre-defined data workflows, which can be run on-demand, capable of applying many data reduction and data analysis to the large ESGF data archives, transferring only the resultant analysis (ex. visualizations, smaller data files). In this talk, we will showcase a few of these workflows, highlighting how Globus Flows can be used for petabyte-scale climate analysis.
May Marketo Masterclass, London MUG May 22 2024.pdfAdele Miller
Can't make Adobe Summit in Vegas? No sweat because the EMEA Marketo Engage Champions are coming to London to share their Summit sessions, insights and more!
This is a MUG with a twist you don't want to miss.
Globus Compute wth IRI Workflows - GlobusWorld 2024Globus
As part of the DOE Integrated Research Infrastructure (IRI) program, NERSC at Lawrence Berkeley National Lab and ALCF at Argonne National Lab are working closely with General Atomics on accelerating the computing requirements of the DIII-D experiment. As part of the work the team is investigating ways to speedup the time to solution for many different parts of the DIII-D workflow including how they run jobs on HPC systems. One of these routes is looking at Globus Compute as a way to replace the current method for managing tasks and we describe a brief proof of concept showing how Globus Compute could help to schedule jobs and be a tool to connect compute at different facilities.
Experience our free, in-depth three-part Tendenci Platform Corporate Membership Management workshop series! In Session 1 on May 14th, 2024, we began with an Introduction and Setup, mastering the configuration of your Corporate Membership Module settings to establish membership types, applications, and more. Then, on May 16th, 2024, in Session 2, we focused on binding individual members to a Corporate Membership and Corporate Reps, teaching you how to add individual members and assign Corporate Representatives to manage dues, renewals, and associated members. Finally, on May 28th, 2024, in Session 3, we covered questions and concerns, addressing any queries or issues you may have.
For more Tendenci AMS events, check out www.tendenci.com/events
3. How are you keeping up with change?
Cloud Native Architectures
• Trying to incorporate new technology?
• Trying to copy what others (Netflix, Amazon) are doing?
• Tactical automation?
• Created a “DevOps” team?
• Exploring cloud services?
• Build/deploy automation?
• Open source?
• Piecemeal integration?
4. Cloud Native Architectures
Microservices helps solve the problem of “how do we decouple our services and teams to move quickly at scale to deliver business value”
• Faster software delivery
• Own database (data)
• Faster innovation
• Scalability
• Right technology for the problem
• Test individual services
• Isolation
• Individual deployments
5. I’m doing microservices if…
• If my services are isolated at the process level, I’m doing #microservices
• If I use REST/Thrift/ProtoBuf instead of SOAP, I’m doing #microservices
• If I use JSON, I’m doing #microservices
• If I use Docker / Spring Boot / Dropwizard / embedded Jetty, I’m doing #microservices
7. Cloud Native Architectures
Fallacies of distributed computing
• Reliable networking
• Latency is zero
• Bandwidth is infinite
• Network is secure
• Topology doesn’t change
• Single administrator
• Transport cost is zero
• Network is homogeneous
https://en.wikipedia.org/wiki/Fallacies_of_distributed_computing
12. Cloud Native Architectures
Apache Camel to the rescue!
• Small Java library
• Distributed-system swiss-army knife!
• Powerful EIPs
• Declarative DSL
• Embeddable into any JVM (EAP, Karaf, Tomcat, Spring Boot, Dropwizard, WildFly Swarm, no container, etc.)
• Very popular (200+ components for “dumb pipes”)
13. Apache Camel features an easy-to-use visual editor
Dynamic Routing
• “Smart endpoints, dumb pipes”
• Endpoint does one thing well
• Metadata used for further routing
• Really “dynamic” with a rules engine (e.g., Drools/BRMS)
14. Apache Camel features easy-to-understand config
REST DSL
public class OrderProcessorRouteBuilder extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        rest().post("/order/socks")
            .description("New Order for pair of socks")
            .consumes("application/json")
            .route()
                .to("activemq:topic:newOrder")
                .log("received new order ${body.orderId}")
                .to("ibatis:storeOrder?statementType=Insert");
    }
}
16. Cloud Native Architectures
Typical problems developing microservices
• How to run them all locally?
• How to package them (dependency management)
• How to test?
• Vagrant? VirtualBox? VMs?
• Specify configuration
• Process isolation
• Service discovery
• Multiple versions?
17. Cloud Native Architectures
Shared infrastructure platform headaches
• Different teams
• Different rates of change
• VM sprawl
• Configuration drift
• Isolation / multi-tenancy
• Performance
• Real-time vs batch
• Compliance
• Security
• Technology choices
19. Cloud Native Architectures
Immutable infrastructure/deploys
• “we’ll just put it back in Ansible”
• Avoid chucking binaries / configs together and hope!
• Cattle vs Pets
• Don’t change it; replace it
• System created fully from automation; avoid drift
• Eliminate manual configuration/intervention
22. OpenShift
• Developer-focused workflow
• Enterprise ready
• Higher-level abstraction above containers for delivering technology and business value
• Build/deployment triggers
• Software Defined Networking (SDN)
• Docker-native format/packaging
• CLI/Web based tooling
23. Cloud Native Architectures
Fuse Integration Services for OpenShift
• Set of tools for integration developers
• Build/package your Fuse/Camel services as Docker images
• Run locally on CDK
• Deploy on top of OpenShift
• Plugs in to your existing build/release ecosystem (Jenkins/Maven/Nexus/GitLab, etc.)
• Manage them with Kubernetes/OpenShift
• Flat class loader JVMs
• Take advantage of existing investment in Karaf with additional options like “just enough app server” deployments
• Supports Spring, CDI, Blueprint
• Small VM run locally by developers
• Full access to Docker, Kubernetes, OpenShift
• Deploy your suite of microservices with ease!
• Uses Vagrant/VirtualBox
• Getting started on Linux, Mac or Windows!
http://bit.ly/1U5xU4z
25. RED HAT JBOSS FUSE
• Development and tooling (JBoss Developer Studio): develop, test, debug, refine, deploy
• Web services framework (Apache CXF): web services standards, SOAP, XML/HTTP, RESTful HTTP
• Integration framework (Apache Camel): transformation, mediation, enterprise integration patterns
• Management and monitoring (JBoss Operations Network + JBoss Fabric Management Console (hawtio)): system and web services metrics, automated discovery, container status, automatic updates
• Reliable messaging (Apache ActiveMQ): JMS/STOMP/NMS/MQTT, publish-subscribe/point-to-point, store and forward
• Container (Apache Karaf + Fuse Fabric): life cycle management, resource management, dynamic deployment, security and provisioning
Runs on Red Hat Enterprise Linux; also Windows, UNIX, and other Linux
26. Cloud Native Architectures
Typical problems developing microservices
• How to run them all locally?
• How to package them
• How to test?
• Vagrant? VirtualBox? VMs?
• Specify configuration
• Process isolation
• Service discovery
• Multiple versions?
29. How are you keeping up with change?
Cloud Native Architectures
• Trying to incorporate new technology?
• Trying to copy what others (Netflix, Amazon) are doing?
• Tactical automation?
• Created a “DevOps” team?
• Exploring cloud services?
• Build/deploy automation?
• Open source?
• Piecemeal integration?
30. What if you could do all of this right now with an open-source platform?
• 100% open source, ASL 2.0
• Technology agnostic (Java, Node.js, Python, Golang, etc.)
• Built upon decades of industry practices
• 1-click automation
• Cloud native (on-premise, public cloud, hybrid)
• Complex build/deploy pipelines (human workflows, approvals, chatops, etc.)
• Comprehensive integration inside/outside the platform
31. • Docker native, built on top of the Kubernetes API
• Out-of-the-box CI/CD, management UI
• Logging, metrics
• ChatOps
• API management
• iPaaS/integration
• Chaos Monkey
• Lots and lots of tooling/libraries to make developing cloud-native applications easier
http://fabric8.io
We need to discuss “change” in terms of scaling out our organizations. DevOps and microservices are not a technology choice or a new team; DevOps is a re-org. All of these attempts to “keep up with change” without addressing the organization are not much help.
When creating distributed systems, a lot of what’s old is new again. Just bringing in “new technology” does not solve problems; in fact it probably creates new ones.
Trying to copy others’ technology choices is a fool’s errand. People try to copy Netflix, Amazon, and the like, but as Adrian Cockcroft says, “you’re copying a point in time, not the process.”
We try to fight the organizational structure with piecemeal automation, creating more silos (a “DevOps” team totally misses the point), or assume that simply adopting “cloud” or open source will do it.
Microservices is an approach to distributed systems that focuses on scaling an organization’s IT systems and people. It doesn’t come without its drawbacks, but it does allow us to make decisions quicker, implement functionality faster, and ultimately deliver on the business requirements faster to stay competitive. By breaking IT systems and teams down into smaller, autonomous components, we can test things more easily, isolate them properly for failure, change them without impacting the entire system, scale them where needed, etc.
Teams should be small (6–8 people), focus on the service(s) they provide via APIs, be cross-functional (ops/security/DBA/release/devs all on one team, or automate away the pieces where resources are lacking), and be responsible for the systems they create (you build it, you own it).
http://blog.christianposta.com/microservices/the-real-success-story-of-microservices-architectures/
People claim to do microservices without regard for the systems-thinking principles that underlie any successful microservices architecture, as if just “doing X” or “using X” means doing microservices. In the end, they develop the same brittle, constrained architectures they had before, but this time with new tools.
Ultimately, when we dig into the technology and how that aligns with our company structure, we’re talking about building and scaling distributed systems. Building and scaling these systems requires different ways of thinking and cannot ignore the past.
Foremost on our minds when building distributed systems is how they interact with each other: over unreliable networks. A strong corollary is that we must build our systems knowing that things fail, and will fail. Second, even if things do not fail, they may appear to fail. Latency is not something we have to deal with in more-monolithic systems, but it is easily one of the biggest issues in distributed systems. Did things fail? Are they just slow? Do we retry? What do we do?
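The "did it fail or is it just slow?" ambiguity above can be sketched in plain Java (the names here are illustrative, not from any particular framework): bound every remote call with a timeout and retry a limited number of times with backoff, treating a slow call the same as a failed one.

```java
import java.util.concurrent.*;

// A minimal sketch of timeout-plus-retry for unreliable networks.
class RetryingCaller {
    private final ExecutorService pool = Executors.newSingleThreadExecutor();

    public <T> T callWithRetry(Callable<T> remoteCall, int maxAttempts,
                               long timeoutMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            Future<T> future = pool.submit(remoteCall);
            try {
                // Bound the wait: a call that is merely slow is handled
                // exactly like one that failed outright.
                return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
            } catch (TimeoutException | ExecutionException e) {
                future.cancel(true);
                last = e;
                // Simple linear backoff before the next attempt.
                Thread.sleep(attempt * 50L);
            }
        }
        throw new RuntimeException(
            "remote call failed after " + maxAttempts + " attempts", last);
    }

    public void shutdown() { pool.shutdown(); }

    public static void main(String[] args) throws Exception {
        RetryingCaller caller = new RetryingCaller();
        final int[] calls = {0};
        // Fails twice, then succeeds, simulating a flaky network endpoint.
        String result = caller.callWithRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("connection reset");
            return "order-accepted";
        }, 5, 1000);
        System.out.println(result + " after " + calls[0] + " attempts");
        caller.shutdown();
    }
}
```

Note what this sketch does not decide for you: whether the operation is safe to retry at all (it must be idempotent), and what to do when all attempts are exhausted. Those are exactly the design questions the note raises.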
Given that systems will be communicating over lossy, unreliable networks, do we need integration? As we start to build non-trivial systems that interact with partner organizations (external and internal), consume “cloud” services, and require access to legacy applications and databases, it’s clear that, by definition, distributed systems will require integration.
People consider integration in the form of legacy ESB or EAI solutions, but as we see in the following slides, integration does not imply those approaches; those approaches come about because of our organizational structure. As we explore microservices, integration, and organization further, we’ll see that EAI/ESB are not prerequisites.
What about new-fangled “reactive” or event-driven systems? Do we need integration?
YES.
Consuming events and reacting to “what happened in time” requires us to not lose events, to retry when networks are down, and to fail over or retry other “possibly synchronous” systems in order to continue to deliver business value. Systems publishing events need access to queues/channels and some mechanism for interacting with them reliably.
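The "don't lose events" requirement is the classic store-and-forward pattern. A toy in-memory sketch with nothing but the JDK (a real system would use a broker like ActiveMQ; all names here are our own): events are stored in a local outbox first, and any event whose delivery fails is re-queued for the next forwarding pass.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.Consumer;

// A toy store-and-forward outbox: publish stores locally, forward
// attempts delivery and keeps failed events for a later retry.
class StoreAndForward {
    private final BlockingQueue<String> outbox = new LinkedBlockingQueue<>();

    public void publish(String event) {
        outbox.add(event); // store first; forwarding happens separately
    }

    // Drain the outbox; events whose delivery failed are re-queued
    // so nothing is lost while the network is down.
    public int forward(Consumer<String> deliver) {
        List<String> failed = new ArrayList<>();
        int delivered = 0;
        String event;
        while ((event = outbox.poll()) != null) {
            try {
                deliver.accept(event);
                delivered++;
            } catch (RuntimeException networkError) {
                failed.add(event); // keep it for the next forwarding pass
            }
        }
        outbox.addAll(failed);
        return delivered;
    }

    public int pending() { return outbox.size(); }

    public static void main(String[] args) {
        StoreAndForward saf = new StoreAndForward();
        saf.publish("orderCreated");
        saf.publish("orderShipped");
        // First pass: the downstream endpoint rejects one event.
        saf.forward(e -> {
            if (e.equals("orderShipped")) throw new RuntimeException("network down");
        });
        System.out.println("pending after failure: " + saf.pending());
        // Second pass: the network is back; nothing was lost.
        saf.forward(e -> {});
        System.out.println("pending after retry: " + saf.pending());
    }
}
```

A broker gives you the durable version of this outbox plus delivery guarantees; the sketch only shows why the intermediate store has to exist at all.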
When we start to look at systems as disconnected, autonomous agents both from a technology and organizational aspect, we absolutely need reliable integration.
Systems will communicate over many non-homogeneous protocols and data formats: messaging (JMS, AMQP, proprietary), file transfer, HTTP (SOAP/REST/other), streaming, etc. These systems will need transformation, reliability, and both synchronous and asynchronous communication. Gregor Hohpe’s book on enterprise integration patterns lays out the patterns that are useful in a disconnected environment like this.
Apache Camel brings tried and true experience to the table to tackle some of these distributed-systems integration challenges.
Apache Camel is very well suited for integration in a microservices environment. It’s not an ESB and doesn’t presuppose suites of software or servers. It’s a small, lightweight library that can be embedded in your choice of JVM runtime: Spring Boot, Dropwizard, WildFly Swarm, EAP, Jetty, Tomcat, Karaf, or anything else.
Microservices architectures are built around autonomy: being able to change a service without forcing other areas to change along with it. In this scenario a service takes part in a choreographed interaction, where it knows enough about what it provides and about its surrounding services to make its own decisions about which services to engage, when, and for what reason. Apache Camel allows us to build services with smart routing without regard for the technology or “pipes” used to communicate. We can leverage the Dynamic Router EIP or plug into existing or complementary rules engines like JBoss Drools to satisfy sophisticated routing requirements and decisions.
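The routing idea itself can be sketched in plain Java, decoupled from any particular "pipe". To be clear, this is not Camel's API; it is a hypothetical content-based router where each rule pairs a predicate on message metadata with the endpoint that should receive the message, first match wins, with a fallback for unmatched messages.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Consumer;
import java.util.function.Predicate;

// A plain-Java sketch of content-based routing (illustrative names only).
class ContentRouter {
    private final Map<Predicate<Map<String, String>>,
                      Consumer<Map<String, String>>> rules = new LinkedHashMap<>();
    private Consumer<Map<String, String>> deadLetter = m -> {};

    public ContentRouter when(Predicate<Map<String, String>> condition,
                              Consumer<Map<String, String>> endpoint) {
        rules.put(condition, endpoint);
        return this;
    }

    public ContentRouter otherwise(Consumer<Map<String, String>> endpoint) {
        deadLetter = endpoint;
        return this;
    }

    // Route on message metadata, as the "metadata used for further
    // routing" bullet on slide 13 suggests; first matching rule wins.
    public void route(Map<String, String> message) {
        for (Map.Entry<Predicate<Map<String, String>>,
                       Consumer<Map<String, String>>> rule : rules.entrySet()) {
            if (rule.getKey().test(message)) {
                rule.getValue().accept(message);
                return;
            }
        }
        deadLetter.accept(message);
    }

    public static void main(String[] args) {
        ContentRouter router = new ContentRouter()
            .when(m -> "order".equals(m.get("type")),
                  m -> System.out.println("-> order queue: " + m.get("id")))
            .when(m -> "refund".equals(m.get("type")),
                  m -> System.out.println("-> refund service: " + m.get("id")))
            .otherwise(m -> System.out.println("-> dead letter: " + m.get("id")));

        router.route(Map.of("type", "order", "id", "1001"));
        router.route(Map.of("type", "telemetry", "id", "1002"));
    }
}
```

Swapping the predicates for calls into a rules engine such as Drools is exactly the "really dynamic" step the slide hints at: the routing decisions move out of code and into externally managed rules.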
Apache Camel can enable legacy backends to participate in a REST-based set of services by quickly exposing a REST service interface using its expressive DSL. The REST DSL plugs right into the rest of the Apache Camel DSL, allowing you to quickly expose a REST endpoint that can describe an API as well as integrate with backend services, mediating, routing, transforming, and otherwise changing the shape or content of a payload with patterns like content enricher, resequencer, and recipient list.
Even though Apache Camel brings good solutions for implementing integration across distributed systems, why is my head still hurting? Maybe you already use Camel, or you’ve already incorporated a lightweight integration framework… why are we still running into pain when creating these types of systems?
Developers experience this type of pain…
Operations experiences another type of pain…
When we move to smaller, isolated, autonomous systems at any kind of scale, we need to move away from the “pets” analogy and toward the “cattle” analogy, where we build systems that can quickly be delivered and replaced as needed.
https://blog.engineyard.com/2014/pets-vs-cattle
Immutable delivery concepts help us reason about these problems. With immutable delivery, we try to reduce the number of moving pieces into pre-baked images as part of the build process. For example, imagine in your build process you could output a fully baked image with the operating system, the intended version of the JVM, any side-car applications, and all configuration? You could then deploy this in one environment, test it, and migrate it along a delivery pipeline toward production without worrying about "whether the environment or application is configured consistently." If you needed to make a change to your application, you rerun this pipeline which produces a new immutable image of your application and then do a rolling upgrade to deliver it. If it doesn't work, you can rollback by deploying the previous image. No more worrying about configuration or environment drift or whether things were properly restored on a rollback.
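As a sketch of the idea (the base image, paths, and artifact names here are hypothetical), an immutable build bakes the OS layer, the JVM, the application artifact, and its configuration into a single image at build time:

```dockerfile
# Hypothetical immutable build: OS layer, JVM version, application,
# and configuration are all fixed when the image is built.
# (Base image name is illustrative.)
FROM registry.access.redhat.com/ubi8/openjdk-11

# The fully built, versioned artifact from the CI pipeline,
# never patched in place on a running server.
COPY target/order-service-1.2.3.jar /deployments/app.jar
COPY config/application.properties /deployments/

EXPOSE 8080
CMD ["java", "-jar", "/deployments/app.jar"]
```

Shipping a change means building a new image (say, 1.2.4) and rolling it out; a rollback is just redeploying the previous image tag, with no configuration drift to undo.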
Docker came along a few years ago with an elegant solution to immutable delivery. Docker allows us to package our applications with all of the dependencies they need (OS, JVM, other application dependencies, etc.) in a lightweight, layered image format. Additionally, Docker uses these images to run instances which run our applications inside Linux containers with isolated CPU, memory, network, and disk usage. In a way, these containers are a form of "application virtualization" or "process virtualization." They allow a process to execute thinking it's the only thing running (i.e., list processes with `ps` and you see only your application's process there) and that it has full access to the CPUs, memory, disk, network, and other resources, when in reality it doesn't. It can only use resources it's allocated. For example, I can start a Docker container with a slice of CPU, a segment of memory, and limits on how much network IO can be used. From outside the Linux container, on the host, the application just looks like another process. There is no virtualization of device drivers, operating systems, or network stacks, and no special hypervisors. It's just a process. This fact also means we can get even more applications running on a single set of hardware for higher density, without the overhead of additional operating systems and the other pieces of a VM that would be required to achieve similar isolation qualities.
Back in 2013, when Docker rocked the technology industry, Google decided it was time to open-source their next-generation successor to Borg, which they named Kubernetes. Today, Kubernetes is a large, open, and rapidly growing community with contributions from Google, Red Hat, CoreOS, and many others (including lots of independent individuals!). Kubernetes brings a lot of functionality for running clusters of microservices inside Linux containers at scale. Google has packaged over a decade of experience into Kubernetes, so being able to leverage this knowledge and functionality for our own microservices deployments is game changing. The web-scale companies have been doing this for years, and a lot of them (Netflix, Amazon, etc.) had to hand-build many of the primitives that Kubernetes now has baked in. Kubernetes has a handful of simple primitives that you should understand before digging into examples; we'll introduce these concepts here and then make use of them for managing a cluster of microservices.
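As a sketch of two of those primitives (names and image are made up, and the `apps/v1` API group shown is the modern form, not what early Kubernetes used), a Deployment keeps a set of container replicas running, and a Service load-balances across them by matching their labels:

```yaml
# Hypothetical manifest: the Deployment maintains 3 replicas of the
# container image; the Service selects them by label and load-balances.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: order-service
  template:
    metadata:
      labels:
        app: order-service
    spec:
      containers:
      - name: order-service
        image: example/order-service:1.2.3
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: order-service
  ports:
  - port: 80
    targetPort: 8080
```

The label selector is the key decoupling: replicas come and go (cattle, not pets), and the Service keeps routing to whatever currently matches.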
Red Hat OpenShift 3.x is an Apache v2-licensed, open-source, developer self-service platform (OpenShift Origin: https://github.com/openshift/origin) that has been revamped to use Docker and Kubernetes. OpenShift at one point had its own cluster management and orchestration engine, but with the knowledge, simplicity, and power that Kubernetes brings to the world of container cluster management, it would have been silly to try to re-create yet another one. The broader community is converging around Kubernetes, and Red Hat is all in with Kubernetes.
OpenShift has many features, but one of the most important is that it's still native Kubernetes under the covers and adds the features many enterprises need: role-based access control, out-of-the-box software-defined networking, security, logins, developer builds, and many other things.
The Red Hat CDK allows us to develop using the same technology as a world-class PaaS directly on our laptops. We can run builds locally, test things out, wire up services, and, when we're comfortable, push to a CaaS or PaaS like OpenShift to run the build pipeline/CI steps, perform validations and security checks, and begin the application lifecycle management steps toward production. We can fit in with existing tooling like Git, Jenkins, and Nexus and integrate with the OpenShift Docker registry to do build promotions and so forth.
Quick demo of rider-auto-openshift on CDK
https://github.com/christian-posta/rider-auto-openshift/tree/ceposta-add-rest-module
Keeping up with “change” and building an organization to be agile is a challenge in its own right.
From a technology perspective we’d like to give service teams more autonomy, self-service, and responsibility.
Previous versions of fabric8 were built specifically for Java developers and for specific flavors of the JVM. In fabric8 2.0, instead of rebuilding everything the Docker and Kubernetes communities were building, we've rebased everything on top of the Kubernetes API and can take advantage of its out-of-the-box features. We've also built things like CI/CD with visualization of environments, a Chaos Monkey to help prove out the resilience of our distributed systems, etc.
Playback recording? Or do live demo of fabric8 CI/CD?
Show and talk to this demo:
https://blog.fabric8.io/create-and-explore-continuous-delivery-pipelines-with-fabric8-and-jenkins-on-openshift-661aa82cb45a#.p1apj49e5