Microservices, DevOps, and Continuous Delivery - Khalid Salama
Continuous Delivery is the ability to get software changes - including new features, enhancements, configuration changes, and bug fixes - into production safely and quickly, in a sustainable way. In these slides, I give a very high-level introduction to microservices architecture and why it is considered an enabler of continuous delivery. We cover the key characteristics of a microservice, some common concepts, architectural patterns, and implementation guidelines. In addition, we quickly cover the main concepts and activities in DevOps, which is the Application Lifecycle Management process that supports continuous delivery.
Cloud Foundry Summit 2015: Making the Leap - VMware Tanzu
Speaker: Richard Seroter, CenturyLink
To learn more about Pivotal Cloud Foundry, visit http://www.pivotal.io/platform-as-a-service/pivotal-cloud-foundry.
Oracle Solaris: Build and Run Applications Better on 11.3 - OTN Systems Hub
Build and Run Applications Better on Oracle Solaris 11.3
Tech Day, NYC
Liane Praza, Senior Principal Software Engineer
Ikroop Dhillon, Principal Product Manager
June, 2016
Cross-cloud interoperability with iPaaS and serverless for Telco cloud SDN/NFV - Krishna-Kumar
An overview of how SDN/NFV can be orchestrated with serverless and iPaaS environments, typically in a hybrid cloud world. Cross-cloud interoperability for Telco cloud.
Keynote: Architecting for Continuous Delivery (Pivotal Cloud Platform Roadshow) - VMware Tanzu
Continuous Delivery & Microservices with Matt Stine, Platform Engineer at Pivotal.
Microservices - small, loosely coupled applications that follow the Unix philosophy of "doing one thing well" - represent the application development side of enabling rapid, iterative development, horizontal scale, and polyglot clients. Microservices also help enable continuous delivery and scaling application development while eliminating long-term commitments to a single technology stack.
Pivotal Cloud Platform Roadshow is coming to a city near you!
Join Pivotal technologists and learn how to build and deploy great software on a modern cloud platform. Find your city and register now http://bit.ly/1poA6PG
Oracle Warehouse Builder to Oracle Data Integrator 12c Migration Utility - Noel Sidebotham
As Oracle Warehouse Builder nears the end of extended support, customers need to consider their migration options.
In this WebEx we'll discuss this topic and aim to answer questions like: Which tool should I use for new projects? What should be done with existing implementations? And why should I migrate to ODI?
In this session you will learn about:
• Oracle Data Integrator 12c, concepts and features
• The OWB2ODI migration utility
• How to successfully migrate OWB projects to ODI
• Customer success stories
• New features of ODI 12c that are getting ETL developers excited, including Big Data and Hybrid Cloud support
Redefining HCI: How to Go from Hyper Converged to Hybrid Cloud Infrastructure - NetApp
The hyper converged infrastructure (HCI) market is entering a new phase of maturity. A modern HCI solution requires a private cloud platform that integrates with public clouds to create a consistent hybrid multi-cloud experience.
During this webinar, NetApp and an IDC guest speaker covered what led to the next generation of hyper converged infrastructure and which five capabilities are required to go from hyper converged to hybrid cloud infrastructure.
IDC Datacenter of the Future: Oracle Point of View - Riccardo Romani
The datacenter is (NOT) dead! It's alive and kicking and is evolving as IDC predicts: smaller, smarter, and cloud-ready! Slides from the Milan event held on April 27th.
Help me move away from Oracle! (RMOUG Training Days 2022, February 2022) - Lucas Jellema
Organizations with decades of investment in Oracle technology sometimes (and increasingly) express a wish to move away from Oracle. In this session, we will first explore where the desire to move away from Oracle might come from. Then we describe what the term Oracle represents: more than 2,000 products on all layers of the technology stack and in different business areas. Finally, we map out what the 'moving away from' consists of: defining where you 'move to' and subsequently actually going there.
It will become clear why you should give considerable thought about dropping Oracle, or any other vendors' technology, when you're not pleased with your current IT situation. You need to focus on the actual problems and objectives and define the suitable roadmap to fit your real needs. It turns out that the quest is usually for modernization and flexibility - and Oracle can very well be a part of that future.
Load data on demand into a cache from a data store. This can improve performance and also helps to maintain consistency between data held in the cache and data in the underlying data store.
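The on-demand loading described above is commonly called the cache-aside pattern. A minimal sketch, assuming a dict-like backing store; the class and method names here are illustrative, not from any particular library:

```python
# Cache-aside sketch: consult the cache first; on a miss, load the value
# from the backing store and cache it for next time. `store` stands in
# for any slower data source (a database, a remote API, ...).

class CacheAside:
    def __init__(self, store):
        self.store = store      # backing data store (dict-like here)
        self.cache = {}         # in-memory cache, populated on demand

    def get(self, key):
        if key in self.cache:               # cache hit
            return self.cache[key]
        value = self.store[key]             # cache miss: load from the store
        self.cache[key] = value             # populate the cache on demand
        return value

    def update(self, key, value):
        self.store[key] = value             # write the store...
        self.cache.pop(key, None)           # ...and invalidate the cached copy

db = {"user:1": "Ada"}
c = CacheAside(db)
print(c.get("user:1"))   # loaded from the store, then cached
c.update("user:1", "Grace")
print(c.get("user:1"))   # stale entry was invalidated, so the new value is read
```

Invalidating on update (rather than overwriting the cache) is what keeps the cache consistent with the underlying store, as the description notes.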
Oracle Integration Cloud Service (ICS) best practices learned from the field - Robert van Mölken
Integration Cloud Service (ICS) provides a cloud-hosted, graphical way to define and represent integrations between systems. This presentation demonstrates how ICS can be used to effectively implement integrations that work both in the cloud and on-premises. It discusses customer best practices, shows the audience how to implement integrations with ICS, covers patterns and challenges, and gives useful insights into ICS. This should equip the audience with the knowledge to use ICS for their own integration needs, such as replacing tedious manual processes of moving data from one system to another with automation through integration.
Scaling AI/ML with Containers and Kubernetes - Tushar Katarki
AI is popular yet faces several challenges in the industry: 1) self-service and automation, 2) deployment into production, and 3) access to data. These challenges can be addressed with containers and Kubernetes, which help you build AI-as-a-service with open source tools. Data scientists can use the service for data, experimentation, and delivering models into production iteratively with self-service and automation. Using Kubernetes, one can run massive machine learning pipelines iteratively, in an automated and repeatable fashion.
ODSC East 2020: Accelerate ML Lifecycle with Kubernetes and Containerized Da... - Abhinav Joshi
This deck provides an overview of containers and Kubernetes, and how these technologies can help solve the challenges faced by data scientists, ML engineers, and application developers. Next, it showcases the key capabilities required in a containers-and-Kubernetes platform to help data scientists easily use technologies like Jupyter Notebooks, ML frameworks, and programming languages to innovate faster. Finally, it discusses the available platform options (e.g., Kubeflow, Open Data Hub) and some examples of how data scientists are accelerating their ML initiatives with a containers-and-Kubernetes platform.
Exploring Kubeflow on Kubernetes for AI/ML | DevNation Tech Talk - Red Hat Developers
The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable, and scalable by leveraging best-of-breed open source projects. These include Jupyter Notebooks, TensorFlow, and PyTorch for training; Seldon and KFServing for serving; and Kubeflow Pipelines. These are all wrapped up neatly in an easy-to-use portal so developers and data scientists can easily collaborate and deliver production-ready AI/ML workloads.
Runtime Fabric on OpenShift - MuleSoft Meetup Deck.pptx - Sandeep Deshmukh
Runtime Fabric will add native support for the OpenShift container platform later this year. OpenShift has one of the most significant footprints among enterprise customers who want to adopt an easy-to-use Kubernetes-based platform to streamline their operations and increase developer productivity.
Open source grid middleware packages – Globus Toolkit (GT4) architecture, configuration – usage of Globus – main components and programming model – Introduction to the Hadoop framework – MapReduce, input splitting, map and reduce functions, specifying input and output parameters, configuring and running a job – design of the Hadoop file system, HDFS concepts, command-line and Java interfaces, dataflow of file read & file write.
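The MapReduce topics listed above (input splitting, map and reduce functions) can be sketched without Hadoop itself. The following pure-Python toy word count shows the programming model; the function names are illustrative, not part of the Hadoop API:

```python
from collections import defaultdict

def map_fn(line):
    # map: emit (word, 1) pairs for each word in one input split
    for word in line.split():
        yield (word.lower(), 1)

def reduce_fn(word, counts):
    # reduce: combine all values emitted for the same key
    return (word, sum(counts))

def run_job(lines):
    # shuffle phase: group intermediate (key, value) pairs by key,
    # which is what the framework does between map and reduce
    groups = defaultdict(list)
    for line in lines:                  # input splitting: one line per map call
        for key, value in map_fn(line):
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

print(run_job(["the quick fox", "the lazy dog"]))
# {'the': 2, 'quick': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

In Hadoop, the same three phases run distributed across the cluster, with the splits read from HDFS and the shuffle performed over the network.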
Analytics and Lakehouse Integration Options for Oracle Applications - Ray Février
This Red Hot session is designed for customers who are currently using Oracle Cloud applications such as Fusion and EPM, and are interested in gaining a better understanding of the integration options that are available to them.
Here is a high level agenda:
- We will start by discussing the modern data platform on OCI, the Lakehouse architecture, and the OCI services that support it.
- We will then discuss the data extraction methods available on OCI for Fusion and EPM.
- Last but not least, we will end with a few best practices and possible use cases.
In the interest of time, we will mainly focus on integration patterns that are recommended for Fusion and EPM, but don’t hesitate to reach out if you would like to talk to us about other Oracle applications.
Enjoy!
Oracle Unified Information Architecture + Analytics by Example - Harald Erb
The talk first gives an architectural overview of the UIA components and how they interact. Using a use case, it shows how the "UIA Data Reservoir" lets you cost-effectively keep current data "as is" in a Hadoop File System (HDFS) on the one hand and refined data in an Oracle 12c Data Warehouse on the other, combine the two, analyze them via direct access in Oracle Business Intelligence, or explore them for new relationships with Endeca Information Discovery.
Extending open source and hybrid cloud to drive OT transformation - Future Oi... - John Archer
A look at ESG concerns and the agility needed to address pressures to transform energy organizations through decarbonization. Presented at the Future Oil and Gas conference, November 2021.
How Pig and Hadoop fit in a data processing architecture - Kovid Academy
Pig, developed by Yahoo Research in 2006, enables programmers to write data transformation programs for Hadoop quickly and easily, without the cost and complexity of MapReduce programs.
All data accessible to all my organization - Presentation at OW2con'19, June... - OW2
It is clear that all employees must have access to data wherever they are in order to make decisions. This requires tools that allow data to be shared just as easily as the best collaborative tools, such as a Google Doc or Office 365.
Open source, driven by the big data ecosystem and a number of large companies, has provided solutions that allow organizations to federate data systems and secure access to them.
After a quick overview of existing open source solutions and how such projects can be organized, the talk details the Dremio implementation, a unique and centralized interface over all your data. Real-world feedback will conclude the presentation.
Top 10 Data Analytics Tools to Look for in 2021 - Mobcoder
This write-up covers the top 10 tools used by data analysts, architects, scientists, and other professionals. Each tool has specific features that make it an ideal fit for a particular task, so choose wisely depending on your business needs, the type of data, the volume of information, and your experience in analytical thinking.
Red Hat Enterprise Linux OpenStack Platform Director - Orgad Kimchi
Red Hat Enterprise Linux OpenStack Platform director is a toolset for installing and managing a complete OpenStack environment. It is based primarily on the OpenStack project TripleO, which is an abbreviation for "OpenStack-On-OpenStack". This project takes advantage of OpenStack components to install a fully operational OpenStack environment. This includes new OpenStack components that provision and control bare metal systems to use as OpenStack nodes. This provides a simple method for installing a complete Red Hat Enterprise Linux OpenStack Platform environment that is both lean and robust.
Oracle Solaris 11.2 - Engineered for Cloud
Oracle Solaris provides an efficient, secure and compliant, simple, open, and affordable solution for deploying your enterprise-grade clouds. More than just an operating system, Oracle Solaris 11.2 includes features and enhancements that deliver no-compromise virtualization, application-driven software-defined networking, and a complete OpenStack distribution for creating and managing an enterprise cloud, enabling you to meet IT demands and redefine your business.
For more information: http://www.oracle.com/technetwork/server-storage/solaris11/overview/beta-2182985.html
Performance analysis in a multitenant cloud environment using Hadoop Cluster ... - Orgad Kimchi
Analyzing the performance of a virtualized multitenant cloud environment can be challenging because of the layers of abstraction. This article shows how to use Oracle Solaris 11 to overcome those limitations.
For more information see:
http://www.oracle.com/technetwork/articles/servers-storage-admin/perf-analysis-multitenant-cloud-2082193.html
Oracle Solaris 11 as a Big Data Platform: Apache Hadoop Use Case - Orgad Kimchi
The following are benefits of using Oracle Solaris Zones for a Hadoop cluster:
• Fast provisioning of new cluster members using the zone cloning feature
• Very high network throughput between the zones for data node replication
• Optimized disk I/O utilization for better I/O performance with ZFS built-in compression
• Secure data at rest using ZFS encryption
For more information see: http://www.oracle.com/technetwork/articles/servers-storage-admin/howto-setup-hadoop-zones-1899993.html
Oracle Solaris 11 is the first operating system engineered with cloud computing in mind. So what's new in Oracle Solaris 11, and how does that connect to the cloud? If you're involved in application lifecycle management, configuration management, cloud deployment, big data design, or application or infrastructure scaling, you will learn how to leverage Solaris 11 technologies to build your cloud infrastructure.
For more information see: http://www.oracle.com/technetwork/systems/hands-on-labs/hol-oracle-solaris-remote-lab-1894053.html
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These will only be gained when the symbolic structures have an actual semantics. I give an operational definition of semantics as “predictable inference”.
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
Essentials of Automations: Optimizing FME Workflows with Parameters - Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
Kubernetes & AI - Beauty and the Beast!?! @ KCD Istanbul 2024 - Tobias Schneck
As AI technology pushes into IT, I was wondering, as an "infrastructure container Kubernetes guy", how this fancy AI technology gets managed from an infrastructure operations view. Is it possible to apply our lovely cloud native principles as well? What benefits could both technologies bring to each other?
Let me take these questions and provide you with a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need to apply AI to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial to or limiting for your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I already have working for real.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow and manual security checks, has caused gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Key Trends Shaping the Future of Infrastructure - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: exploring how these areas are likely to mature and develop over the short and long term, and considering how organisations can position themselves to adapt and thrive.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
Accelerate your Kubernetes clusters with Varnish Caching - Thijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
"Impact of front-end architecture on development cost", Viktor Turskyi - Fwdays
I have heard many times that architecture is not important for the front-end. I have also seen many times how developers implement front-end features just by following the standard rules of a framework, thinking that this is enough to successfully launch the project, and then the project fails. How can this be prevented, and which approach should you choose? I have launched dozens of complex projects, and during the talk we will analyze which approaches have worked for me and which have not.
Search and Society: Reimagining Information Access for Radical Futures - Bhaskar Mitra
The field of information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs, while dismantling the artificial separation between work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build, inspired by diverse, explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies need to be explicitly articulated, and we need to develop theories of change in the context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualityInflectra
In this insightful webinar, Inflectra explores how artificial intelligence (AI) is transforming software development and testing. Discover how AI-powered tools are revolutionizing every stage of the software development lifecycle (SDLC), from design and prototyping to testing, deployment, and monitoring.
Learn about:
• The Future of Testing: How AI is shifting testing towards verification, analysis, and higher-level skills, while reducing repetitive tasks.
• Test Automation: How AI-powered test case generation, optimization, and self-healing tests are making testing more efficient and effective.
• Visual Testing: Explore the emerging capabilities of AI in visual testing and how it's set to revolutionize UI verification.
• Inflectra's AI Solutions: See demonstrations of Inflectra's cutting-edge AI tools like the ChatGPT plugin and Azure Open AI platform, designed to streamline your testing process.
Whether you're a developer, tester, or QA professional, this webinar will give you valuable insights into how AI is shaping the future of software delivery.
Builder.ai Founder Sachin Dev Duggal's Strategic Approach to Create an Innova...Ramesh Iyer
In today's fast-changing business world, companies that fail to adapt and embrace new ideas often struggle to keep up with the competition. However, fostering a culture of innovation is no easy task. It takes vision, leadership, and a willingness to take risks in the right proportion. Sachin Dev Duggal, co-founder of Builder.ai, has perfected the art of this balance, creating a company culture where creativity and growth are nurtured at each stage.
From Siloed Products to Connected Ecosystem: Building a Sustainable and Scala...
Deploying and Managing Artificial Intelligence Services using the Open Data Hub Project on Openshift Container Platform
1.
Deploying and Managing Artificial
Intelligence Services using the Open
Data Hub Project on Openshift
Container Platform
Orgad Kimchi
Associate Manager, Consulting
Sep 18 2019
2. ● Introduction
● Why Red Hat for AI/ML?
● AI as a Service High-Level Architecture
● What is Open Data Hub?
● Representative Open AI/ML Architectures
● Demo
Agenda
3. ● Artificial Intelligence/Machine Learning (AI/ML) is a critical part of the
Digital Transformation journey for many customers. Autonomous vehicles and
manufacturing are just some of the key markets being transformed by
AI/ML.
● Customers trying to adopt AI/ML face significant hurdles today.
Proprietary systems are unable to keep up with the rapid evolution of the
AI/ML ecosystem.
● The cost, complexity and vendor lock-in of existing solutions pose
significant challenges for AI/ML implementations at scale.
Introduction
4. ● Proprietary AI/ML solutions are unable to keep up with the rapidly evolving AI/ML
ecosystem. Open source and open standards architectures from Red Hat provide
the necessary agility, flexibility and transparency needed to evolve the customer’s
AI/ML environment over time.
● Vendor lock-in can also be expensive and limit choice. Red Hat has a large
ecosystem of partners and systems integrators to help customers with all the
facets of an end-to-end AI/ML solution. This approach utilizes the customer's
existing infrastructure and partner integrations. One example is the OpenShift
certification with NVIDIA GPUs and workload affinity integrations.
● Rapid automation, massive scalability and efficient lifecycle operations with
containers and Kubernetes are the foundations of the Red Hat AI/ML solution.
Why Red Hat for AI/ML?
7. ● The Open Data Hub is a machine-learning-as-a-service platform built on Red
Hat's Kubernetes-based OpenShift Container Platform, Ceph object storage, and
Kafka/Strimzi, integrating a collection of open source projects.
● It inherits from upstream efforts such as Kubeflow and is the base of Red
Hat's internal data science and ML service.
● Data scientists can create models using Jupyter notebooks, and select from
popular tools such as TensorFlow™, scikit-learn, Apache Spark™ and more
for developing models.
● Teams can spend more time solving critical business needs and less on
installing and maintaining infrastructure with the Open Data Hub.
Source : https://opendatahub.io/
What is Open Data Hub?
8. ● Ceph is a massively scalable open source object store. It can run natively in
OpenShift or as a standalone cluster for optimized performance. Ceph provides a
scalable storage cluster native to OpenShift, allowing the distributed storage
of the massive data sets typical of AI/ML workflows.
● Ceph is ideal for storing unstructured data from multiple sources, which also
suits large AI/ML dataset ingestion. Ceph provides an S3-compatible RESTful API
that is widely supported and simple to use, making stored and transformed AI/ML
data easily accessible.
● Ceph is deployed on OpenShift via Rook (https://rook.io), a storage operator
that provides a user-friendly way to deploy and integrate Ceph into the
OpenShift ecosystem.
Included Components
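As a concrete illustration of the Rook-based deployment described above, a Ceph cluster is declared to OpenShift as a `CephCluster` custom resource. The sketch below is illustrative only; the image tag and storage settings are assumptions, so consult the Rook documentation for the fields your version supports:

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v14.2   # illustrative tag; pin to a tested release
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                 # an odd number of monitors for quorum
  storage:
    useAllNodes: true
    useAllDevices: false     # restrict to specific devices in production
```

Once the cluster is up, the Rook object store and its S3 endpoint can be consumed by AI/ML workloads in the same way as any other S3-compatible store.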
9. ● The Apache Spark™ operator is an open source Kubernetes operator for
Apache Spark™.
● It is developed as part of the Radanalytics community (https://radanalytics.io/)
to provide distributed Spark cluster workloads on OpenShift.
● This implementation creates a Spark cluster with master and worker/executor
processes.
● The distributed parallel execution provided by Spark clusters is typical of,
and essential to, the success of AI/ML workloads.
Included Components
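To illustrate, the Radanalytics operator lets you request a Spark cluster declaratively through a `SparkCluster` custom resource. The resource name and instance counts below are illustrative assumptions:

```yaml
apiVersion: radanalytics.io/v1
kind: SparkCluster
metadata:
  name: my-spark-cluster    # hypothetical name
spec:
  master:
    instances: 1            # one Spark master process
  worker:
    instances: 2            # scale workers to match the workload
```

The operator reconciles this resource into master and worker pods, so scaling the cluster is a matter of editing `worker.instances` rather than managing processes by hand.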
10. JupyterHub (https://jupyter.org/hub) is an open source multi-user notebook platform. ODH provides it with
multiple notebook image streams that incorporate embedded features such as Spark libraries and connectors.
JupyterHub offers a multi-user experience, allowing data scientists to run notebooks in their own workspaces.
Authentication can also be customized as a pluggable component to support protocols such as OAuth. Data
scientists can use familiar tools such as Jupyter notebooks to develop complex algorithms and models, with
frameworks such as NumPy, scikit-learn, TensorFlow and more available for use.
Prometheus (https://prometheus.io/) is an open source monitoring and alerting tool that is widely adopted across
many enterprises. Prometheus can be configured to monitor targets by scraping or pulling metrics from the
target’s HTTP endpoint and storing the metric name and a set of key-value pairs in a time series database. For
graphing or querying this data, Prometheus provides a web portal with rudimentary options to list and graph the
data. It also provides an endpoint for more powerful visualization tools such as Grafana to query the data and
create graphs. An Alert Manager is also available to create alert rules to produce alerts on specific metric
conditions.
Grafana (https://grafana.com/) is an open source tool for data visualization and monitoring. Data sources such as
Prometheus can be added to Grafana for metrics collection. Users create dashboards that include
comprehensive graphs or plots of specific metrics. It includes powerful visualization capabilities for graphs, tables,
and heatmaps. Ready-made dashboards for different data types and sources are also available, giving Grafana
users a head start. It also supports a wide variety of plugins, so users can incorporate
community-powered visualization tools for things such as scatter plots or pie charts.
Included Components
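To make the pull model above concrete, Prometheus scrapes plain-text samples such as `http_requests_total{method="get",code="200"} 1027` from a target's HTTP endpoint. The minimal sketch below parses one such line into the metric name, its key-value label pairs, and the sample value; it deliberately ignores escaping and other corner cases handled by the real exposition-format parsers:

```python
import re

# A single sample in the Prometheus text exposition format looks like:
#   http_requests_total{method="get",code="200"} 1027
# Prometheus stores the metric name plus its label pairs in a time series.
SAMPLE_RE = re.compile(
    r'^(?P<name>[a-zA-Z_:][a-zA-Z0-9_:]*)'   # metric name
    r'(?:\{(?P<labels>[^}]*)\})?'            # optional {key="value",...}
    r'\s+(?P<value>[0-9.eE+-]+)$'            # sample value
)

def parse_sample(line):
    """Parse one exposition-format line into (name, labels, value).

    Simplified sketch: label values containing commas or escaped
    quotes are not handled.
    """
    m = SAMPLE_RE.match(line.strip())
    if m is None:
        raise ValueError("not a valid sample line: %r" % line)
    labels = {}
    if m.group("labels"):
        for pair in m.group("labels").split(","):
            key, _, val = pair.partition("=")
            labels[key.strip()] = val.strip().strip('"')
    return m.group("name"), labels, float(m.group("value"))
```

A scraper would apply this line by line to the body returned by the target's `/metrics` endpoint, which is exactly the data Grafana later queries back out of Prometheus for visualization.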
12. Reproducibility
A fundamental concern for many AI/ML use cases is reproducibility. This
implies both that results are reproducible and that the environments used to
produce these results are reproducible.
Reproducible application environments and reproducible application
deployments are a core feature of OpenShift, and the same functionality that
provides this capability to general distributed applications can also enable
reproducible machine learning models, pipelines, systems, and applications.
Why OpenShift?
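Reproducible container environments take care of the software stack, but the model code must also pin its own sources of randomness. A minimal sketch using Python's standard library (a real pipeline would seed its ML framework's generators in the same spirit):

```python
import random

def init_weights(n, seed):
    """Initialize n pseudo-random weights from a fixed seed.

    Pinning the seed, alongside a pinned container image, makes the
    model initialization reproducible across container restarts.
    """
    rng = random.Random(seed)  # isolated generator, not global state
    return [rng.uniform(-1.0, 1.0) for _ in range(n)]

# Two runs with the same seed yield identical weights; a different
# seed yields a different, but equally reproducible, initialization.
run_a = init_weights(4, seed=42)
run_b = init_weights(4, seed=42)
```

Recording the seed together with the image digest and the data snapshot is what turns "the environment is reproducible" into "the trained model is reproducible".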
13. Security, access control, and isolation
An equally important set of concerns for machine learning systems relates to security. Machine learning
systems often deal with sensitive or valuable data (including confidential data).
OpenShift addresses these concerns: the namespace mechanism provides lightweight isolation between distinct
applications; internal service routing means that model services deployed within the same namespace as an
application are not accessible to the outside world by default; secret management makes it possible for
components to securely store credentials for sensitive data sources; and namespace isolation, quotas, and
scheduling policies combine to keep misbehaving components from impacting others.
For deployments that require exposing model services beyond the scope of an application in OpenShift, the
3Scale API gateway and RH-SSO from Red Hat’s application development portfolio provide powerful tools to
authenticate, authorize, meter, and gate access to these services.
Why OpenShift?
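As an illustration of the secret management mentioned above, credentials for a sensitive data source can be stored as a namespaced Secret and mounted into model-training pods rather than baked into images. The resource name, namespace, and keys below are hypothetical:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: training-data-credentials   # hypothetical name
  namespace: fraud-detection        # hypothetical namespace
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: "..."          # credentials for the S3/Ceph data source
  AWS_SECRET_ACCESS_KEY: "..."
```

Because the Secret lives in a namespace, the same isolation, quota, and RBAC mechanisms that protect the rest of the application also control which components may read these credentials.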
14. Elasticity, scale-out, and federation
An essential part of systems designed for the cloud, and for the contemporary
hybrid cloud in particular, is that their components scale elastically to exploit
available resources and meet shifting demand, potentially even scaling or
migrating across multiple clouds (e.g., some combination of internal clouds and
distinct public cloud providers).
These capabilities are provided by fundamental functionality in OpenShift, which is
an abstraction layer for the hybrid cloud: scaling application components (whether
on-demand or in response to application metrics), migrating stateless and stateful
services, and scheduling applications against resources federated from multiple
clouds.
Why OpenShift?
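For example, scaling application components in response to load can be expressed declaratively with a standard HorizontalPodAutoscaler. The deployment name and thresholds below are hypothetical:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: model-server-hpa      # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: model-server        # the model-serving deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # scale out above 70% average CPU
```

The platform then adds or removes model-serving replicas as demand shifts, without changes to the application itself.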
15. Flexible scheduling of heterogeneous resources
Machine learning systems don't just require a scheduler for conventional
applications running on identical commodity hardware: they may also benefit from
the ability to schedule particular tasks where they can take advantage of
hardware accelerators such as GPUs.
OpenShift is flexible enough to schedule both conventional application components
and specialized compute workloads, including those that require close coupling
between parallel tasks, guaranteed memory or network bandwidth, or access to
accelerator hardware.
Why OpenShift?
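Scheduling a task onto GPU-equipped nodes reduces to declaring the accelerator as a resource: with the NVIDIA device plugin installed, a pod requests a GPU in its resource limits and the scheduler places it accordingly. The pod name and image below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job      # hypothetical name
spec:
  containers:
  - name: trainer
    image: tensorflow/tensorflow:latest-gpu   # illustrative image
    resources:
      limits:
        nvidia.com/gpu: 1     # exposed by the NVIDIA device plugin
        memory: "8Gi"         # guaranteed memory for the training task
```

Pods that do not request `nvidia.com/gpu` are never placed on the accelerator, so conventional and specialized workloads can share the same cluster.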
17. Fraud Detection Using OpenDataHub on OpenShift
https://www.youtube.com/watch?v=IcQ2bhsw_kQ
Use Cases
18. The Open Data Hub operator deploys and manages the various components using the
Operator SDK from the Operator Framework.
There are two options for deploying the ODH operator: manually, or using Operator
Lifecycle Manager (OLM). Both require OpenShift 3.11 or 4.0 and an installation of
Ceph using the Rook operator.
The latest version of the Open Data Hub operator project is located here:
https://gitlab.com/opendatahub/opendatahub-operator
The latest version of the Open Data Hub operator image is located here:
https://quay.io/opendatahub/opendatahub-operator
Installation
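Once the operator is running, components are enabled through its custom resource. A hypothetical minimal sketch follows; the component names and flags vary by operator version, so check the project repository above for the exact schema:

```yaml
apiVersion: opendatahub.io/v1alpha1
kind: OpenDataHub
metadata:
  name: example-opendatahub
spec:
  # Component names and flags below are illustrative assumptions;
  # consult the opendatahub-operator repository for your version.
  aicoe-jupyterhub:
    odh_deploy: true
  spark-operator:
    odh_deploy: true
  prometheus:
    odh_deploy: true
  grafana:
    odh_deploy: true
```

Applying this resource in a project asks the operator to reconcile the selected components (JupyterHub, the Spark operator, Prometheus, Grafana) into that namespace.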