This document discusses the transition from DevOps to DataOps. It begins by introducing the speaker, Kellyn Pot'Vin-Gorman, and their background. It then provides definitions and histories of DevOps and some common DevOps tools and practices. The document argues that database administrators (DBAs) need to embrace DevOps tools and practices like automation, version control, and database virtualization in order to stay relevant. It presents database virtualization and containerization as ways to overcome "data gravity" and better enable continuous delivery of database changes. Finally, it discusses how methodologies like Agile, Scrum, and Kanban can be combined with data-centric tools to transition from DevOps to DataOps.
Security Implications for a DevOps Transformation - Deborah Schalm
If your organization is undergoing a DevOps transformation, you’re probably thinking about where security fits in. All too often, we tack on security testing at the end of the delivery process, which means significant problems go undetected until development is complete. As we adopt DevOps principles and practices, we enable a natural solution to this problem: ensure that security experts are involved throughout the delivery process.
In this webinar, DevOps.com and Puppet defined a reference implementation of DevOps from the ground up, by illustrating how the software delivery process evolves at a hypothetical startup. Once we've laid a technical foundation for DevOps, we discussed the implications for security. We also discussed:
Benefits for and challenges to security during a DevOps transformation
How to craft a DevOps-ready security practice
Refinements of a standard DevOps workflow to address security needs
KEYNOTE | WHAT'S COMING IN THE NEXT 10 YEARS OF DEVOPS? // ELLEN CHISA, bolds... - DevOpsDays Tel Aviv
Fifteen years ago, we'd barely started to use S3, and ten years ago DevOps was the new thing. Today, we can add a new tool, technology, or trick every week, and more and more work is shifted into the application developer's workflow. If security, resiliency, and incident response become part of product teams, where will we be ten years from now, and what should we do today to get ready?
Today, organizations of all shapes and sizes depend on feature-packed application releases to keep end users productive and happy. In their new book, The DevOps Handbook: How to Create World-Class Agility, Reliability, and Security in Technology Organizations, Gene Kim and his co-authors shared ways that high-performing organizations use DevOps principles to enable reliable deployments - and boring releases!
Gene Kim, CTO, DevOps researcher and co-author of the DevOps Handbook and The Phoenix Project, and Anders Wallgren, CTO of Electric Cloud shared their tips for overcoming the challenges of DevOps and Continuous Delivery at scale. During the webinar, they discussed:
- The business value of DevOps
- How to eliminate “deployment anxiety” and increase business agility
- Lessons learned from large scale DevOps transformations
- The advantages and disadvantages of practicing DevOps in large organizations
DevOps: A Culture Transformation, More than Technology - CA Technologies
DevOps is not a new technology or a product. It's an approach or culture of software development that seeks stability and performance at the same time that it speeds software deliveries to the business. We will discuss this cultural shift, in which development teams must accept feedback from operations teams, and operations teams must be ready to accept frequent updates to the software they run.
To learn more about DevOps solutions from CA Technologies, please visit: http://bit.ly/1wbjjqX
DevOps Will Save The World!: Public Safety, Public Policy, and DevOps In Context
Joshua Corman, CTO, Sonatype
Link to video: https://www.youtube.com/watch?v=K-hskShNyoo
The Role of Automation in the Journey to Continuous Delivery - XebiaLabs
Presenters Robert Reeves, CTO and Cofounder of Datical, and Tim Buntel, VP of Products at XebiaLabs, give an expert presentation on the role of automation in Continuous Delivery. Find the entire webinar here: https://xebialabs.com/community/webinars/
DevOps is not a new technology or a product. It’s an approach or culture of software development that seeks stability and performance at the same time that it speeds software deliveries to the business. In this sharing, we will discuss what DevOps is from CAMS model that represents culture, automation, measurement and sharing. In addition, I will share some practical experiences in Trend Micro.
Talk about the basic principles and concepts of CI/CD as a set of practices in order to reduce integration errors through automated implementations for testing and deployments as well as the tooling behind this philosophy.
DevOps and the Importance of Single Source Code Repos - Perforce
Companies are increasingly moving to DevOps practices to streamline product development and delivery. In this presentation DevOps author and evangelist Gene Kim will discuss how version control has moved from a development concern to a fundamental practice for everyone in the value stream, especially Operations. He will discuss the importance of the single, shared source code repository in high performing technology organizations.
He will discuss the research he has done over the last 16 years about the top predictors of DevOps performance, and how best to overcome the cultural and workflow friction that can exist between Development teams and Operations.
Cloud and DevOps are independent but mutually reinforcing strategies for delivering business value through IT, and the pace of disruption is accelerating.
If cloud is an instrument, then DevOps is the conductor that plays it. DevOps principles are transforming the way leading enterprises are shortening work cycles, increasing delivery frequency, and helping them adopt an attitude of continual experimentation.
These slides were used in a recent webcast featuring Kevin Behr, co-author of The Phoenix Project and VisibleOps Handbook and Mike Baukes, co-founder of ScriptRock who explored key aspects of how cloud computing can be leveraged to deliver ideas to market faster by activating DevOps principles in your IT Enterprise.
The live webcast can be found at http://info.scriptrock.com/devops_webinar_022714
Scala: The Ladder to Functional Programming - Qindel Group
Qindel Group was represented at the Open Expo 2017 event by Ignacio Navarro, Senior Developer at the company.
Navarro is a Scala programmer and a regular contributor to functional programming projects and talks. He took part in the Haskell MeetUps, where he spoke about Scala, a multi-paradigm (functional and object-oriented) language that runs on the JVM (*).
5 Keys to a Successful Journey to DevOps - Qindel Group
Digital transformation reflects companies' need to make the most of technology to streamline their internal processes. DevOps is the key that helps companies improve efficiency, gain agility, become more flexible, meet delivery deadlines, raise quality and, above all, save time and money. The talk covers the key points for making a successful move to DevOps.
Why Everyone Needs DevOps Now: 15 Year Study Of High Performing Technology Orgs - Gene Kim
This presentation describes my interpretation of the Why and How of DevOps, and the key findings from my 15 year study of high-performing IT organizations, and how they simultaneously deliver stellar service levels and rapid implementation of new features into the production environment.
Organizations employing DevOps practices, such as Google, Amazon, Facebook, Etsy and Twitter, are routinely deploying code into production hundreds, or even thousands, of times per day, while providing world-class availability, reliability and security. In contrast, most organizations struggle to release more than once every nine months.
I will present how these high-performing organizations achieve this fast flow of work through Product Management and Development, through QA and Infosec, and into IT Operations, so that other organizations can replicate the extraordinary culture and outcomes that enable them to win in the marketplace.
Innotech Austin 2017: The Path of DevOps Enlightenment for InfoSec - James Wickett
Security as we have known it has completely changed. Through challenges from the outside and from within there is a wholesale conversion happening across the industry where DevOps and Security are joining forces. This talk is a hybrid of inspiration and pragmatism for dealing with the new landscape.
DevOps and Continuous Delivery Reference Architectures (including Nexus and o... - Sonatype
There are numerous examples of DevOps and Continuous Delivery reference architectures available, and each of them vary in levels of detail, tools highlighted, and processes followed. Yet, there is a constant theme among the tool sets: Jenkins, Maven, Sonatype Nexus, Subversion, Git, Docker, Puppet/Chef, Rundeck, ServiceNow, and Sonar seem to show up time and again.
Accenture DevOps: Delivering applications at the pace of business - Accenture Technology
Are you ready to shift to continuous delivery? DevOps, a leading software engineering innovation, makes this shift possible by bringing business, development and operations teams together, streamlining IT and applying more automated processes.
The Rise of DataOps: Making Big Data Bite Size with DataOps - Delphix
Kellyn Pot'Vin Gorman presented this talk on May 23, 2018 at Data Summit 2018. Database Trends & Applications covered her talk in the following article https://t.co/J6dk30iPkc
Webinar: End-to-End CI/CD with GitLab and DC/OS - Mesosphere Inc.
Seven years ago, Apache Mesos was born as a platform to bring the distributed computing capabilities that powered the largest digital companies to the masses. Today, Mesosphere DC/OS technologies power more containers in production than any other software stack in the world, and has emerged as the premier platform for building and elastically scaling data-rich, modern applications and the associated CI/CD infrastructure across any infrastructure, public or private.
GitLab is an end-to-end software development and delivery platform with built-in CI/CD, monitoring, and performance metrics. With a unified experience for every step of the development lifecycle and seamless integration with container schedulers, GitLab provides the most efficient approach to reduce cycle time, increase velocity, and improve software quality.
In this webinar, you will learn how to combine DC/OS and GitLab to easily build a CI/CD infrastructure and build a complete CI/CD pipeline in minutes.
Slides cover:
1. An introduction to Apache Mesos and Mesosphere DC/OS and overview of DC/OS features and capabilities for developing, deploying, and operating containerized applications, microservices and CI/CD
2. An introduction to GitLab
3. How to use DC/OS and GitLab to build a CI/CD solution and go from idea to production
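As a rough illustration of the kind of pipeline described above, a minimal GitLab CI configuration might look like the sketch below. This is not taken from the webinar: the stage names, registry URL, and the `deploy-to-dcos.sh` helper script are all hypothetical placeholders standing in for a real build and a real call to the Marathon REST API on DC/OS.

```yaml
# .gitlab-ci.yml - illustrative sketch only
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  script:
    # Build and push a container image tagged with the commit SHA
    - docker build -t registry.example.com/myapp:$CI_COMMIT_SHA .
    - docker push registry.example.com/myapp:$CI_COMMIT_SHA

unit-tests:
  stage: test
  script:
    - make test

deploy-dcos:
  stage: deploy
  script:
    # Hypothetical helper that posts an updated app definition
    # to Marathon, the DC/OS container scheduler
    - ./deploy-to-dcos.sh myapp $CI_COMMIT_SHA
  only:
    - main
```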
Managing ScaleIO as Software on Mesos - David vonThenen - Dell EMC World 2017 - {code} by Dell EMC
Software can be complex, but it is a key part of modern data centers. {code}'s ScaleIO Framework for Apache Mesos is a storage framework that automates the complete lifecycle of the ScaleIO storage platform on top of commodity hardware. Moving storage to a framework reduces the complexity involved and transforms the operational approach. Watch how the Mesos framework simplifies all aspects of ScaleIO to provide storage for containerized applications.
They didn’t think migrating off their legacy version control system would be difficult. They thought it would be impossible.
For Cadence Design Systems, the multinational electronic design automation (EDA) software and engineering services company, moving off ClearCase was an important but daunting goal.
They knew a modern, flexible system would foster innovation and help them keep up with rapidly evolving customer demands. But, they had a highly customized environment and wanted to preserve the data they’d accumulated over the years.
It wasn’t easy. But, with Perforce, it was possible.
How? Find out.
Cindi Hunter, Director of Configuration Management and Tom Tyler, Senior Consultant at Perforce Software, share their highly successful migration process, which includes:
• Defining the scope of your migration given your unique environment.
• Determining a migration strategy to preserve sophisticated branching strategies, custom tools, and important data.
• Ensuring you get the migration support you need from your new vendor.
DevOps has been an emerging trend in the software development world for the past several years. While the term is relatively new, it is really a convergence of a number of practices that have been evolving for decades. Unfortunately, database development has been left out of much of this movement, but that's starting to change. As database professionals, we all need to understand what this important change is about, how we fit in, and how to best work database development practices into the established DevOps practices.
One of the cornerstones of the DevOps methodology is source control. When most people think of source control, they picture a tool - either a traditional, centralized system like TFS, or a newer, distributed system like Git. Source control is more than a tool, though; human processes and practices also play a critical role in an effective source control (and DevOps) implementation. In this session, we'll talk in depth about both types of source control systems and how you can effectively use source control for your databases.
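To make the idea of database source control concrete, here is a minimal sketch of one common pattern: numbered migration scripts kept in version control and applied in order, with a tracking table recording what has already run. The table name, script layout, and SQLite backend are illustrative assumptions, not anything prescribed by the session.

```python
import sqlite3

# Ordered migration scripts; in practice each would live in its own
# version-controlled file (001_create_users.sql, 002_add_email.sql, ...).
MIGRATIONS = [
    ("001_create_users", "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"),
    ("002_add_email", "ALTER TABLE users ADD COLUMN email TEXT"),
]

def migrate(conn):
    # Track which migrations have already been applied.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}
    for version, sql in MIGRATIONS:
        if version not in applied:
            conn.execute(sql)
            conn.execute("INSERT INTO schema_migrations VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)   # applies both migrations
migrate(conn)   # second run is a no-op: both versions are already recorded
columns = [c[1] for c in conn.execute("PRAGMA table_info(users)")]
print(columns)  # ['id', 'name', 'email']
```

Because each script runs exactly once, the same repository of migrations can drive every environment from a developer workstation to production, which is what brings the database into the DevOps workflow.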
The Fastest Way to Redis on Pivotal Cloud Foundry - VMware Tanzu
What do developers choose when they need a fast performing datastore with a flexible data model? Hands-down, they choose Redis.
But, waiting for a Redis instance to be set up is not a favorite activity for many developers. This is why on-demand services for Redis have become popular. Developers can start building their applications with Redis right away. There is no fiddling around with installing, configuring, and operating the service.
Redis for Pivotal Cloud Foundry offers dedicated and pre-provisioned service plans for Cloud Foundry developers that work in any cloud. These plans are tailored for typical patterns such as application caching and providing an in-memory datastore. These cover the most common requirements for developers creating net new applications or who are replatforming existing Redis applications.
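The application-caching pattern mentioned above is usually implemented as cache-aside: check the cache first, fall back to the database on a miss, then populate the cache. A minimal sketch follows; the `FakeRedis` stand-in is an assumption made so the example is self-contained, though the real redis-py `Redis` client exposes the same `get`/`setex` methods (with actual TTL expiry).

```python
import json

class FakeRedis:
    """In-memory stand-in for a Redis client; illustrative only.
    TTL expiry is omitted here to keep the sketch short."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def setex(self, key, ttl_seconds, value):
        self._data[key] = value

cache = FakeRedis()
db_reads = 0

def load_user_from_db(user_id):
    # Placeholder for a real (slow) database query.
    global db_reads
    db_reads += 1
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id, ttl=300):
    """Cache-aside: try the cache, fall back to the DB, then populate."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    user = load_user_from_db(user_id)
    cache.setex(key, ttl, json.dumps(user))
    return user

get_user(42)           # miss: hits the "database"
user = get_user(42)    # hit: served from the cache
print(user, db_reads)  # the second call did not touch the DB
```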
We'd like to invite you to a webinar discussing different ways to use Redis in cloud-native applications. We'll cover:
- Use cases and requirements for developers
- Alternative ways to access and manage Redis in the cloud
- Features and roadmap of Redis for Pivotal Cloud Foundry
- Quick demo
Presenters: Greg Chase, Director of Products, Pivotal and Craig Olrich, Platform Architect, Pivotal
OSMC 2017 | Building a Monitoring solution for modern applications by Martin ... - NETWAYS
Modern applications require modern monitoring solutions that can react quickly to changes in the monitored applications (think of autoscaling and updates). After many years, our old monitoring system, based on Nagios and Cacti, was not holding up anymore. This talk tells the story of our journey from that old system through defining our requirements and multiple tool evaluations (Zabbix, Prometheus, Icinga2) to our current implementation based on Icinga2. I will also show some of our implementation details and how we solved problems in our deployment.
There's More to Docker than the Container: The Docker Platform - Kendrick Col... - {code} by Dell EMC
{code} by Dell EMC has a rich history of building storage plugins with Docker. The Docker engine is only one piece of the puzzle when it comes to solving a container-based infrastructure. The projects from Docker aim to democratize development tools, build better applications, and simplify operations. Learn about all of the different Docker projects along with {code} by Dell EMC integrations to run containers at every stage from development to production.
Seminar held on Monday, November 5, 2018
What is the database virtualization technology that builds 1,000 databases in 10 minutes?
~ Database as code in DevOps ~
Presentation slides.
"What is DevOps"
Adam Bowen, Office of the CTO, Delphix
What is DevOps, and what should database environments look like under DevOps? This session explains DevOps and database best practices, drawing on DevOps case studies from Facebook, eBay, and Walmart.
These are my keynote slides from SQL Saturday Oregon 2023 on the intersection of AI, machine learning, and economic challenges as a technical specialist.
This is the second session of the learning pathway at PASS Summit 2019, which still works as a standalone session teaching you how to write proper Linux Bash scripts.
Let's dive deeper into the world of ODC! Ricardo Alves (OutSystems) will join us to tell all about the new Data Fabric. After that, Sezen de Bruijn (OutSystems) will get into the details on how to best design a sturdy architecture within ODC.
UiPath Test Automation using UiPath Test Suite series, part 3 - DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 3. In this session, we will cover desktop automation along with UI automation.
Topics covered:
UI automation Introduction,
UI automation Sample
Desktop automation flow
Pradeep Chinnala, Senior Consultant Automation Developer @WonderBotz and UiPath MVP
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Neuro-symbolic is not enough, we need neuro-*semantic* - Frank van Harmelen
Neuro-symbolic (NeSy) AI is on the rise. However, simply doing machine learning on just any symbolic structure is not sufficient to really harvest the gains of NeSy. These gains will only be realized when the symbolic structures have an actual semantics. I give an operational definition of semantics as "predictable inference".
All of this is illustrated with link prediction over knowledge graphs, but the argument is general.
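As a toy version of the link-prediction setting referred to above, a TransE-style model scores a candidate triple (head, relation, tail) by how close head + relation lands to tail in embedding space. The tiny hand-set 2-D embeddings below are purely illustrative assumptions; real systems learn high-dimensional embeddings from data.

```python
# Toy 2-D embeddings for entities and relations (hand-picked for illustration).
entity = {
    "Paris":  [0.0, 0.0],
    "France": [1.0, 0.0],
    "Berlin": [0.0, 1.0],
}
relation = {"capital_of": [1.0, 0.0]}

def score(head, rel, tail):
    """TransE-style score: negative Euclidean distance between
    head + rel and tail. Higher (closer to 0) means more plausible."""
    h, r, t = entity[head], relation[rel], entity[tail]
    return -sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

# Link prediction: rank candidate tails for (Paris, capital_of, ?)
candidates = ["France", "Berlin"]
best = max(candidates, key=lambda t: score("Paris", "capital_of", t))
print(best)  # France: Paris + capital_of lands exactly on France
```

The point of the talk is that such scores only support predictable inference when the embeddings respect an actual semantics, not merely any symbolic structure.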
GraphRAG is All You need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
"Impact of front-end architecture on development cost", Viktor Turskyi - Fwdays
I have heard many times that architecture is not important for the front-end. Also, many times I have seen how developers implement features on the front-end just following the standard rules for a framework and think that this is enough to successfully launch the project, and then the project fails. How to prevent this and what approach to choose? I have launched dozens of complex projects and during the talk we will analyze which approaches have worked for me and which have not.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo... - James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. A constant focus on speed to release software to market, combined with traditionally slow, manual security checks, has created gaps in continuous security, an important piece of the software supply chain. Today, organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their application supply chains and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with a passion for making things work and a knack for helping others understand how things work. He brings around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms, and is known for his dynamic presentations on CI/CD and application security integrated into the software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
State of ICS and IoT Cyber Threat Landscape Report 2024 preview - Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on countries – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Key Trends Shaping the Future of Infrastructure.pdf - Cheryl Hung
Keynote at DIGIT West Expo, Glasgow on 29 May 2024.
Cheryl Hung, ochery.com
Sr Director, Infrastructure Ecosystem, Arm.
The key trends across hardware, cloud, and open source: exploring how these areas are likely to mature and develop over the short and long term, and how organisations can position themselves to adapt and thrive.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... - UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
- See how to accelerate model training and optimize model performance with active learning
- Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
- Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
Search and Society: Reimagining Information Access for Radical Futures - Bhaskar Mitra
The field of Information retrieval (IR) is currently undergoing a transformative shift, at least partly due to the emerging applications of generative AI to information access. In this talk, we will deliberate on the sociotechnical implications of generative AI for information access. We will argue that there is both a critical necessity and an exciting opportunity for the IR community to re-center our research agendas on societal needs while dismantling the artificial separation between the work on fairness, accountability, transparency, and ethics in IR and the rest of IR research. Instead of adopting a reactionary strategy of trying to mitigate potential social harms from emerging technologies, the community should aim to proactively set the research agenda for the kinds of systems we should build inspired by diverse explicitly stated sociotechnical imaginaries. The sociotechnical imaginaries that underpin the design and development of information access technologies needs to be explicitly articulated, and we need to develop theories of change in context of these diverse perspectives. Our guiding future imaginaries must be informed by other academic fields, such as democratic theory and critical theory, and should be co-developed with social science scholars, legal scholars, civil rights and social justice activists, and artists, among others.
Accelerate your Kubernetes clusters with Varnish CachingThijs Feryn
A presentation about the usage and availability of Varnish on Kubernetes. This talk explores the capabilities of Varnish caching and shows how to use the Varnish Helm chart to deploy it to Kubernetes.
This presentation was delivered at K8SUG Singapore. See https://feryn.eu/presentations/accelerate-your-kubernetes-clusters-with-varnish-caching-k8sug-singapore-28-2024 for more details.
UiPath Test Automation using UiPath Test Suite series, part 4DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
DevOps derives from both development and operations, two groups in which DBAs often have a foot in each camp.
There is a high focus on collaboration, centered on methodologies, process, and practice.
The goal is to release more frequently, more successfully, and with fewer bugs.
Talk about the future of the DBA with DevOps.
I just presented on this very topic last week at both Oracle and SQL Server events.
What does this have to do with a DBA? Is it our future? Is it something we all have to embrace or convert to?
My answer is no. Just as we still see COBOL and Fortran apps in need of support, traditional relational database support for on-prem isn’t going away anytime soon.
We all have enough work to keep us busy as traditional DBAs for a good decade or more. Those of you in municipal and federal jobs are safe for a few more decades…
To be empowered by DevOps requires automation and, with that, tools. Tools can include scripting through CLIs as well as GUI interaction.
At the Agile 2008 conference, Andrew Clay Shafer and Patrick Debois discussed "Agile Infrastructure."
The term DevOps was popularized through a series of "devopsdays" starting in 2009 in Belgium
With the introduction of the cloud, the idea of the department that buys a server and gets a developer to build something they need outside of IT is now on steroids.
They now just open a cloud account, with the idea that it’s our problem when it becomes mission-critical.
Arrow Electronics claimed at a dinner that 30% of their business will be audits of insecure, non-policy-compliant cloud initiatives that reached production through this exact practice.
So we review code, but how often do we check the tools that are being used?
On the Oracle side, we saw this all the time: developers used Toad or other tools to develop, but the Oracle DBA would require SQL*Plus for the release, and the release would fail due to proprietary comments in the scripts or assumed parameter setup at the command line.
We are the masters of automation, so we should be involved in tool selection to ensure they cover a broad range of tiers in the IT environment.
How many of you use these tools? How many of you use these tools when executing to production?
Keep in mind that there are many terms used for the concepts on this slide.
I’ve chosen the most common ones, but depending on the choice in Agile and DevOps methodology, the words may change, but the goal is the same.
Build automation is the process of automating the creation of a software build and the associated processes including: compiling computer source code into binary code, packaging binary code, and running automated tests.
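As a minimal sketch of that definition (the step names and the echo commands standing in for real compile/test/package tools are hypothetical), a build-automation driver chains the steps in order and stops at the first failure:

```python
import subprocess

# Hypothetical build pipeline: each step is a shell command that must
# succeed before the next one runs (compile -> test -> package).
STEPS = [
    ("compile", "echo compiling source code into binaries"),
    ("test",    "echo running automated tests"),
    ("package", "echo packaging binary code"),
]

def run_build(steps):
    """Run each step in order; return False as soon as one fails."""
    for name, cmd in steps:
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            print(f"build failed at step: {name}")
            return False
    return True

if __name__ == "__main__":
    print("build succeeded" if run_build(STEPS) else "build failed")
```

The same fail-fast loop is what tools like Jenkins, Make, and Ant formalize, with dependency tracking and reporting layered on top.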
Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, delivering incremental updates to applications in production. A straightforward and repeatable deployment process is important for continuous delivery.
At the same time, there are a few tools in CD, like Jenkins, that have been very popular with the DBA masses.
Ant is another Java-based build tool that is part of the Apache open-source project.
It is similar to Make but written in XML.
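To show what "Make-style rules in XML" look like, here is a minimal, hypothetical Ant build file (the project name, directories, and targets are all illustrative):

```xml
<!-- Hypothetical Ant build file: targets play the role of Make rules,
     with dependencies declared via the "depends" attribute. -->
<project name="demo" default="package">
  <target name="compile">
    <javac srcdir="src" destdir="build"/>
  </target>
  <target name="package" depends="compile">
    <jar destfile="dist/demo.jar" basedir="build"/>
  </target>
</project>
```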
This Groovy script executes another script, making it valuable in environments that already have a number of mature scripts in place that should be reused in automation.
This is our plugin. That’s how important we find these tools: we’ve built them into Delphix.
Configuration management (CM) is a systems engineering process for establishing and maintaining consistency of a product's performance, functional, and physical attributes with its requirements, design, and operational information throughout its life.
A DBA’s desire for low risk and stability assists here as we desire routine that results in expected outcomes.
This is a simple Ansible call to copy a script from one directory to another, change the permissions, and then execute it. This is all being done on a Linux machine.
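A hedged sketch of what that Ansible call might look like as a playbook (the host group and file paths here are hypothetical, not from the original slide):

```yaml
# Copy a script into place, make it executable, then run it on a Linux host.
- hosts: linux_db_servers
  tasks:
    - name: Copy the script into place
      copy:
        src: /opt/scripts/refresh.sh       # hypothetical source path
        dest: /usr/local/bin/refresh.sh
        mode: "0755"                       # set execute permissions

    - name: Execute the script
      command: /usr/local/bin/refresh.sh
```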
Solutions for DevOps, security/compliance, configuration management, cloud/container management, and "infrastructure as code."
They have new products outside of their Enterprise offering, like Discovery, Bolt, Pipelines, and the Container Registry.
This is another area that introduces risk, but DBAs are less averse to this methodology, as it focuses on one feature, even if that feature spans multiple tiers.
If something goes wrong, it can mean higher detail of coordination to back a change out or to correct a problem.
Release Orchestration is the use of tools like XLRelease which manage software releases from the development stage to the actual software release itself.
It is better known for diagnostics, but it allows for quick automation of admin tasks: perform analysis of data and create automated scripts for reuse.
Most useful? SQL Job Editor, SQL User Clone (full clone), and SQL Server Configuration Compare.
Redgate DLM Automation not only automates your release; you can also create release scripts from this application.
We all know how important it is to track changes, but a repository can be used in a number of other valuable ways.
I’m going to add to this definition with data version control.
This is where we move from DevOps into DataOps, and it’s both the evolution of DevOps and the point where the DBA becomes a focal point of DevOps.
OK, yours may not currently be the same. We need to talk about how you can become aligned with everyone else’s goals.
It doesn’t mean you have to give up your first database.
You can be part of the goals of the company and still protect the data, all of the data and the database.
The concept was first coined just a few years ago by a senior VP of platform engineering, Dave McCrory. It was an open discussion aimed at understanding how data impacted the way technology changed when connected with network, software, and compute.
He discusses the basic understanding that "the speed with which information can get from memory (where data is stored) to computing (where data is acted upon) is the limiting factor in computing speed," called the von Neumann bottleneck.
These are essential concepts that I believe all DBAs and developers should understand, as data gravity impacts all of us. It’s the reason for many enhancements to database, network, and compute power. It’s the reason optimization specialists are in such demand. Other roles, such as backup, monitoring, and error handling, can be automated, but no matter how much logic we drive into programs, nothing is as good as true skill in optimization when it comes to eliminating data gravity issues. Less data, less weight: it’s as simple as that.
In computing, virtualization means to create a virtual version of a device or resource, such as a server, storage device, network, or even a database. The framework divides the resource into one or more execution environments. For data, this can result in a golden copy, or source, that serves as a centralized location and removes duplicated data. For reads and writes, each copy holds only its unique data, while duplicated blocks are kept as a single shared copy.
RMAN duplicates, cold-backup restores, Data Pump, and other archaic data transfer processes are time consuming.
By virtualizing, we remove the "weight" of the data. We know that 80% of the data won’t change between copies, so why do we need individual copies of it? Our source is then deduped and compressed to conserve more space.
How do we “rewind” data and code changes now?
Why should the DBA rewind changes made in dev and test?
Why should you be the one to do this in test?
Virtualization removes this.
The Virtual databases are read and write, so even maintenance tasks, like DBCC’s can be offloaded to one.
The ability to version control not just the metadata, but the user data!
I work with Delphix, so you would think I know our virtualization the best, but the truth is, I also know many other virtualization tools at a very detailed level.
The amount of information I know on Oracle virtualization tools is pretty insane, in fact.
Point out the engine and its size after we’ve compressed and de-duplicated.
Note that each of the VDBs will take approximately 5-10 GB vs. 1 TB to offer a FULL read/write copy of the production system.
It will do so in just a matter of minutes.
That this can also be done for the application tier!
Each virtual database (VDB) will no longer require its own space (only the background and transaction log data unique to that user database, etc.). This is a considerable savings, but…
If we take this a step further by writing only the blocks changed from the source, then we’ll fit 10-20 copies of a database in about the same space that one database requires.
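The arithmetic behind that claim can be sketched quickly, using the illustrative figures from above (a ~1 TB source and 5-10 GB of unique changed blocks per virtual copy):

```python
# Illustrative storage math: one shared, deduped source plus the
# changed blocks unique to each virtual copy.
SOURCE_GB = 1000          # ~1 TB production database
UNIQUE_PER_COPY_GB = 10   # blocks unique to each virtual copy (5-10 GB)

def space_for_copies(n):
    """Total space: one shared source plus per-copy unique blocks."""
    return SOURCE_GB + n * UNIQUE_PER_COPY_GB

full_copies = 20 * SOURCE_GB           # 20 physical copies: 20,000 GB
virtual_copies = space_for_copies(20)  # 20 virtual copies

print(virtual_copies)  # → 1200, close to the space of a single full copy
```

So twenty writable virtual copies land at roughly 1.2 TB, versus 20 TB for twenty physical clones.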
Package software into standardized units for development, shipment and deployment. A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.
The next step is moving to data pods. Containers are a buzz area of technology right now. If we’re talking Docker or Kubernetes, we know this is the way of the future. Instead of having locked, unique environments, the ability to package them as one, in a lighter and more flexible unit makes incredible sense.
As a DBA, I rarely, if ever, released code just to the database. It commonly went to the database, the application, and linked products.
The ability to package and manage as a Data Pod is an impressive enhancement to the Developer, tester and DBA.
The next step is the ability to migrate to the cloud or from one cloud to another. Right now, 60% of customers are using 2-5 clouds on average. The ability to move a Data Pod from one cloud to another is incredibly powerful.
Companies are spending increasing time not just migrating to the cloud, but migrating between clouds, and if it were as simple as migrating a data pod with a few changes to the new storage location (i.e., cloud), that could save companies millions of dollars.
A data pod is a set of virtual data environments and controls, built and then delivered to users for self-service data consumption. It allows for self-management without the need for DBAs to manage standard processing, automates rebuilds, and even removes the need for backout scripts when development, testing, and promotion go wrong.
We refer to a container as a template in our product.
Note that a data pod can be moved here or to the cloud…
The DBA has to commandeer a database for patch testing.
This has to be performed for EACH environment: hundreds or thousands of databases!
Most are not synchronized with production, leading to different outcomes when released to production.
Bugs occur in one environment but not another!
Over 80% of time is spent waiting for RDBMSs (relational databases) to be refreshed. Developers and testers are waiting for data to do their primary functions.
This allows for faster and less costly migrations to the cloud, too.
So what is "data versioning"? It is similar to version control at the code level, but it tracks changes to the data itself.
There’s a lot of interest in SQL Server temporal tables (although very few use cases in the real world).
Some of these products are focused on the DBA controlling the changes, as the DBA is most often the one having to address how to "rewind" or correct changes when they occur.
Jet Stream focuses on developers and testers, and although it can work at the database level only, we more often build it with data pods (i.e., containers) that consist of the database, the application, and any other tier that interacts with the database.
There’s significant benefit to doing it this way and more third party providers may begin to do this as well.
This is a cornerstone for developers and testers, so as DBAs, we know the pain when a developer comes to us to flash back a database or, before that, to recover or logically recover (import or Data Pump) independent objects. What if the developer/tester could do this for themselves?
This may appear to be a traffic disaster of changes, but for developers with Agile experience, a “sprint” looks just like this. You have different sprints that are quick runs and merges where developers are working separately on code that must merge successfully at the correct intersection and be deployed.
Versioning with source control is displayed at the top, using Virtual images. You can see each iteration of the sprints.
The middle section shows the branches that occur during the development process. A virtual copy can be spun from another virtual copy, which means it’s easier for developers to build on the work another developer has produced.
Stopping points and releases via a clone take just minutes vs. hours or days.
This is the interface for Developers and testers- they can bookmark before important tasks or rewind to any point in the process. They can bookmark and branch for full development/testing needs.
An Agile Framework
Scrum Framework consists of:
A product owner creates a wish list, and the "sprint" begins.
Sprint planning happens and a backlog is created.
The team sets up a schedule and begins to have daily scrum standups (commonly 5 minutes).
Scrum master keeps team focused and collaborating, keeps track of status
Product is released
Sprint ends with feedback and lessons learned
Next sprint begins
Considered a very "visual" development process, based on grocery store shelf stocking.
Uses standardized cues and refined processes
Goal to reduce waste and maximize value
Most often uses sticky notes and whiteboard to create a picture of the work to complete, what’s in process and what’s done.
Visualize Work
Limit Work in Process
Focus on Flow
Continuous Improvement
Code may come first in XP, but testing must already exist to know what the successful outcome will be.
Code is written by pairs of programmers, allowing for better collaboration.
Believes in the power of doing, vs. extensive planning. Failure is expected.
Always build foundations that can be built on later.
Rarely specialize- everyone develops, tests, designs, etc.
Shades of Crystal- orange, yellow, etc.
Similar to rapid deployment, but it is often focused on one tier. The developers work toward a goal of client-focused projects, and the value must be seen.
It’s not about correcting or fixing, but about driving a feature that is demanded by the user and creates revenue.
FDD also defines a collection of supporting roles, including:
Domain Manager
Release Manager
Language Guru
Build Engineer
Toolsmith
System Administrator
Tester
Deployer
Technical Writer
Methods provide a format or guide to work from. Hybrid approaches often work best.
Collaboration methods ensure that communication continues when team members return to their desks
Deployment tools help with documenting and lessons learned
Build tools help with automation and orchestration
Or does it shift the problem toward authentication and authorization?
Idera SQL Secure identifies who has access to on-prem and cloud environments.
Set strong security policies.
It presents security violations, analyzes user permissions, and lets you create security templates to establish similar database roles and privileges in the future.
This includes templates pre-built for PCI, HIPAA, and FERPA, plus guidelines for STIG and CIS.
SQL Compliance Manager, meanwhile, works to audit sensitive data and stop potential threats by tracking access.
It also has templates similar to the ones in SQL Secure. Compliance Manager offers a lot more in reporting and dashboards, but has fewer features.
For a typical Fortune 1000 company, just a 10% increase in data accessibility will result in more than $65 million additional net income.
Leveraging data could increase revenue by as much as 60%.
There are larger data sources every day. Databases are at the center of this friction and the natural life of a database is growth.
There are two different definitions of data gravity
The weight of data causes application, access and services to be pulled to the data.
The very weight of data is heavy, creating a gravitational pull that is difficult to escape from when working with it.
By 2020, we’ll grow from today’s 4.4 zettabytes to an approximate, but staggering, 44 zettabytes, or 44 trillion gigabytes.
And by 2020, a third of that data will pass through the cloud.
Data gravity is the ability of bodies of data to attract applications, services and other data. ... IT expert Dave McCrory coined the term data gravity as an analogy to the way that, in accordance with the physical laws of gravity, objects with more mass attract those with less.
And yet we state that we won’t need DBAs? That data isn’t the center of challenge?
Per Forbes, by the year 2020, about 1.7 megabytes of new information will be created every second for every human being on the planet.
More data has been created in the past two years than in the entire previous history of the human race.
That data has to be stored somewhere and there’s a large chance it’s going to be in a relational data store.
We can’t eliminate the majority of data.
We can optimize the code and the applications, but data is still data, i.e., large.
It will continue to grow.
The business is able to provision new environments or refresh existing ones in a matter of minutes.
Developers and testers who’ve worked with bookmarks and branching of their code changes can now do the same with database changes, rewinding and refreshing as they need without impacting the DBA’s day. This allows the DBA to do more with their time.
Having tools that include the database in the Agile development cycle makes a pivotal change in how the DBA is capable of being part of DevOps.