The document discusses challenges with moving databases to the cloud and proposes a solution using data virtualization. It summarizes that virtualizing databases with tools like Delphix and DBVisit allows for instant provisioning of development environments without physical copies. Databases are packaged into "data pods" that can be easily replicated and kept in sync. This streamlines cloud migrations by removing bottlenecks around copying and moving large amounts of database data.
The Power of DataOps for Cloud and Digital Transformation - Delphix
Companies have been trying to speed up innovation delivery for many years, but often at the expense of quality and security. Despite billions invested to accelerate innovation, projects are too often slowed by data friction: the result of growing volumes of siloed data and mounting requests for that data.
Overcoming these sources of friction requires constant iteration across several key dimensions:
• Reducing the total cost of data by making it fast and efficient to deliver data, regardless of source or consumer. Automation and tooling are critical.
• Integrating security and governance into a seamless data delivery process. This requires integrated masking, but also a governance platform and process to ensure the right rules and access controls are in place.
• Breaking down silos between people and organizations. This starts with the organizational change to bring people together into one team, but requires technology change to provide self-service data access and control.
Platform for Cloud Migration — Accelerating and De-Risking your Cloud Journey - Delphix
Businesses can use the Delphix Dynamic Data Platform to streamline all phases of cloud migration, from identifying and securing sensitive information, to replicating data to the cloud, to testing migrated applications ahead of cutover and go-live. With Delphix, organizations can transition their application landscape to the cloud with speed, security, and as little risk as possible.
Managing ScaleIO as Software on Mesos - David vonThenen - Dell EMC World 2017 - {code} by Dell EMC
Software can be complex, but it is a key part of modern data centers. {code}'s ScaleIO Framework for Apache Mesos is a storage framework that automates the complete lifecycle of the ScaleIO storage platform on top of commodity hardware. Moving storage to a framework reduces the complexity involved and transforms the operational approach. Watch how the Mesos framework simplifies all aspects of ScaleIO to provide storage for containerized applications.
The Rise of DataOps: Making Big Data Bite Size with DataOps - Delphix
Kellyn Pot'Vin-Gorman presented this talk on May 23, 2018, at Data Summit 2018. Database Trends & Applications covered her talk in the following article: https://t.co/J6dk30iPkc
Google does containers: Hello Kubernetes - Steve Wong and Vladimir Vivien - D... - {code} by Dell EMC
Kubernetes is a production-grade container orchestration system and one of the most popular open source projects released by Google. {code} by Dell EMC is focused on bringing a common storage interface to container platforms. Kubernetes boasts a unique architecture and a fresh perspective on running containerized applications, and is used by companies like SoundCloud and The New York Times. Learn all about Kubernetes and see how {code} by Dell EMC is integrating REX-Ray to provide stateful application support to pods of containers.
An introduction to {code} by Dell EMC, our mission on containers, and our core project REX-Ray. This will give the audience an understanding of why REX-Ray is important and where you can go to learn more.
There's More to Docker than the Container: The Docker Platform - Kendrick Col... - {code} by Dell EMC
{code} by Dell EMC has a rich history of building storage plugins with Docker. The Docker engine is only one piece of the puzzle when it comes to building a container-based infrastructure. The projects from Docker aim to democratize development tools, build better applications, and simplify operations. Learn about all of the different Docker projects, along with the {code} by Dell EMC integrations, to run containers at every stage from development to production.
Deep Dive on Container Storage Architectures - Clinton Kitson and Chris Duche... - {code} by Dell EMC
Every container platform has a unique storage integration approach because communicating with different storage platforms can be complex. No uniform tool exists between Docker, Mesos, and Kubernetes, and each applies a different architecture for volume mounting. Come to this session to learn about the different storage architectures of each container platform and how {code} by Dell EMC is moving toward a standard interface.
Large Scale Cassandra Made Better in Containers - Chris Duchesne and Aaron Sp... - {code} by Dell EMC
How do you take a NoSQL, highly scalable, high-performance distributed database providing high availability with no single point of failure and turn it into an on-demand service? You use Kubernetes and containers! Come learn how Cassandra, REX-Ray, and ScaleIO create a new architecture for an always-available distributed database.
Leading an Open Source community at a large Enterprise - Jonas Rosland - Open... - {code} by Dell EMC
Creating an open source initiative at a large enterprise such as Dell EMC comes with both challenges and rewards. Making sure your community is engaged and your projects thrive takes time and effort. In this session, Jonas Rosland, Open Source Community Manager at {code} by Dell EMC, shares experiences and failures, and gives a glimpse into how large enterprises can embrace and lead open source communities successfully.
Data Analytics Using Container Persistence Through SMACK - Manny Rodriguez-Pe... - {code} by Dell EMC
New digital business models facilitated by containers require collecting and analyzing device data. Apache Mesos removes the need to build separate stacks and combines optimized application containers and data analytics into a single platform. In this session, we will explore new approaches to data analytics using REX-Ray as a container persistence tool and the SMACK stack (Spark, Mesos, Akka, Cassandra, Kafka), a set of tools for building data and messaging layers for digital engagement apps.
Real World Modern Development Use Cases with RackHD and Adobe - Timothy Gelter
Adobe and the Dell EMC RackHD team provide an overview of how Adobe is modernizing its datacenters using public and private clouds, enabled by infrastructure-as-code technologies, to abstract its infrastructure for application deployments and improve operational efficiency.
Are you drowning in unstructured data? Can a private cloud, object based storage system be the solution? Join our live webinar "5 Reasons Object Storage Can Solve Unstructured Data Problems" and learn how Object Storage can:
1. Reduce Data Storage Costs
2. Increase Data Retention Capabilities
3. Improve Data Protection and Security
4. Provide "Cloud" Flexibility
5. Be easy to implement and operate
Mesosphere and the Enterprise: Run Your Applications on Apache Mesos - Steve ... - {code} by Dell EMC
What do Apple, ADP, Netflix, eBay, Time Warner Cable, and UC Berkeley have in common? All of them use Apache Mesos! Did you know {code} by Dell EMC contributed the first storage module to Apache Mesos? This session will examine the Mesos architecture and will focus on how to use frameworks such as Marathon to deploy containerized applications. See how {code} by Dell EMC is integrating REX-Ray to provide stateful application support to the most battle-tested container platform.
Implement DevOps Like a Unicorn—Even If You’re Not One - TechWell
Etsy, Netflix, and the other unicorns have done great things with DevOps. Although most people don't work at a unicorn, they still want to combine agility and stability. To close the gap between developers and operations, Mason Leung says his company runs operations workshops, blogs about infrastructure, and experiments with different tools, and it is solving the same problems as the unicorns, only on a smaller scale. Mason explains that you don't get to millions of requests without going through the first several hundred. Ideas you can take from the unicorns include how to use containers to enhance the development experience, how to avoid production meltdowns with continuous deployment, how to tame infrastructure gone wild, why “new shiny” is not always the correct solution, and why putting all your eggs in a cloud service provider is a good idea. There is no single, correct way to do DevOps. By observing the unicorns and applying the lessons to your situation, your DevOps journey can be less volatile and more fulfilling as you prepare for hypergrowth.
DELWP’s Data Lake: Investing in Asset Wealth for Public/Community Benefit – B... - Amazon Web Services
In the last 10 years, the Department of Environment, Land, Water and Planning (DELWP) has procured half a petabyte of aerial photography, satellite imagery, and point cloud data worth more than $45 million. In addition, a range of raster products critical to the delivery of services is produced continuously. DELWP’s raster data needs are expected to grow over time, and an enterprise system to manage, publish, and enable discovery and analysis of this growing catalogue is essential.
Find out how DELWP is implementing pioneering data science strategies to successfully integrate disparate systems and data silos, reduce risks and increase compliance, and support a rapid pace of innovation to improve DELWP’s services to the Victorian public.
Alena Moison, Project Manager with DELWP’s Digital First program, will share the department’s journey through digital transformation, key successes and learning points, as well as how they worked with Bulletproof to implement solutions on AWS.
Speakers:
Giles Bill, Bulletproof Networks
Sam Mason, Bulletproof Networks
Alena Moison, Department of Environment, Land, Water and Planning
“The next release is probably going to be late”... these are words that every AppDev leader has uttered, and often.
Development teams burdened with complex release requirements often run over schedule and over budget. One of the biggest offenders? Data. Your teams are cutting corners, sacrificing quality and delivering projects late because they don’t have a good solution for managing data.
You’re one of many AppDev leaders who face these challenges. You need a new approach to manage, secure, and provision your data in order to stay relevant. You need DataOps.
“TODAY, COMPANIES ACROSS ALL INDUSTRIES ARE BECOMING SOFTWARE COMPANIES.”
The familiar refrain is certainly true of the new-school, born-in-the-cloud set. But it can also apply to traditional enterprises that are reinventing themselves by coupling DevOps excellence with intelligent DataOps.
DataOps in Financial Services: enable higher-quality testing + lower levels ... - Ugo Pollio
In this session, you will learn how banks and financial services firms all over the world are using DataOps tools to:
- Comply with GDPR with fully masked test data
- Achieve faster environment refreshes
- Shift Left with production-like test data
- Reduce infrastructure requirements
- Enable continuous integration and continuous delivery
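As a minimal, hypothetical sketch of the masking idea behind the first bullet (this is not a Delphix or any vendor's feature, and the helper name, token prefix, and salt are all invented for illustration): deterministic pseudonymization replaces each sensitive value with a stable surrogate, so masked test data still joins correctly across tables while revealing nothing from production.

```python
import hashlib

def mask_value(value: str, salt: str = "demo-salt") -> str:
    """Replace a sensitive value with a stable surrogate token.

    Deterministic: the same input always maps to the same token, so
    foreign-key relationships survive masking across tables.
    """
    digest = hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
    return "MASK-" + digest[:10]

# Hypothetical customer record, masked field by field
row = {"name": "Alice Smith", "iban": "DE89370400440532013000"}
masked = {key: mask_value(val) for key, val in row.items()}
```

Because the mapping is deterministic per salt, the same customer masks to the same token in every table, which is what keeps production-like test data referentially intact.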
Seminar held Monday, November 5, 2018
What is the database virtualization technology that can build 1,000 databases in 10 minutes?
~ Database as Code in DevOps ~
Presentation slides.
"What is DevOps"
Adam Bowen, Office of the CTO, Delphix
What is DevOps, and what should database environments look like in a DevOps practice? Drawing on DevOps case studies from Facebook, eBay, and Walmart, this session explains best practices for DevOps and databases.
As companies have adopted faster development methodologies a new constraint has emerged in the journey to digital transformation: data. Data has long been the neglected discipline, the weakest link in the tool chain, with provisioning times still counted in days, weeks, or even months. In addition, most companies are still using decades-old processes to manage and deploy database changes, further anchoring development teams.
Seamless Migration of Public Sector Data and Workloads to the AWS Cloud - AWS...Amazon Web Services
Efficiently migrating data and workloads to the cloud is one of the most significant technology challenges currently facing organisations. Veritas has a unique perspective on solving this dilemma, and brings to the table a wide range of enterprise-grade solutions capable of effortlessly migrating large-scale data and workloads to the AWS Cloud. Learn from relevant global scenarios spanning the public sector and corporate landscapes.
Speaker: Dave Hamilton, Distinguished Engineer, Veritas.
Level: 200
The webinar series in collaboration with Veritas Technologies continues.
In this second session, we looked at the Veritas Software Defined Storage solutions.
IT is now one of the business areas most affected by exponential data growth. As a result, IT managers face rising costs and growing complexity when implementing storage solutions to contain expanding data volumes.
At the same time, they must choose solutions that can meet the ever-higher performance levels demanded by new business applications while keeping legacy applications functional.
Deploying high-performance NAS hardware or adopting an assortment of disparate storage solutions is no longer ideal in terms of cost and manageability. New technologies, developed precisely in response to the need for efficiency and cost containment, make it possible to build infrastructures that maximize the use of the storage already present in the data center and enable the adoption of object storage.
To that end, Veritas presents its line of Software Defined Storage solutions.
Webinar - Delivering Enhanced Message Processing at Scale With an Always-on D... - DataStax
Managing 3.8 million e-prescriptions daily for more than 1 million healthcare professionals is no small feat. And, with rapid growth in the number of digital transactions and expansion of its network, Surescripts needed to replace its legacy relational database system to address a new set of data management challenges while meeting their customers’ demanding SLAs. Join us for this on-demand webinar to hear from Keith Willard, Chief Architect at Surescripts, to learn how and why Surescripts leverages DataStax Enterprise to deliver enhanced message processing at scale.
View recording: https://youtu.be/1T6V1XAoaJQ
Explore all DataStax webinars: https://www.datastax.com/resources/webinars
Accelerate Design and Development of Data Projects Using AWS - Delphix
What if you could stand up your AWS EC2 development and test environments near instantly with fresh, secure, and masked data―and at the same time slash your EBS storage usage?
It’s what Dentegra, one of the largest US health benefits providers, achieved by adding Delphix to their DevOps/AWS stack—enabling them to release new features to the market faster and more efficiently across their hybrid cloud environment.
Webinar | Data Management for Hybrid and Multi-Cloud: A Four-Step Journey - DataStax
Data management may be the hardest part of making the transition to the cloud, but enterprises including Intuit and Macy’s have figured out how to do it right. So what do they know that you might not? Join Robin Schumacher, Chief Product Officer at DataStax, as he explores best practices for defining and implementing data management strategies for the cloud. He outlines a four-step journey that will take you from your first deployment in the cloud through to a true intercloud implementation, and walks through a real-world use case in which a major retailer has evolved through the four phases over a period of four years and is now benefiting from a highly resilient multi-cloud deployment.
View webinar: https://youtu.be/RrTxQ2BAxjg
These are my keynote slides from SQL Saturday Oregon 2023 on AI and the Intersection of AI, Machine Learning, and Economic Challenges as a Technical Specialist.
This is the second session of the learning pathway at PASS Summit 2019; it is still a stand-alone session that teaches you how to write proper Linux Bash scripts.
Securing your Kubernetes cluster: a step-by-step guide to success! - KatiaHIMEUR1
Today, after several years of existence, backed by an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Removing Uninteresting Bytes in Software Fuzzing - Aftab Hussain
Imagine a world where software fuzzing, the process of mutating bytes in test seeds to uncover hidden and erroneous program behaviors, becomes faster and more effective. A lot depends on the initial seeds, which can significantly dictate the trajectory of a fuzzing campaign, particularly in terms of how long it takes to uncover interesting behavior in your code. We introduce DIAR, a technique designed to speed up fuzzing campaigns by pinpointing and eliminating uninteresting bytes in the seeds. Picture this: instead of wasting valuable resources on meaningless mutations in large, bloated seeds, DIAR removes the unnecessary bytes, streamlining the entire process.
In this work, we equipped AFL, a popular fuzzer, with DIAR and examined two critical Linux libraries: libxml's xmllint, a tool for parsing XML documents, and Binutils' readelf, an essential debugging and security analysis command-line tool used to display detailed information about ELF (Executable and Linkable Format) files. Our preliminary results show that AFL+DIAR not only discovers new paths more quickly but also achieves higher coverage overall. This work thus showcases how starting with lean and optimized seeds can lead to faster, more comprehensive fuzzing campaigns, and DIAR helps you find such seeds.
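DIAR's actual analysis is more sophisticated than this, but the core idea of discarding seed bytes that do not affect observed behavior can be sketched as a greedy trimmer. Everything below is illustrative: the `coverage` function is a toy stand-in for a real fuzzer's coverage signal (e.g. AFL's edge coverage), and the chunk size and names are invented.

```python
def coverage(seed: bytes) -> frozenset:
    """Toy stand-in for a fuzzer's coverage signal: which input
    features the target program reacted to."""
    cov = set()
    if seed.startswith(b"<xml"):
        cov.add("header")
    if b"<tag>" in seed:
        cov.add("tag")
    return frozenset(cov)

def trim_seed(seed: bytes, chunk: int = 4) -> bytes:
    """Greedily drop byte chunks whose removal leaves coverage unchanged."""
    baseline = coverage(seed)
    i = 0
    while i < len(seed):
        candidate = seed[:i] + seed[i + chunk:]
        if coverage(candidate) == baseline:
            seed = candidate   # chunk was uninteresting: remove it
        else:
            i += chunk         # chunk matters: keep it, move on
    return seed
```

Starting a campaign from such a trimmed seed means every mutation lands on bytes that can actually change program behavior, which is the intuition behind lean seeds uncovering paths faster.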
These are slides from the talk given at the IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 2022.
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses.
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
PHP Frameworks: I want to break free (IPC Berlin 2024)Ralf Eggert
In this presentation, we examine the challenges and limitations of relying too heavily on PHP frameworks in web development. We discuss the history of PHP and its frameworks to understand how this dependence has evolved. The focus will be on providing concrete tips and strategies to reduce reliance on these frameworks, based on real-world examples and practical considerations. The goal is to equip developers with the skills and knowledge to create more flexible and future-proof web applications. We'll explore the importance of maintaining autonomy in a rapidly changing tech landscape and how to make informed decisions in PHP development.
This talk is aimed at encouraging a more independent approach to using PHP frameworks, moving towards a more flexible and future-proof approach to PHP development.
Elevating Tactical DDD Patterns Through Object CalisthenicsDorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
SAP Sapphire 2024 - ASUG301 building better apps with SAP Fiori.pdfPeter Spielvogel
Building better applications for business users with SAP Fiori.
• What is SAP Fiori and why it matters to you
• How a better user experience drives measurable business benefits
• How to get started with SAP Fiori today
• How SAP Fiori elements accelerates application development
• How SAP Build Code includes SAP Fiori tools and other generative artificial intelligence capabilities
• How SAP Fiori paves the way for using AI in SAP apps
Generative AI Deep Dive: Advancing from Proof of Concept to ProductionAggregage
Join Maher Hanafi, VP of Engineering at Betterworks, in this new session where he'll share a practical framework to transform Gen AI prototypes into impactful products! He'll delve into the complexities of data collection and management, model selection and optimization, and ensuring security, scalability, and responsible use.
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
DevOps and Testing slides at DASA ConnectKari Kakkonen
My and Rik Marselis slides at 30.5.2024 DASA Connect conference. We discuss about what is testing, then what is agile testing and finally what is Testing in DevOps. Finally we had lovely workshop with the participants trying to find out different ways to think about quality and testing in different parts of the DevOps infinity loop.
GraphRAG is All You need? LLM & Knowledge GraphGuy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
83% of development is currently performed in the cloud, so we can’t discuss development today without including the cloud.
Delays are not an option, including those in hardware and IT allocation.
This creates a need for the cloud that we hadn't previously experienced.
The ability for developers to allocate an environment and start working within minutes is essential to agile.
Of the 935 professionals surveyed,
35 percent say that they lack the experience or the expertise to confidently perform a system migration.
No surprise then that a worrying 44 percent of businesses experienced a migration failure in 2015-2017
In addition, 57 percent of migrations required more resources than anticipated.
I speak to many DBAs about DevOps as the third option for the future of their career. It’s not just about operations or development, but a hybrid of both and the power of automation.
Our natural skill sets will benefit this. Data is at the center of what holds DBAs back, but that friction can be mitigated and even eliminated.
Really? You’re all saying this, I know.
Data is center of this. I’m a DBA, I know this.
Data sources are central to most development projects, and as applications and code are migrated to the cloud, so must the data sources.
For a typical Fortune 1000 company, just a 10% increase in data accessibility will result in more than $65 million in additional net income.
Leveraging data could increase revenue by as much as 60%.
Data Gravity has two definitions and both revolve around data sources.
For the first, there are larger data sources every day. Databases are at the center of this friction and the natural life of a database is growth.
By 2020, a third of all data will be in the cloud and 58% of data will be part of big data workloads.
More data has been created in the past two years than in the entire previous history of the human race.
Data isn’t going to slow down, either. By the year 2020, about 1.7 megabytes of new information will be created every second for every human being on the planet.
By 2020, we'll grow from today's 4.4 zettabytes to a staggering 44 zettabytes, or 44 trillion gigabytes.
And by 2020, a third of that data will pass through the cloud.
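As a sanity check on those figures, the unit conversion is simple (1 zettabyte = one trillion gigabytes):

```python
# Projected growth of the global datasphere, using the figures above.
ZB_TO_GB = 10**12  # 1 zettabyte = one trillion gigabytes

data_today_zb = 4.4   # zettabytes today (per the slide)
data_2020_zb = 44.0   # projected zettabytes by 2020

growth_factor = data_2020_zb / data_today_zb
data_2020_gb = data_2020_zb * ZB_TO_GB

print(f"Growth: {growth_factor:.0f}x")        # 10x
print(f"2020 volume: {data_2020_gb:.1e} GB")  # 4.4e+13 GB, i.e. 44 trillion gigabytes
```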
For the second, data gravity is the ability of bodies of data to attract applications, services and other data. IT expert Dave McCrory coined the term as an analogy to the way that, in accordance with the physical laws of gravity, objects with more mass attract those with less.
And yet we say that we won't need DBAs? That data isn't at the center of the challenge?
Per Forbes, by the year 2020, about 1.7 megabytes of new information will be created every second for every human being on the planet, and more data has been created in the past two years than in the entire previous history of the human race.
That data has to be stored somewhere and there’s a large chance it’s going to be in a relational data store.
Everything on premises needs to move to the cloud. There's commonly one overall project with three sub-projects that will accomplish all of this.
DBAs will be working to move the database and all the data.
Application and development teams will work to migrate the application and support files.
Administrators will work to prepare the cloud environments if it's IaaS.
Data gravity suffers from the von Neumann bottleneck, a basic limitation on how fast computers can be. Simply put, the transfer speed between where data resides and where it is processed is the limiting factor in computing speed.
This is why we have numerous local caches to address memory issues while processing data.
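As a toy illustration of that principle (not Delphix code), here's a cached fetch in Python: the simulated transfer cost is paid only on the first touch of each block, which is exactly why keeping hot data close to the processor matters.

```python
from functools import lru_cache

# Every "remote" read pays a transfer cost; a cache avoids repaying it.
# fetch_block is a stand-in for a real data store, for illustration only.
fetch_count = 0

@lru_cache(maxsize=None)
def fetch_block(block_id: int) -> bytes:
    """Simulate pulling a data block across the storage/memory boundary."""
    global fetch_count
    fetch_count += 1          # each call that misses the cache costs a transfer
    return b"x" * 8192        # pretend this is an 8 KB block

# A workload that touches the same few blocks repeatedly:
for _ in range(1000):
    for block in (1, 2, 3):
        fetch_block(block)

print(fetch_count)  # 3: only the first touch of each block paid the cost
```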
Microsoft researcher Jim Gray spent most of his career looking at the economics of data, which I think is pretty cool!
This is why DBAs spend so much time on the cost based optimizer, as moving or getting data has cost.
After all is said and done, this is the fundamental principle of data gravity and why DBAs get the big bucks.
We'll start with three areas that solve the problem of data gravity and data friction in agile development:
We Virtualize.
RMAN duplicates, cold-backup restores, Data Pump and other archaic data transfer processes are time consuming.
By virtualizing, we remove the "weight" of the data. We know that 80% of the data won't change between copies, so why do we need individual copies of it? The source is then deduped and compressed to conserve even more space.
Each Virtual Database (VDB) no longer requires its own storage, only background and foreground memory (SGA/PGA, etc.) and local redo logs. This is a considerable savings, but…
If we take this a step further by writing only the blocks changed from the source, we can fit 10-20 copies of a database in about the same space that one database requires.
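A back-of-the-envelope model makes the savings concrete. The ratios below (compression of the shared base image, fraction of blocks each copy changes) are illustrative assumptions, not product figures:

```python
def virtual_copy_storage_tb(base_tb: float, n_copies: int,
                            changed_fraction: float = 0.05,
                            compression_ratio: float = 0.5) -> float:
    """Rough storage model for copy-on-write virtual databases.

    One deduped/compressed base image is shared by all copies; each
    virtual copy stores only the blocks it has changed. All ratios
    here are illustrative assumptions.
    """
    shared_base = base_tb * compression_ratio
    per_copy_deltas = n_copies * base_tb * changed_fraction
    return shared_base + per_copy_deltas

# Ten physical copies of a 10 TB database vs. ten virtual copies:
physical_tb = 10 * 10.0                                   # 100 TB
virtual_tb = virtual_copy_storage_tb(base_tb=10.0, n_copies=10)

print(physical_tb, virtual_tb)  # ten copies in roughly the space of one
```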
The business is able to provision new environments or refresh existing ones in a matter of minutes.
Developers and testers who've worked with bookmarks and branching of their code changes can now do the same with database changes, rewinding and refreshing as they need without impacting the DBA's day. This frees the DBA to do more with their time.
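The bookmark-and-rewind workflow can be sketched with a toy class. The names and behavior here are hypothetical, for illustration only, and not a vendor API; the point is that only changed blocks are tracked, so snapshots are cheap.

```python
import copy

class VirtualDatabase:
    """Toy model of bookmark/rewind on a copy-on-write virtual database.

    Only changed blocks are tracked; the shared base image is never
    touched. Class and method names are hypothetical illustrations.
    """
    def __init__(self):
        self.changed_blocks = {}   # block_id -> new contents
        self.bookmarks = {}        # bookmark name -> snapshot of changes

    def write(self, block_id, data):
        self.changed_blocks[block_id] = data

    def bookmark(self, name):
        # Snapshotting is cheap: only the deltas are copied.
        self.bookmarks[name] = copy.deepcopy(self.changed_blocks)

    def rewind(self, name):
        self.changed_blocks = copy.deepcopy(self.bookmarks[name])

vdb = VirtualDatabase()
vdb.write(1, "schema v1")
vdb.bookmark("before-release")
vdb.write(1, "schema v2 (buggy)")
vdb.rewind("before-release")    # back to the bookmarked state in seconds
print(vdb.changed_blocks[1])    # schema v1
```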
Having tools that include the database in the Agile development cycle makes a pivotal change in how the DBA is capable of being part of DevOps.
The goal is to add this replication to both the dSource (the golden copy) and the target virtual images/data pods.
The next step is moving to data pods. Containers are a buzz area of technology right now. Whether we're talking Docker or Kubernetes, we know this is the way of the future. Instead of locked, unique environments, the ability to package them as one, in a lighter and more flexible unit, makes incredible sense.
As a DBA, I rarely, if ever, released code to just the database. It was commonly to the database, the application and linked products.
The ability to package and manage as a Data Pod is an impressive enhancement to the Developer, tester and DBA.
The next step is the ability to migrate to the cloud or from one cloud to another. Right now, 60% of customers are using 2-5 clouds on average. The ability to move a Data Pod from one cloud to another is incredibly powerful.
Companies are spending increased time now not just migrating to the cloud, but between clouds, and if it were as simple as migrating a data pod with a few changes to the new storage location (i.e. the cloud), that could save companies millions of dollars.
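A hypothetical manifest shows the idea: the pod bundles the database, the application tier and the flat files as one unit, and a migration amounts to repointing the storage target. All field names here are illustrative assumptions, not a real product schema.

```python
# Hypothetical "data pod" manifest: everything an environment needs
# travels together. Field names are illustrative only.
data_pod = {
    "name": "orders-dev",
    "components": {
        "database": "orders_db (virtualized dSource snapshot)",
        "application": "orders-app v2.3",
        "flat_files": ["config/", "reports/"],
    },
    "storage_target": "on-prem-datacenter",
}

def migrate_pod(pod: dict, target: str) -> dict:
    """Repoint the pod at a new storage location; contents are unchanged."""
    return dict(pod, storage_target=target)

cloud_pod = migrate_pod(data_pod, "cloud-region-us-east")
print(cloud_pod["storage_target"])   # cloud-region-us-east
```

The design point is that the pod's contents never change during a migration; only the `storage_target` does, which is what makes moving between clouds cheap.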
With virtualization, we virtualize it all- the database, data sources, application and flat files. We containerize it into a Data Pod
By doing so, it's easier to lift and shift to the cloud. It's lighter, and we can move 20+ environments in the same space as one.
Full refreshes are excellent, but for progressive deployments and releases, replication can offer an added layer to cloud migrations.
So we evolve from being a DBA to embracing more DevOps skills, including more scripting, scheduling and automation, then include DataOps to create a flexible, agile solution.
There are two areas that the DBA can build out on their path to DevOps that are database centric and crucial to the solution:
Virtualize
Data pods to package environments.
By doing all of these things, the DBA can bring impressive changes to any team and become valued as a key member of the agile development cycle.