Data chat with Kyle Hailey on how data virtualization with Delphix can aid Oracle DBAs in their day-to-day tasks. We talk about the basic issues facing Oracle DBAs and how specific use cases can be improved dramatically.
Virtualized storage is fast becoming the new norm.
Nobody can justify provisioning non-production environments the way they did up to now.
This presentation is about how Delphix removes the biggest bottleneck in IT operations, development, and QA by virtualizing data. It identifies the bottleneck and the impact on IT, then describes how Delphix removes it to enable DevOps continuous delivery.
1. Delphix is a data virtualization platform that creates virtual copies of database environments for development, testing, and analytics.
2. It collects changes incrementally from the source database and stores them in a compressed format, allowing for quick provisioning of clones with near-zero storage.
3. Use cases for Delphix include accelerating development by enabling self-service cloning, facilitating forensics and testing by rewinding databases to past states, and improving BI performance through fast refresh of data warehouses.
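The "near-zero storage" claim in point 2 comes down to copy-on-write: every clone shares the source snapshot's blocks and stores only the blocks it modifies. A toy sketch of that mechanism (class and block names are hypothetical; Delphix's real engine works at the filesystem/block level, not in Python dicts):

```python
class SourceSnapshot:
    """An immutable snapshot of the source database's blocks."""
    def __init__(self, blocks):
        self.blocks = dict(blocks)  # block_id -> data, shared read-only

class ThinClone:
    """A virtual copy: reads fall through to the snapshot,
    writes land in a small private overlay (copy-on-write)."""
    def __init__(self, snapshot):
        self.snapshot = snapshot
        self.overlay = {}  # only modified blocks consume new storage

    def read(self, block_id):
        return self.overlay.get(block_id, self.snapshot.blocks.get(block_id))

    def write(self, block_id, data):
        self.overlay[block_id] = data

snap = SourceSnapshot({0: b"users", 1: b"orders", 2: b"audit"})
dev = ThinClone(snap)   # provisioned instantly: no blocks copied
qa = ThinClone(snap)    # a second clone shares the same snapshot

dev.write(1, b"orders-modified")
print(dev.read(1))       # the clone sees its own change
print(qa.read(1))        # the sibling clone still sees the original
print(len(dev.overlay))  # storage consumed: one block, not three
```

Because provisioning is just creating an empty overlay, a clone appears in seconds regardless of source size, which is what enables the self-service cloning use case above.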
Platform as reflection of values: Joyent, node.js, and beyond (bcantrill)
This document discusses how software platform decisions reflect the values of their communities. It outlines many values that platforms may aspire to, such as debuggability, robustness, and stability. However, it notes that platforms must balance these values, and different communities will prioritize certain "core values" over others. The document traces how Joyent's values diverged from those of the node.js community, particularly around priorities like compatibility versus robustness. It argues that values should be clearly defined and notes the challenges that arise when a platform's and organization's values do not align.
The software image installs on an x86 server with SAN storage, a VM, or the cloud in under an hour. Delphix for SQL Server uses a proxy host to restore a full backup of the source database and then maintains synchronization by restoring transaction log backups as they become available, constructing a TimeFlow. Virtual databases can then be provisioned instantly from snapshots with no database recovery required.
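The full-backup-plus-log-restore loop can be pictured as a timeline of consistent restore points; provisioning simply picks the latest point at or before the requested time. A minimal model of that idea (the `TimeFlow` class and its state dictionaries are illustrative stand-ins, not Delphix's actual data structures):

```python
import bisect

class TimeFlow:
    """Toy model of a TimeFlow: a full-backup baseline plus an ordered
    chain of transaction-log restores. Provisioning picks the latest
    restore point at or before the requested time; no recovery is
    needed because every point is already a consistent snapshot."""
    def __init__(self, base_time, base_state):
        self.points = [(base_time, base_state)]  # (timestamp, db state)

    def apply_log(self, timestamp, change):
        # Each restored log backup extends the timeline with a new point.
        _, last = self.points[-1]
        self.points.append((timestamp, {**last, **change}))

    def provision(self, at_time):
        times = [t for t, _ in self.points]
        i = bisect.bisect_right(times, at_time) - 1
        if i < 0:
            raise ValueError("requested time precedes the full backup")
        return dict(self.points[i][1])  # a virtual DB at that moment

tf = TimeFlow(100, {"row": "v1"})
tf.apply_log(110, {"row": "v2"})
tf.apply_log(120, {"row": "v3"})
print(tf.provision(115))  # state as of t=115
```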
Why it’s (past) time to run containers on bare metal (bcantrill)
Bryan Cantrill argues that containers have evolved from their origins in Unix to become the future of computing. While Docker popularized containers, running containers within virtual machines negates their efficiency advantages. Triton, which combines Docker and SmartOS containers running directly on hardware, provides the security and performance of containers with the simplicity of Docker. However, the container ecosystem remains in flux regarding higher-level concerns like service composition.
XCP exposes a fully featured management API called XAPI. But today, there is no active open source project providing a web GUI that uses XAPI to its full potential. Xen Orchestra was originally designed as a web interface for Xen in 2009, and is undergoing a complete rewrite to fill this gap.
First, we will examine interesting features of XAPI, such as events and pools, that allow easy administration of a virtualized environment. Then, we will see how these features fit into the Xen Orchestra architecture, which has been completely redesigned to reduce connections and bandwidth waste, store structured data, allow persistence, and so on. Finally, we will show how we display all that information (ergonomics choices from an ergonomist). We will conclude quickly on how you can engage and contribute to the Xen Orchestra project and make sure it fulfills your needs.
Accelerating Devops via Data Virtualization | Delphix (DelphixCorp)
“Accelerating DevOps Using Data Virtualization” at the Collaborate 2016 conference in Las Vegas. It discusses the inevitability of data virtualization and its many use cases.
Delphix allows databases to run as software rather than hardware, using less space while maintaining full functionality and performance. It turns database servers into a single, virtual authority that can consolidate databases and instantly provision copies for development, testing, and other non-production uses. This cuts capital expenses by 50% and operational expenses by 90% while accelerating innovation by eliminating the time and costs associated with copying and moving databases between environments.
Leaping the chasm from proprietary to open: A survivor's guide (bcantrill)
This document summarizes the history of open source software and the challenges of transitioning proprietary software to open source. It discusses how Sun open sourced Solaris but Oracle later closed it, leading to the illumos community. It also describes how Joyent open sourced its SmartDataCenter and Manta software after initially being proprietary, noting the business reasons and technical challenges involved in such a transition. The document advocates for permissive licensing and avoiding strong governance for most open source projects.
Bare-metal, Docker Containers, and Virtualization: The Growing Choices for Cl... (Odinot Stanislas)
A very friendly introduction to Cloud environments, with a particular focus on virtualization and containers (Docker).
Author: Nicholas Weaver – Principal Architect, Intel Corporation
My (very brief!) presentation at Interzone.io on March 11, 2015. A more in depth exploration of these ideas can be found at http://www.slideshare.net/bcantrill/docker-and-the-future-of-containers-in-production video: https://www.joyent.com/developers/videos/docker-and-the-future-of-containers-in-production
Xen Orchestra: XAPI and XenServer from the web - XPUS13 Lambert (The Linux Foundation)
Xen Orchestra is a web-based management tool for the XAPI toolstack developed by the Xen Project. XAPI is a fully featured management API for Xen that is also used by the recently open-sourced XenServer. We'll see how Xen Orchestra leverages XAPI to allow complete control of your virtualized infrastructure. First, we'll quickly explain the XO architecture (cache system, asynchronous events, user management with tokens…). Then we'll review current and future possibilities to show what you can expect from this solution: powerful visualizations with d3js, a neat interface, orchestration features, and integration with all XAPI-capable hosts (XenServer or any distro with XAPI packages, such as Debian, Ubuntu or CentOS). Finally, we'll talk about how to contribute.
node.js in production: Reflections on three years of riding the unicorn (bcantrill)
Node.js was initially challenging to use in production due to memory leaks and lack of debugging tools. Over three years, Joyent developed tools like DTrace probes, MDB for debugging core dumps, Bunyan for logging, and node-restify for building HTTP services to make node.js more reliable and observable in production. These tools helped Joyent successfully deploy many internal services using node.js and identify issues through postmortem analysis. Joyent continues working to improve node.js for production use.
Delphix turns database hardware into software that runs in a fraction of the space, preserving full functionality and performance. It aims to consolidate non-production databases, automatically refresh and provision them, cutting capital expenses by 50% and slashing operating expenses by 90%. Delphix's solution provides instant virtual copies of databases with no impact to production for accelerated application development, real-time reporting, compliance, and more.
The Peril and Promise of Early Adoption: Arriving 10 Years Early to Containers (bcantrill)
The document discusses Joyent's early adoption of OS-level virtualization through containers and SmartOS beginning in 2005. It explores the benefits of OS-level virtualization for performance, elasticity, and security compared to hardware virtualization. It also discusses Joyent's work developing platforms like no.de and Manta that combined OS containers with technologies like node.js and ZFS. A key challenge was gaining developer adoption due to SmartOS being different than mainstream Linux. Docker later helped popularize the container model. Joyent contributed to container technologies through projects like porting KVM to SmartOS and reviving Linux container support in illumos.
The Container Revolution: Reflections after the first decade (bcantrill)
The document summarizes the history and evolution of containers over the past decade and a half. It discusses:
- The origins of containers in Unix in the 1970s-80s with chroot. Early implementations in the 2000s included FreeBSD jails and Solaris zones.
- Docker in the early 2010s popularized containers by making them easy for developers to use. This helped accelerate adoption, especially with microservices.
- Joyent developed technologies like SmartOS zones, Manta, and Triton to take advantage of containers' performance and flexibility benefits compared to VMs.
- Going forward, frameworks should be more modular, like libraries, to maintain flexibility. Failure handling also needs work to make distributed container systems robust.
Docker's Remote API allows for implementations of Docker that are radically different from the reference implementation. Joyent implemented the Docker Remote API in their SmartDataCenter product to virtualize the Docker host and allow Docker containers to run on any machine in their data center. This allows them to leverage capabilities of SmartOS like ZFS, DTrace, and virtualized networking. By unlocking innovation down the stack, the Remote API is Docker's killer feature: it does not imply physical co-location of containers and is flexible enough to accommodate different implementations.
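The decoupling described above follows from the API being an HTTP contract rather than a library: any backend that answers the same endpoints is a valid Docker host. A sketch of that idea, where a client written once against the API works against two interchangeable backends (`GET /containers/json` is a real Remote API endpoint; the backend classes here are illustrative stand-ins, not real implementations):

```python
import json

class ReferenceDockerBackend:
    """Stand-in for the stock single-host Docker daemon."""
    def handle(self, method, path):
        if (method, path) == ("GET", "/containers/json"):
            return json.dumps([{"Id": "abc123", "Image": "nginx"}])
        raise NotImplementedError(path)

class DatacenterBackend:
    """Stand-in for an SDC-style implementation that schedules
    containers anywhere in a datacenter behind the same API."""
    def handle(self, method, path):
        if (method, path) == ("GET", "/containers/json"):
            return json.dumps([{"Id": "def456", "Image": "nginx",
                                "Node": "compute-node-17"}])
        raise NotImplementedError(path)

def list_containers(backend):
    # The client depends only on the API shape, never on which
    # engine happens to be answering it.
    return json.loads(backend.handle("GET", "/containers/json"))

for backend in (ReferenceDockerBackend(), DatacenterBackend()):
    print([c["Id"] for c in list_containers(backend)])
```

The same client code lists containers from both backends, which is exactly why "the Docker host" can be an entire datacenter instead of one machine.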
This document is a slide deck about Hyper-V high availability and live migration presented by Greg Shields of Concentrated Technology. The deck covers understanding live migration and its role in Hyper-V HA, fundamentals of Windows failover clustering, building a two-node Hyper-V cluster with iSCSI storage, managing a Hyper-V cluster, and adding disaster recovery with multi-site clustering. The deck is intended to help IT professionals implement and manage highly available Hyper-V environments.
Google uses virtualization for internal corporate infrastructure. As part of this, we have developed a number of tools, some open source, for managing the Xen deployment. The talk will describe the technical infrastructure used, the internal workflows and machine management processes, and the specific use-cases for virtualization.
XPDS14: Xen and the Art of Certification - Nathan Studer & Robert VonVossen, ... (The Linux Foundation)
With the rapid growth in computing power of embedded platforms, system designers are turning to hypervisors to consolidate functionality in order to reduce the Size, Weight, Power, and Cost of embedded systems. With the recent addition of ARM support to the Xen hypervisor, Xen provides an attractive Open Source option for such systems. However, some of the industries most interested in this technology, such as automotive, medical, and avionics, have strict safety certification requirements. Nathan Studer will give a brief overview of DornerWorks' efforts certifying Xen, describe the hurdles and advantages that Xen and its development model lend to the certification effort, and lay out a proposed path for certifying Xen.
Denver devops: enabling DevOps with data virtualization (Kyle Hailey)
This document discusses how data constraints can limit DevOps efforts and proposes a solution using virtual data and thin cloning. It notes that moving and copying production data is challenging due to storage, personnel, and time requirements, which typically results in bottlenecks, long waits for environments and code check-ins, and production bugs. The solution presented is a data virtualization platform that takes thin clones of production data using file system snapshots, compresses the data, and shares it across environments through a centralized cache. This allows self-service provisioning of database environments and accelerates DevOps processes.
Undine: Turnkey Drupal Development Environments (David Watson)
Undine is a cross-platform, fully-featured development VM (virtual machine) for Drupalistas of all experience levels. Sponsored by Stevens Institute of Technology, it is a turnkey solution to many of the common pain points encountered in developing for Drupal.
Download Undine: http://drupal.org/project/undine
This document discusses the history and future of containers. It begins by covering the origins of containers in Unix and how they evolved. It then discusses the limitations of prior container and virtualization technologies, and how hardware virtualization became prevalent. The document outlines how Joyent pioneered the use of containers in production and how Docker revolutionized containers by making them easy for developers. It proposes a future where containers can run directly on hardware for maximum performance while maintaining security, as exemplified by Joyent's Triton platform.
VMworld 2013: How UC San Francisco Delivered ‘Science as a Service’ with Priv... (VMworld)
This document discusses a project between UC San Francisco and VMware to test running high-performance computing (HPC) workloads in a private cloud environment. The project aimed to prove that certain life sciences workloads could run virtually without significant performance degradation compared to dedicated hardware. An initial private cloud was set up using Dell servers and storage from EMC, DDN, and Mellanox switches. Benchmarking of applications like BLAST, Bowtie, and R was planned to compare performance between bare-metal and virtualized environments. The results would assess whether the private cloud could provide benefits like self-service provisioning, multi-tenancy, and isolation of workloads.
Docker is the new cool kid in town. This presentation covers some of the common goof-ups and what should be kept in mind when dealing with Docker configurations.
Download the Vulnerable Docker VM : https://www.notsosecure.com/vulnerable-docker-vm/
This slide deck describes some of the best practices found when running Oracle Database inside a Docker container. Those best practices are general observations collected over time and may not reflect your actual environment or current situation.
This document discusses the transition from DevOps to DataOps. It begins by introducing the speaker, Kellyn Pot'Vin-Gorman, and her background. It then provides definitions and histories of DevOps and some common DevOps tools and practices. The document argues that database administrators (DBAs) need to embrace DevOps tools and practices like automation, version control, and database virtualization in order to stay relevant. It presents database virtualization and containerization as ways to overcome "data gravity" and better enable continuous delivery of database changes. Finally, it discusses how methodologies like Agile, Scrum, and Kanban can be combined with data-centric tools to transition from DevOps to DataOps.
Microservices with Terraform, Docker and the Cloud. JavaOne 2017, 2017-10-02 (Derek Ashmore)
This document summarizes a presentation on managing microservices using Terraform, Docker, and cloud infrastructure. The presentation covered microservices and containerization with Docker, and deploying microservices to AWS using infrastructure as code with Terraform. It also compared Terraform to other configuration management tools and outlined an accompanying hands-on lab in which attendees deploy a sample Java microservice to AWS using Terraform.
Microservices with Terraform, Docker and the Cloud. Chicago Coders Conference... (Derek Ashmore)
Much has been written about how to write microservices, but not enough about how to effectively deploy and manage them. A microservices architecture multiplies the number of deployables IT has to manage by at least 10x. In that world, tooling to manage cloud deployments and related infrastructure becomes essential for success. Terraform and Docker are increasingly being leveraged to facilitate microservice environments. Terraform has become the leading coding framework for building and managing change in cloud environments.
Attendees will learn best practices for deploying and managing microservices in production. We will leverage true "infrastructure as code" using Terraform. That code is easily reused and makes changes easy; it makes it simple to deploy and scale software, including Docker images. You will learn not only how to establish that environment initially, but also how changes can be effectively managed. I'll cover best practices and common mistakes along the way. AWS will be used as the cloud provider, but Terraform operates seamlessly on other cloud environments as well.
This session is targeted at architects and team leads. This session is intended to be platform-generic.
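The "infrastructure as code" idea in the abstract above can be sketched concretely. Terraform also accepts a JSON configuration syntax (*.tf.json), so a minimal deployment description can be generated from Python; the AMI ID, image name, and region below are illustrative placeholders, not values from the talk.

```python
import json

def make_microservice_config(name: str, image: str, instance_type: str = "t3.micro") -> dict:
    """Build a minimal Terraform JSON (*.tf.json) document describing one
    EC2 instance that runs a Dockerized microservice on boot."""
    user_data = f"#!/bin/bash\ndocker run -d --restart=always {image}"
    return {
        "provider": {"aws": {"region": "us-east-1"}},
        "resource": {
            "aws_instance": {
                name: {
                    "ami": "ami-12345678",  # placeholder AMI, not a real image ID
                    "instance_type": instance_type,
                    "user_data": user_data,
                    "tags": {"Name": name},
                }
            }
        },
    }

config = make_microservice_config("orders_service", "example/orders:1.4.2")
print(json.dumps(config, indent=2))  # save as main.tf.json, then `terraform apply`
```

Because the configuration is plain data, the same generator can stamp out one instance per microservice, which is exactly where managing "at least 10x" the deployables starts to pay off.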
This document discusses optimizing testing using data virtualization. It describes how data is often the constraint in software development and testing processes. Traditional attempts to solve this problem, like copying subsets of production data or taking snapshots, are inefficient and don't provide developers and testers access to fresh, full production data. The document introduces data virtualization as a solution, allowing instant provisioning of full production databases on demand for various testing environments.
This document discusses Vimeo's architecture and tools for video transcoding. It summarizes:
1. Vimeo uses a distributed transcoding pipeline that leverages tools like Gearman for job scheduling and FFmpeg for encoding. Video files are split into chunks that are encoded in parallel across multiple servers.
2. Popular open source multimedia tools used include FFmpeg, x264, L-SMASH and ffms2. Vimeo contributes back to these projects and others to support long-term maintainability.
3. Emerging technologies discussed include VP9, DASH, HEVC and Opus, along with notes on bandwidth limitations and the state of multimedia development in Europe versus North America.
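The chunked-pipeline idea in point 1 can be sketched as follows. This is an illustrative outline, not Vimeo's actual code: the chunk length, ffmpeg flags, and file names are assumptions, and a real farm would schedule the commands across many servers (e.g. via Gearman) rather than a local process pool.

```python
from concurrent.futures import ProcessPoolExecutor
import subprocess

def chunk_spans(duration_s: float, chunk_s: float):
    """Split a video's timeline into (start, length) spans for parallel encoding."""
    spans, start = [], 0.0
    while start < duration_s:
        spans.append((start, min(chunk_s, duration_s - start)))
        start += chunk_s
    return spans

def ffmpeg_cmd(src: str, start: float, length: float, out: str):
    """Build an ffmpeg command that encodes one chunk with x264 (flags illustrative)."""
    return ["ffmpeg", "-ss", str(start), "-t", str(length), "-i", src,
            "-c:v", "libx264", "-an", out]

def encode_parallel(cmds):
    """Run the chunk encodes concurrently; chunks would then be concatenated
    (for example with ffmpeg's concat demuxer)."""
    with ProcessPoolExecutor() as pool:
        return list(pool.map(subprocess.run, cmds))

cmds = [ffmpeg_cmd("input.mov", s, l, f"chunk_{i:04d}.mp4")
        for i, (s, l) in enumerate(chunk_spans(95.0, 10.0))]
print(len(cmds), "chunks to encode in parallel")
```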
Jenkins (formerly known as Hudson) is one of the most widely used tools in Java (and beyond) to support continuous integration. Created as a hobby project, it quickly became a strategic tool for most development teams. Designed for extensibility, it also chose from the beginning an incremental development model, literally applying the 'release early, release often' principle. It focused on building a large, active community, with the lowest contribution barrier I have ever seen on an open source project and complete transparency in project management, making Jenkins something uncommon in the open source world. During this session, I'll explain the Jenkins management and technical model, how it promotes contribution, and how it allows CloudBees to both support the open source community-driven project and deliver business value with proprietary extensions.
The influence of "Distributed platforms" on #devops, by Kris Buytaert
The document discusses the evolution of devops practices from the early 2000s to present day. It describes how early tools like openMosix helped distribute processes across nodes but had limitations that prevented widespread adoption. Linux-HA helped make high availability services more common by defining resources and constraints. This highlighted the need for applications to be adaptable and share state in a distributed manner. Private clouds initially failed to adopt configuration management, monitoring, and other devops practices. Today, containers are more widely used but often resemble virtual machines with multiple services and no standard way to connect or monitor them. Adopting microservices and devops fully requires changes across software, infrastructure, mindsets and organizations.
The document is a slide deck presentation by Bret Fisher on going into production with Docker and Swarm. Some key points from the presentation include focusing first on Dockerfiles rather than complex orchestration, avoiding anti-patterns like using the "latest" tag or trapping unique data in containers, and starting with a simple 3 node Swarm cluster for high availability before scaling up further. The presentation also provides examples of full tech stacks using various open source and commercial tools for a Dockerized infrastructure.
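The "latest" tag anti-pattern called out above can be caught mechanically with a small lint pass over service definitions before deployment. This is an illustrative sketch; the service names and the exact pinning rules are hypothetical.

```python
def pinned(image: str) -> bool:
    """True if an image reference carries an explicit, non-'latest' tag or a digest."""
    if "@sha256:" in image:          # digest-pinned references are fully reproducible
        return True
    name, sep, tag = image.rpartition(":")
    # rpartition finds the last ':'; with no tag at all, `name` comes back empty,
    # and a '/' in `tag` means the ':' belonged to a registry port, not a tag
    return bool(sep) and bool(name) and "/" not in tag and tag != "latest"

def lint_services(services: dict) -> list:
    """Return the service names whose images would silently float to a new build."""
    return [svc for svc, spec in services.items() if not pinned(spec["image"])]

services = {
    "web":   {"image": "nginx:1.25.3"},
    "api":   {"image": "example/api:latest"},   # anti-pattern: floats over time
    "queue": {"image": "redis"},                # no tag: implicitly 'latest'
}
print(lint_services(services))  # → ['api', 'queue']
```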
Platform Engineering for the Modern Oracle World, by Simon Haslam
DevOps has become the de facto approach for custom software delivery. Yet, if automation is claimed to be the answer to all ills, why do many organisations struggle to implement it well? This session reflects on experiences from the last decade or so of provisioning projects, highlighting lessons (and one or two regrets!) and considers how organisations building custom software should focus their Oracle platform engineering efforts to deliver better software to users, faster.
This presentation reviews these steps, scenarios and more:
• What is this database going to be used for – a reporting server or data warehouse, or as an operational database supporting an application?
• Which resources should I spend the budget on to ensure optimal database performance – bigger servers, more CPUs/cores, disks, or more memory?
• What are my backup requirements? If I ever need to restore, how far back do I need to go and what will that mean to the business?
• How will I handle any hot fixes, such as security patches? What downtime can be afforded and what processes need to be in place to apply critical or maintenance updates?
• What are my replication and failover requirements and what should I do for my high availability configuration?
To listen to the recording visit www.enterprisedb.com - click on the Resources tab - and review the list of On-Demand Webcasts. If you have further questions, email sales@enterprisedb.com.
1. The document outlines the author's journey to becoming a Docker Captain, including founding their company Collabnix in 2015 and containerizing legacy Dell applications.
2. It discusses what Docker is and how it helps address the modern challenges of developing and deploying distributed, loosely coupled applications across multiple servers.
3. Docker Captains are elite community leaders and ambassadors who promote Docker through blogging, writing, speaking, tutorials, and open source contributions. The tips shared encourage getting involved in the Docker community by sharing knowledge and speaking at events.
Microservices with Terraform, Docker and the Cloud. IJug Chicago, 2017-06-06, by Derek Ashmore
Chris Swan at QCon 2014: Using Docker in Cloud Networks, by Cohesive Networks
This document discusses Docker, DevOps, and security issues related to containers. It notes that while Dockerfiles are productive for DevOps, containers do not fully contain processes yet. It also points out that images in Docker have a "manifest problem" where it is unclear which exact versions of packages and dependencies were installed. The document encourages keeping track of software versions installed in container images to help address this problem.
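The version-tracking remedy suggested above can be sketched simply: capture a package manifest at each image build (for example, the name==version lines that `pip freeze` emits) and diff builds later to see exactly what moved. The package names and versions below are hypothetical.

```python
def parse_freeze(text: str) -> dict:
    """Parse 'name==version' lines (the format `pip freeze` emits) into a dict."""
    pairs = (line.split("==", 1) for line in text.splitlines() if "==" in line)
    return {name: ver for name, ver in pairs}

def manifest_diff(old: dict, new: dict) -> dict:
    """Report packages added, removed, or changed between two image manifests."""
    return {
        "added":   sorted(set(new) - set(old)),
        "removed": sorted(set(old) - set(new)),
        "changed": sorted(p for p in set(old) & set(new) if old[p] != new[p]),
    }

# Hypothetical manifests captured from two builds of the "same" image tag
build_1 = parse_freeze("flask==2.3.2\nrequests==2.31.0\nurllib3==2.0.3")
build_2 = parse_freeze("flask==2.3.2\nrequests==2.32.0\ncertifi==2024.2.2")
print(manifest_diff(build_1, build_2))
```

Stored alongside each image, such a manifest answers the "which exact versions went into this container?" question the slide deck raises.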
Hot Topics: The DuraSpace Community Webinar Series
Series Two: All About Audio and Video Content in Repositories
5-16-12 Webinar, Preserving Audio & Video Digital Media
This presentation briefly introduces the use of Docker for data science.
It presents topics such as the management of containers and the creation of new Docker images.
Managing ScaleIO as Software on Mesos - David vonThenen - Dell EMC World 2017, by {code} by Dell EMC
Software can be complex, but it is a key part of modern data centers. {code}'s ScaleIO Framework for Apache Mesos is a storage framework that automates the complete lifecycle of the ScaleIO storage platform on top of commodity hardware. Moving storage to a framework reduces the complexity involved and transforms the operational approach. Watch how the Mesos framework simplifies all aspects of ScaleIO to provide storage for containerized applications.
This document discusses bringing native Ceph support to Windows environments by removing the need for gateway components. Currently, Windows clients access Ceph storage through iSCSI or Samba gateways, which introduces bottlenecks and single points of failure. SUSE and Cloudbase Solutions are working to port librbd and librados to Windows to allow direct client connections to Ceph storage. This would improve performance and ease deployment compared to gateway models. Potential use cases include Windows-based backup solutions like Veeam using Ceph block devices, and providing shared storage for Microsoft Hyper-V and Cluster Server environments.
The Rise of DataOps: Making Big Data Bite Size with DataOps, by Delphix
Marc embraces database virtualization and containerization to help Dave's team adopt DataOps practices. This allows team members to access self-service virtual test environments on demand. It increases data accessibility by 10%, resulting in over $65 million in additional income. DataOps removes the biggest barrier by automating and accelerating data delivery to support fast development and testing cycles.
Similar to Data chat with Kyle Hailey and Tim Gorman (20)
Discover top-tier mobile app development services, offering innovative solutions for iOS and Android. Enhance your business with custom, user-friendly mobile applications.
Essentials of Automations: Exploring Attributes & Automation Parameters, by Safe Software
Building automations in FME Flow can save time, money, and help businesses scale by eliminating data silos and providing data to stakeholders in real-time. One essential component to orchestrating complex automations is the use of attributes & automation parameters (both formerly known as “keys”). In fact, it’s unlikely you’ll ever build an Automation without using these components, but what exactly are they?
Attributes & automation parameters enable the automation author to pass data values from one automation component to the next. During this webinar, our FME Flow Specialists will cover leveraging the three types of these output attributes & parameters in FME Flow: Event, Custom, and Automation. As a bonus, they’ll also be making use of the Split-Merge Block functionality.
You’ll leave this webinar with a better understanding of how to maximize the potential of automations by making use of attributes & automation parameters, with the ultimate goal of setting your enterprise integration workflows up on autopilot.
The Microsoft 365 Migration Tutorial For Beginner.pptx, by operationspcvita
This presentation will help you understand the power of Microsoft 365. We cover every productivity app included in Office 365. Additionally, we outline common Office 365 migration scenarios and how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
How to Interpret Trends in the Kalyan Rajdhani Mix Chart.pdf, by Chart Kalyan
A Mix Chart displays historical data of numbers in a graphical or tabular form. The Kalyan Rajdhani Mix Chart specifically shows the results of a sequence of numbers over different periods.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
Connector Corner: Seamlessly power UiPath Apps, GenAI with prebuilt connectors, by DianaGray10
Join us to learn how UiPath Apps can directly and easily interact with prebuilt connectors via Integration Service--including Salesforce, ServiceNow, Open GenAI, and more.
The best part is you can achieve this without building a custom workflow! Say goodbye to the hassle of using separate automations to call APIs. By seamlessly integrating within App Studio, you can now easily streamline your workflow, while gaining direct access to our Connector Catalog of popular applications.
We’ll discuss and demo the benefits of UiPath Apps and connectors including:
Creating a compelling user experience for any software, without the limitations of APIs.
Accelerating the app creation process, saving time and effort
Enjoying high-performance CRUD (create, read, update, delete) operations, for seamless data management.
Speakers:
Russell Alfeche, Technology Leader, RPA at qBotic and UiPath MVP
Charlie Greenberg, host
Introduction of Cybersecurity with OSS at Code Europe 2024, by Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
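The dependency-audit idea behind talks like this (comparing locked versions against known advisories, as tools such as bundler-audit do in the Ruby ecosystem) can be sketched as follows. The advisory database, package names, and versions below are invented purely for illustration.

```python
def parse_version(v: str) -> tuple:
    """Turn '1.2.5' into (1, 2, 5) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

# Hypothetical advisory database: package -> (CVE id, first patched version)
ADVISORIES = {
    "examplegem": ("CVE-2024-0001", "1.2.5"),
    "otherlib":   ("CVE-2023-9999", "4.0.2"),
}

def audit(lockfile: dict) -> list:
    """Return (package, cve) pairs where the locked version predates the fix."""
    findings = []
    for pkg, version in lockfile.items():
        if pkg in ADVISORIES:
            cve, patched = ADVISORIES[pkg]
            if parse_version(version) < parse_version(patched):
                findings.append((pkg, cve))
    return findings

locked = {"examplegem": "1.2.3", "otherlib": "4.0.2", "safelib": "0.9.0"}
print(audit(locked))  # → [('examplegem', 'CVE-2024-0001')]
```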
Driving Business Innovation: Latest Generative AI Advancements & Success Story, by Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Dandelion Hashtable: beyond billion requests per second on a commodity server, by Antonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite efforts to optimize hashtables, which go as far as sacrificing core functionality, state-of-the-art designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully-featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server and a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
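A drastically simplified sketch of the closed-addressing, bounded-chaining idea follows. It shows only the data layout and the instant slot reuse on delete; DLHT's lock-free operations, software prefetching, and parallel resizing (the hard parts) are out of scope here.

```python
class BoundedChainTable:
    """Toy closed-addressing hash table: each bucket holds a short chain of
    entries (mimicking key/value pairs packed into one cache line). Deletes
    free their slot immediately, unlike tombstone-based open addressing."""
    BUCKET_SLOTS = 7  # illustrative: ~what fits alongside metadata in a line

    def __init__(self, n_buckets: int = 64):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        chain = self._bucket(key)
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain[i] = (key, value)   # update in place
                return
        chain.append((key, value))        # beyond BUCKET_SLOTS would overflow-chain

    def get(self, key, default=None):
        for k, v in self._bucket(key):    # one bucket = ideally one memory access
            if k == key:
                return v
        return default

    def delete(self, key) -> bool:
        chain = self._bucket(key)
        for i, (k, _) in enumerate(chain):
            if k == key:
                chain.pop(i)              # slot is reusable instantly, no tombstone
                return True
        return False
```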
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/temporal-event-neural-networks-a-more-efficient-alternative-to-the-transformer-a-presentation-from-brainchip/
Chris Jones, Director of Product Management at BrainChip , presents the “Temporal Event Neural Networks: A More Efficient Alternative to the Transformer” tutorial at the May 2024 Embedded Vision Summit.
The expansion of AI services necessitates enhanced computational capabilities on edge devices. Temporal Event Neural Networks (TENNs), developed by BrainChip, represent a novel and highly efficient state-space network. TENNs demonstrate exceptional proficiency in handling multi-dimensional streaming data, facilitating advancements in object detection, action recognition, speech enhancement and language model/sequence generation. Through the utilization of polynomial-based continuous convolutions, TENNs streamline models, expedite training processes and significantly diminish memory requirements, achieving notable reductions of up to 50x in parameters and 5,000x in energy consumption compared to prevailing methodologies like transformers.
Integration with BrainChip’s Akida neuromorphic hardware IP further enhances TENNs’ capabilities, enabling the realization of highly capable, portable and passively cooled edge devices. This presentation delves into the technical innovations underlying TENNs, presents real-world benchmarks, and elucidates how this cutting-edge approach is positioned to revolutionize edge AI across diverse applications.
Main news related to the CCS TSI 2023 (2023/1695), by Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
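The mutation-testing idea can be illustrated on a toy intent-matching chatbot. The single mutation operator shown (deleting an intent) and the intents themselves are hypothetical simplifications of the operator set the paper proposes: a test suite that never exercises a mutated behaviour fails to "kill" that mutant, exposing a weak suite.

```python
import re

def make_bot(intents: dict):
    """intents maps a regex pattern to a canned reply; first match wins."""
    def bot(utterance: str) -> str:
        for pattern, reply in intents.items():
            if re.search(pattern, utterance, re.IGNORECASE):
                return reply
        return "Sorry, I didn't understand."
    return bot

def delete_intent_mutants(intents: dict):
    """One mutation operator: drop a single intent from the chatbot design."""
    for victim in intents:
        yield {p: r for p, r in intents.items() if p != victim}

def mutation_score(intents: dict, tests: list) -> float:
    """Fraction of mutants 'killed', i.e. some test observes a changed reply."""
    mutants = list(delete_intent_mutants(intents))
    original = make_bot(intents)
    killed = sum(
        any(make_bot(m)(u) != original(u) for u in tests) for m in mutants
    )
    return killed / len(mutants)

intents = {r"\bbook\b.*\bflight\b": "Which destination?",
           r"\brefund\b": "Let me check your order."}
weak_tests = ["I want to book a flight"]             # never exercises refunds
strong_tests = weak_tests + ["I need a refund"]
print(mutation_score(intents, weak_tests), mutation_score(intents, strong_tests))
```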
Northern Engraving | Nameplate Manufacturing Process - 2024, by Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
Digital Banking in the Cloud: How Citizens Bank Unlocked Their Mainframe, by Precisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/how-axelera-ai-uses-digital-compute-in-memory-to-deliver-fast-and-energy-efficient-computer-vision-a-presentation-from-axelera-ai/
Bram Verhoef, Head of Machine Learning at Axelera AI, presents the “How Axelera AI Uses Digital Compute-in-memory to Deliver Fast and Energy-efficient Computer Vision” tutorial at the May 2024 Embedded Vision Summit.
As artificial intelligence inference transitions from cloud environments to edge locations, computer vision applications achieve heightened responsiveness, reliability and privacy. This migration, however, introduces the challenge of operating within the stringent confines of resource constraints typical at the edge, including small form factors, low energy budgets and diminished memory and computational capacities. Axelera AI addresses these challenges through an innovative approach of performing digital computations within memory itself. This technique facilitates the realization of high-performance, energy-efficient and cost-effective computer vision capabilities at the thin and thick edge, extending the frontier of what is achievable with current technologies.
In this presentation, Verhoef unveils his company’s pioneering chip technology and demonstrates its capacity to deliver exceptional frames-per-second performance across a range of standard computer vision networks typical of applications in security, surveillance and the industrial sector. This shows that advanced computer vision can be accessible and efficient, even at the very edge of our technological ecosystem.
HCL Notes and Domino license cost reduction in the world of DLAU, by panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefits it brings you. Above all, you surely want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We will explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove redundant or unused accounts to save money. There are also some practices that can lead to unnecessary expenses, for example using a person document instead of a mail-in database for shared mailboxes. We will show you such cases and their solutions. And of course we will explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder will introduce you to this new world. It will give you the tools and the know-how to stay on top of things. You will be able to reduce your costs through an optimized Domino configuration and keep them low going forward.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and redundant accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices to implement right away
47. Delphix Server 4.1
• Platforms
VMware
Amazon EC2
• Databases
Oracle
SQL Server
Postgres
Sybase
• App Data
• Data Masking
• Replication
48. How can Delphix help DBAs?
• Scratch environments
Testing one-off patches, patchset updates (PSUs), and critical patch updates (CPUs)
• SQL tuning
When tuning a specific SQL statement, how can you effectively
test the impact of…
- adding or dropping an index?
- gathering CBO statistics a bit differently?
…without getting an act of Congress/Parliament?
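The index question above becomes cheap to answer once you have a disposable copy of the data. As an illustration of the before/after comparison (using SQLite's EXPLAIN QUERY PLAN as a stand-in; on Oracle you would run EXPLAIN PLAN and DBMS_XPLAN against a VDB, and the table and column names here are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
conn.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                 [(i % 100, float(i)) for i in range(1000)])

def plan(sql: str) -> str:
    """Return the optimizer's access-path summary for a query."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + sql).fetchall()
    return " ".join(str(r[-1]) for r in rows)  # last column is the plan detail

query = "SELECT total FROM orders WHERE customer_id = 42"
before = plan(query)                                   # full scan of the table
conn.execute("CREATE INDEX idx_orders_cust ON orders(customer_id)")
after = plan(query)                                    # index lookup
print(before)
print(after)
```

Dropping the index in the copy and re-checking the plan answers the opposite question just as quickly, with production untouched.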
Some bio: 30 years in IT, 25 years in Oracle, 20 years as a performance-tuning DBA and on-and-off Apps SysAdmin; books, ACED, OakTable; joined Delphix last May. Why?
DevOps. Increasing the tempo of application development. Supporting agile development methods. Shorter delivery cycles and continuous delivery. It isn’t just people working harder, it needs some magic.
Between a rock and a hard place
Due to the constraints of building cloned copies of database environments, one ends up in a "culture of no,"
where developers stop asking for a copy of a production database because the answer is always "no."
If developers need to debug an anomaly seen in production, or need to write a custom module that requires a copy of production, they know not to even ask, and simply give up.
Before we proceed, there are a few terms that will come up frequently that we should define.
Within our production environment, we will frequently refer to the “source host” or “source database.”
We will also refer frequently to the Delphix Server, and to something called a dSource.
Finally in our pre-production environments we will talk about the “target host” and “VDBs.”
[read the slide]
[read the slide]
[read the slide]
Note that no Oracle processes run on the Delphix server.
[read the slide]
The dSource is typically a fraction of the size of its production database, due to several storage optimizations made by Delphix.
[read the slide]
[read the slide]
I should emphasize the first sentence again: VDBs are fully functional databases. They look like databases; their data files appear to be the same size as they would be physically. They can be backed up. They can be restored to. To end users and DBAs alike, they should be indistinguishable from any other database whose data files and logs happen to be stored on NFS.
With Delphix, the refresh process changes.
The refresh is accomplished by a few clicks in the Delphix GUI, via a command line interface, or via an automatic policy.
Instead of requiring a full terabyte of storage for the development server, we now require a fraction of that to store a compressed copy of the production database within Delphix. This becomes the basis of the storage for our new Virtual Database (VDB) in the development environment, and is provided to the development host via NFS.
As the development users begin to change their virtual database, only the changed blocks need be stored back within the Delphix server.
To summarize, the storage requirements to make a single copy have dropped from “as large as production” to “a fraction of production,” and the provisioning process is simple. Even better, additional copies require only a few changed blocks to be stored, so the more copies we make, the better the space savings become.
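A back-of-envelope model of that storage math, with illustrative block counts (this sketches generic copy-on-write block sharing, not Delphix's actual implementation, and ignores the compression of the baseline copy):

```python
class BlockStore:
    """Toy copy-on-write store: the baseline blocks are shared by every
    virtual copy; each copy privately stores only the blocks it changes."""
    def __init__(self, base_blocks: dict):
        self.base = dict(base_blocks)   # e.g. the copy of production inside the appliance
        self.copies = {}                # copy name -> {block_id: new contents}

    def provision(self, name: str):
        self.copies[name] = {}          # a new virtual copy costs ~zero extra blocks

    def write(self, name: str, block_id: int, data: bytes):
        self.copies[name][block_id] = data

    def read(self, name: str, block_id: int) -> bytes:
        # a copy sees its own changed blocks, and the shared baseline otherwise
        return self.copies[name].get(block_id, self.base.get(block_id))

    def blocks_stored(self) -> int:
        return len(self.base) + sum(len(c) for c in self.copies.values())

prod = {i: b"x" for i in range(10_000)}     # pretend production has 10k blocks
store = BlockStore(prod)
for dev in ("dev1", "dev2", "dev3"):
    store.provision(dev)                    # three "developer databases"
store.write("dev1", 7, b"patched")          # one developer changes one block
full_copies = 4 * len(prod)                 # what 1 baseline + 3 physical clones would cost
print(store.blocks_stored(), "vs", full_copies)  # → 10001 vs 40000
```

Each additional copy adds only its changed blocks, which is why the space savings improve as the number of copies grows.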