Monitoramento Inteligente utilizando o ZABBIX - Luiz Andrade
Zabbix is a powerful tool for monitoring the IT resources that form part of the living organism sustaining every company's business.
Zabbix offers distributed, real-time monitoring with a web administration interface. It lets you view the health of any host on a monitored IP network from a single vantage point. Among the many items it can track, the most notable are hardware and software resource usage, such as CPU, memory, storage utilization, and running processes.
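To give a flavor of the item types listed above, a tiny stand-alone collector can be sketched with a few stdlib calls. This is not Zabbix agent code; the dictionary keys below merely imitate Zabbix-style item keys for illustration.

```python
import os
import shutil

def collect_items(path="/"):
    """Gather a few host-health items similar to what a monitoring
    agent reports: CPU load averages and storage utilization."""
    load1, load5, load15 = os.getloadavg()   # 1/5/15-minute load averages
    usage = shutil.disk_usage(path)          # total/used/free bytes for the filesystem
    return {
        "system.cpu.load[avg1]": load1,
        "vfs.fs.size[/,pused]": 100.0 * usage.used / usage.total,
    }

print(collect_items())
```

A real agent would report such values to the server on a schedule, where triggers evaluate them against thresholds.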
Building Cloud Native Applications with Oracle Autonomous Database - Oracle Developers
This document discusses building cloud native applications with Oracle Autonomous Database. It provides an overview of:
1) The evolution of computing and development from monolithic to cloud native applications.
2) The challenges of managing databases with microservices, and how Oracle Autonomous Database can serve as a single database for all development needs.
3) How to build, deploy, and manage cloud native applications using Oracle Cloud Infrastructure services like the Container Engine for Kubernetes, Functions, and the Autonomous Transaction Processing database.
Monitoramento e Gerenciamento de Infraestrutura com Zabbix - Patrícia Ladislau Silva
The document discusses infrastructure monitoring and management with Zabbix, including:
1) The importance of monitoring for identifying incidents and problems;
2) Why to use Zabbix, a free, open-source network monitoring management system;
3) A brief history of Zabbix and an overview of its main components and data collection methods.
Slides from the talk given at QCon 2019 on Kubernetes, with a deep dive into its components (apiserver, scheduler, ingress, etc.) and the cluster objects.
ELB를 활용한 Socket.IO 멀티노드 구축사례 (A Multi-Node Socket.IO Deployment Using ELB) - Anson Park
The document discusses using Elastic Load Balancing (ELB) to build a multi-node socket.io architecture. It describes implementing socket.io on a single node initially for an MVP, then adding additional socket.io nodes behind an ELB load balancer for scaling. Key challenges discussed include ensuring sticky sessions work across nodes and enabling messaging between nodes, which is solved using Redis. The architecture is deployed using CodeDeploy and auto-scaling is suggested for future growth.
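The inter-node messaging problem the deck solves with Redis can be sketched with a toy in-memory pub/sub bus standing in for Redis. All class and method names here are invented for illustration; a real deployment would use the socket.io Redis adapter.

```python
from collections import defaultdict

class Bus:
    """Toy stand-in for Redis pub/sub: every node subscribes, so a message
    published on any node reaches clients connected to all nodes."""
    def __init__(self):
        self.subscribers = []
    def subscribe(self, callback):
        self.subscribers.append(callback)
    def publish(self, message):
        for cb in self.subscribers:
            cb(message)

class SocketNode:
    """One socket.io server behind the ELB; sticky sessions pin each
    client to a single node, so cross-node fan-out goes through the bus."""
    def __init__(self, name, bus):
        self.name = name
        self.clients = []            # messages delivered to local clients
        self.bus = bus
        bus.subscribe(self.deliver)
    def deliver(self, message):
        self.clients.append(message)
    def broadcast(self, message):
        # Publish via the bus so clients on *other* nodes also receive it.
        self.bus.publish(message)

bus = Bus()
a, b = SocketNode("node-a", bus), SocketNode("node-b", bus)
a.broadcast("hello")
print(b.clients)   # the broadcast reached node-b's clients too
```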
RMAN uses backups to clone databases, which takes time and storage space. Delphix clones databases virtually by linking to a source and sharing blocks, allowing near-instant clones that use minimal storage. The document compares RMAN and Delphix approaches to cloning databases for development environments.
Regulatory compliance is a major challenge for banks that requires significant resources. New regulations are constantly emerging in areas like anti-money laundering and privacy, and non-compliance can result in large fines. Using data effectively is key to compliance but current practices of copying and moving large amounts of data are risky, slow, and expensive. Data virtualization provides a better approach by automating data delivery, masking, and testing to help banks respond faster to regulatory demands while reducing costs and risks of non-compliance.
The document discusses the GDPR requirements for data masking and pseudonymization. It provides context on the GDPR and how it aims to update privacy laws for a modern, digital world. The GDPR introduces legal definitions for pseudonymization, which refers to approaches like data masking that secure personal data in such a way that indirect identifiers are still protected. It highlights how data masking technologies can help companies comply with the GDPR while maintaining data quality for analysis. Companies that fail to implement appropriate measures like pseudonymization could face fines up to 4% of global turnover under the GDPR.
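Pseudonymization in the sense used here can be sketched with keyed hashing: each direct identifier is replaced by a token that is stable (so joins and analysis still work) but cannot be reversed without the key. A minimal stdlib sketch under those assumptions, not a compliance tool:

```python
import hmac
import hashlib

SECRET_KEY = b"store-me-in-a-vault"   # hypothetical key, kept separate from the data

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, non-reversible token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

row = {"name": "Alice Example", "email": "alice@example.com", "balance": 1042}
# Mask the identifiers; keep the analytic fields intact for reporting.
masked = {**row,
          "name": pseudonymize(row["name"]),
          "email": pseudonymize(row["email"])}
print(masked["balance"], masked["email"] != row["email"])
```

Because the same input always yields the same token, masked datasets remain joinable across tables, which is the data-quality property the deck emphasizes.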
Virtual Data: Eliminating the data constraint in Application Development - Kyle Hailey
Virtual data provided by Delphix can eliminate data as a constraint in application development by enabling:
1) Fast provisioning of full-sized development databases in minutes from production data without moving large amounts of data. This allows development and testing to parallelize and find bugs earlier.
2) Self-service access to consistent, masked data for multiple use cases like development, security and cloud migration. Masking only needs to be done once before cloning databases.
3) Optimized data movement to the cloud through compression, encryption and replication of thin cloned data sets 1/3 the size of full production databases. This improves cloud migration and enables active-active disaster recovery across sites.
The document discusses Oracle's ZS3 series enterprise storage systems. It provides an overview of Oracle's approach to driving storage system evolution from hardware-defined to software-defined. It then summarizes the key features and benefits of the ZS3 series, including extreme performance, integrated analytics, and optimization for Oracle software.
ZFS is a filesystem developed for Solaris that provides features like cheap snapshots, replication, and checksumming. It can be used for databases. While it has benefits, random writes become sequential, which can hurt performance. The OpenZFS project continues developing ZFS and improved the I/O scheduler to provide smoother write latency compared to the original ZFS write throttle. Tuning parameters in OpenZFS give better control over throughput and latency. Measuring performance is important for optimizing ZFS for database use.
Oracle LOB Internals and Performance Tuning - Tanel Poder
The document discusses a presentation on tuning Oracle LOBs (Large Objects). It covers LOB architecture including inline vs out-of-line storage, LOB locators, inodes, indexes and segments. The presentation agenda includes introduction, storing large content, LOB internals, physical storage planning, caching tuning, loading LOBs, development strategies and temporary LOBs. Examples are provided to illustrate LOB structures like locators, inodes and indexes.
DBTA Data Summit: Eliminating the data constraint in Application Development - Kyle Hailey
1) The document discusses how data constraints are a major problem in application development. It slows down development cycles and leads to bugs. The proposed solution is using virtual data techniques to eliminate the need to move and manage physical copies of data.
2) Key use cases of virtual data techniques discussed are faster development, enhanced security through data masking, and easier cloud migration by reducing data movement. Virtual data allows instant provisioning of development environments and fast refresh of test data.
3) Customers reported benefits like cutting development cycles in half and reducing time to roll out new insurance products from 50 days to 23 days when using virtual data techniques.
This document summarizes the findings of a 2015 study on product team performance. It discusses the respondents to the survey, which were primarily people involved in product development from technology, services, and consumer products companies. It then outlines key findings on product team dynamics, including trends in development methodologies and job satisfaction levels. Specifically, it finds that agile adoption may be leveling off while satisfaction remains high. The document also identifies four factors that contribute to high performance: strategic decision making ability, frequent standup meetings, quick problem resolution, and involvement of user experience professionals.
WANTED: Seeking Single Agile Knowledge Development Tool-set - Brad Appleton
by Brad Appleton,
Presented August 2009 at the Agile 2009 Conference; Chicago, IL USA
What tools and capabilities are necessary to apply Agile development concepts+practices (such as refactoring, TDD, CI, etc.) to all knowledge-artifacts? (not just source-code).
This document discusses continuous delivery and its components of continuous integration and continuous deployment. Continuous integration involves frequently integrating code changes. Continuous deployment automates deploying integrated code to testing environments and enables easy deployment to production. Continuous delivery provides the ability to easily and quickly release new features to customers at any time by automating deployments that pass testing in under 5 minutes and allowing quick rollbacks. The document provides advice on implementing continuous delivery including splitting monolithic applications, enabling continuous integration and deployment, establishing solid testing strategies, and using tools like TeamCity, Artifactory, Chef and Vagrant.
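The release flow described above, automated deploys gated by tests with a quick rollback path, can be sketched as a toy pipeline. The stage ordering mirrors the text; the function names and return strings are invented for illustration.

```python
def run_pipeline(stages, deploy, rollback):
    """Run CI stages in order; deploy only if all pass, roll back on a bad deploy."""
    for name, stage in stages:
        if not stage():
            return f"failed at {name}, nothing deployed"
    try:
        deploy()
        return "deployed"
    except Exception:
        rollback()               # quick rollback keeps production healthy
        return "rolled back"

stages = [("integrate", lambda: True), ("test", lambda: True)]

def bad_deploy():
    raise RuntimeError("smoke test failed")

print(run_pipeline(stages, bad_deploy, lambda: None))   # -> rolled back
```

Real pipelines in tools like TeamCity express the same gating declaratively, but the control flow is the same: no green tests, no deploy; a bad deploy triggers an automated rollback.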
This document discusses database virtualization and instant cloning technologies. It begins by outlining the challenges businesses face with growing databases and increasing demands for copies from developers, reporting teams, etc. It then covers three main parts:
1) Cloning technologies including physical cloning, thin provision cloning using file system snapshots, and database virtualization.
2) How these technologies can accelerate businesses by enabling faster development, testing, recovery and reporting.
3) Specific use cases like development acceleration through frequent, full clones; branching for rapid QA; recovery and testing capabilities; and enabling fast data refreshes for reporting.
Slides of the "In The Brains" talk given at SkillsMatter on the 28th of October 2014.
The use of test doubles in testing at various levels has become commonplace; however, correct usage is far less common. In this talk Giovanni Asproni shows the most common and serious mistakes he has seen in practice and gives some hints on how to avoid them (or fix them in existing code).
John Beeston presented on overcoming challenges of implementing continuous delivery and agile methods for data warehouses. He discussed people, process, and technology challenges including culture change, breaking down project gates, switching to agile, and implementing continuous integration. Next steps include scaling up with DevOps, infrastructure automation using cloud and configuration tools, and focusing on test-driven development, dataset management, and code automation.
The document discusses challenges with application rationalization and modernization projects. It notes that such projects carry high risks of delays and failures due to issues like internal politics, workload coexistence, and inaccurate savings expectations. Additionally, obtaining and managing data for testing during these projects can be very difficult and expensive due to the large amounts of storage needed. The Delphix Modernization Engine is presented as a solution to help mitigate these risks and challenges. It does so through capabilities like virtualizing data to reduce storage needs, efficiently synchronizing data between environments, and providing automated data services.
Software Configuration Management: Problemas e Soluções - elliando dias
The document discusses problems and solutions related to software configuration management. It presents the basic concepts of configuration management and classic problems such as communication failures, shared data, and multiple maintenance. It also covers solutions such as standardization, version control systems, and processes, as well as less common problems like unstable codelines and maintenance in production.
Trustworthy Transparency and Lean Traceability - Brad Appleton
This document summarizes Brad Appleton's presentation on traceability at the COMPSAC 2006 conference. It discusses lean traceability and achieving transparency while minimizing waste. It covers topics like the seven wastes of software development, facets of traceability, orders of ignorance, values of agility, drivers for traceability, objectives of traceability, principles of lean development, and comparing waterfall and iterative lifecycles. The overarching goals are achieving trustworthy transparency through lean practices while responding quickly to change.
Testing Delphix: easy data virtualization - Franck Pachot
The document summarizes the author's testing of the Delphix data virtualization software. Some key points:
- Delphix allows users to easily provision virtual copies of database sources on demand for tasks like testing, development, and disaster recovery.
- It works by maintaining incremental snapshots of source databases and virtualizing the data access. Copies can be provisioned in minutes and rewound to past points in time.
- The author demonstrated provisioning a copy of an Oracle database using Delphix and found the process very simple. Delphix integrates deeply with databases.
- Use cases include giving databases to each tester/developer, enabling continuous integration testing, creating QA environments with real
This document discusses using virtualization and containers to improve database deployments in development environments. It notes that traditional database deployments are slow, taking 85% of project time for creation and refreshes. Virtualization allows for more frequent releases by speeding up refresh times. The document discusses how virtualization engines can track database changes and provision new virtual databases in seconds from a source database. This allows developers and testers to self-service provision databases without involving DBAs. It also discusses how virtualization and containers can optimize database deployments in cloud environments by reducing storage usage and data transfers.
Delphix is a software appliance that provides database virtualization. It allows organizations to provision multiple virtual copies of a source database across different environments like development, testing, and QA. Delphix takes upfront and incremental snapshots of the source database, compresses and stores the data, and provisions virtual databases by mapping the blocks onto target systems. This eliminates redundant storage of database data and improves performance as the virtual databases can share cached blocks. Delphix also enables provisioning databases from different points in time through its "TimeFlow" feature to support activities like testing releases and bug fixes.
The document discusses dNFS (Direct NFS) configuration for Oracle databases. It provides examples of dNFS performance compared to NFS, showing that dNFS can provide higher throughput and lower latency. It also discusses investigating performance differences using tools like perf and analyzing network performance factors like TCP window size.
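Direct NFS is configured through an oranfstab file that maps server paths to mount points. A minimal fragment is sketched below; the server name, addresses, and paths are illustrative only, and the exact options accepted vary by database release, so the Oracle installation guide for the version in use is the authority.

```
server: dnfs-filer-01
local: 192.0.2.10
path: 192.0.2.20
export: /export/oradata mount: /u02/oradata
```

Here `local` is the client-side interface, `path` the filer address, and `export`/`mount` tie the NFS export to the local mount point; multiple `path` lines let dNFS spread I/O across several network paths.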
Kellyn Pot’Vin-Gorman discusses DevOps tools for winning agility. She emphasizes that while many organizations automate testing, the DevOps journey is longer and involves additional steps like orchestration between environments, security, collaboration, and establishing a culture of continuous improvement. She also stresses that organizations should not forget about managing their data as part of the DevOps process and advocates for approaches like database virtualization to help enhance DevOps initiatives.
This document outlines the agenda for a training on Oracle RDBMS 12c new features. The training will cover 6 chapters: introduction, multitenant architecture, upgrade features, Flex Cluster, Global Data Service, and an overview of RDBMS features. The agenda provides a high-level overview of topics to be discussed in each chapter, including multitenant architecture concepts, upgrade options and tools, Flex Cluster configurations, Global Data Service components, and new features such as temporary undo and multiple indexes on the same columns.
The document discusses using data virtualization and masking to optimize database migrations to the cloud. It notes that traditional copying of data is inefficient for large environments and can incur high data transfer costs in the cloud. Using data virtualization allows creating virtual copies of production databases that only require a small storage footprint. Masking sensitive data before migrating non-production databases ensures security while reducing costs. Overall, data virtualization and masking enable simpler, more secure, and cost-effective migrations to cloud environments.
The current trends toward Agile and DevOps are challenging for database developers. Source control is standard for non-database code, but it remains a challenge for databases. This talk aims to change that situation and help developers and DBAs take control of source code and data.
LinkedIn leverages the Apache Hadoop ecosystem for its big data analytics. Steady growth of the member base at LinkedIn along with their social activities results in exponential growth of the analytics infrastructure. Innovations in analytics tooling lead to heavier workloads on the clusters, which generate more data, which in turn encourage innovations in tooling and more workloads. Thus, the infrastructure remains under constant growth pressure. Heterogeneous environments embodied via a variety of hardware and diverse workloads make the task even more challenging.
This talk will tell the story of how we doubled our Hadoop infrastructure twice in the past two years.
• We will outline our main use cases and historical rates of cluster growth in multiple dimensions.
• We will focus on optimizations, configuration improvements, performance monitoring and architectural decisions we undertook to allow the infrastructure to keep pace with business needs.
• The topics include improvements in HDFS NameNode performance, and fine tuning of block report processing, the block balancer, and the namespace checkpointer.
• We will reveal a study on the optimal storage device for HDFS persistent journals (SATA vs. SAS vs. SSD vs. RAID).
• We will also describe Satellite Cluster project which allowed us to double the objects stored on one logical cluster by splitting an HDFS cluster into two partitions without the use of federation and practically no code changes.
• Finally, we will take a peek at our future goals, requirements, and growth perspectives.
SPEAKERS
Konstantin Shvachko, Sr Staff Software Engineer, LinkedIn
Erik Krogen, Senior Software Engineer, LinkedIn
The Rise of DataOps: Making Big Data Bite Size with DataOps - Delphix
Marc embraces database virtualization and containerization to help Dave's team adopt DataOps practices. This allows team members to access self-service virtual test environments on demand. It increases data accessibility by 10%, resulting in over $65 million in additional income. DataOps removes the biggest barrier by automating and accelerating data delivery to support fast development and testing cycles.
This document discusses virtualizing big data in the cloud using Delphix data virtualization software. It begins with an introduction of the presenter and their background. It then discusses trends in cloud adoption, including how most enterprises now use a hybrid cloud strategy. It also discusses how big data projects are increasingly being deployed in the cloud. The document demonstrates how Delphix can be used to virtualize flat files containing big data, eliminating duplication and enabling features like snapshots and cloning. It shows how files can be provisioned from a source to targets, including the cloud, and refreshed or rewound when needed. In summary, the document illustrates how Delphix virtualizes big data files to simplify deployment and management in cloud environments.
Number 8 in our Top 10 DB2 Support Nightmares series. This month we take a look at what happens when organisations are not able to keep up to date with the latest DB2 technology.
Andrew Ryan describes how Facebook operates Hadoop to provide access as a shared resource between groups.
More information and video at:
http://developer.yahoo.com/blogs/hadoop/posts/2011/02/hug-feb-2011-recap/
1) The document discusses performance testing in the cloud for Oracle Database upgrades, utilities, cloud migrations, and patching. It provides an overview of common testing challenges and how to address them when testing in the cloud.
2) Tools like SQL Performance Analyzer, Database Replay, and Real Application Testing are included with some cloud database offerings and can help with testing in the cloud. Data subsetting techniques and using snapshot standbys are also discussed.
3) Repeatable testing is important, and restoring to guaranteed restore points or using snapshot standbys allows restoring the database to a known state before and after tests. Statistics need to be refreshed after restoring to ensure accurate optimizer statistics.
NetApp: Managing Big Workspaces with Storage Magic - Perforce
The document describes how NetApp FlexClone technology can be used with Perforce to quickly clone large workspaces in minutes rather than hours. FlexClone allows instant clones of data volumes that only use additional storage space when data blocks are modified. The steps outlined include creating a FlexClone volume from a snapshot of a template workspace, changing file ownership, configuring the Perforce client, and using commands like "p4 flush" to populate the new workspace instantly. This approach improves developer productivity over traditional slow methods of populating workspaces.
The document discusses techniques for compacting, compressing, and de-duplicating data in Domino applications to reduce storage usage and improve performance. It covers compacting databases, compressing design elements, documents, and attachments, using DAOS to store attachments externally, and tools for defragmenting files.
6 Ways to Solve Your Oracle Dev-Test Problems Using All-Flash Storage and Cop... - Catalogic Software
By combining all-flash storage with copy data management, you can provision timely, space-efficient, masked Oracle copies both easily and automatically.
This document discusses database cloning using copy-on-write technologies like thin cloning to minimize storage usage. It describes how traditional cloning requires fully copying database files versus thin cloning which only writes modified blocks. Methods covered include CloneDB, Snap Manager Utility, ZFSSAADM, and cloning pluggable databases using ZFS and ACFS snapshots. Direct NFS is highlighted as an optimal network storage solution for database cloning.
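The storage saving from thin cloning comes from copy-on-write bookkeeping: a clone starts out referencing the parent's blocks and only stores the blocks it modifies. A toy model of that mechanism (not any vendor's implementation):

```python
class ThinClone:
    """Toy copy-on-write clone: reads fall through to the parent's
    blocks until a block has been written locally."""
    def __init__(self, parent_blocks):
        self.parent = parent_blocks   # shared with the source, never copied
        self.local = {}               # only modified blocks are stored here
    def read(self, block_no):
        return self.local.get(block_no, self.parent[block_no])
    def write(self, block_no, data):
        self.local[block_no] = data   # first write allocates the block

source = {n: f"block-{n}" for n in range(1000)}   # stand-in "production" datafile
clone = ThinClone(source)
clone.write(7, "changed")
print(clone.read(7), clone.read(8), len(clone.local))  # changed block-8 1
```

After one write, the clone consumes one block of new storage while still presenting the full 1000-block datafile, which is why thin clones of large databases can be provisioned almost for free.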
1. The document discusses various methods for falling back or rolling back a database after an upgrade or migration, including backup, Flashback, downgrade, Data Pump, and GoldenGate.
2. Each method has advantages and limitations in terms of data loss, ability to use after going live, level of downtime required, and whether a phased migration is possible.
3. Backup should always be used but is not a primary fallback method due to restoration time. Flashback provides an easy rollback with no data loss but requires specific prerequisites. Downgrade reverts the data dictionary to a previous release.
Similar to Delphix for DBAs by Jonathan Lewis
Hooks in PostgreSQL by Guillaume Lelarge - Kyle Hailey
Hooks in PostgreSQL allow extending functionality by intercepting and modifying PostgreSQL's internal execution flow. There are several types of hooks for different phases like planning, execution, security. Hooks are function pointers that extensions can set to run custom code. This allows monitoring and modifying queries and user actions like login. Examples show how to use hooks to log queries, profile functions, or check passwords. Hooks require installing and uninstalling functions to set the pointers.
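PostgreSQL's hooks are C function pointers that extensions set in `_PG_init` and chain through; the control flow can be mimicked in Python to show the mechanism (all names below are invented for the sketch, not PostgreSQL symbols):

```python
# Mimics a hook such as an executor hook: if the "pointer" is set, the
# server calls it instead of the standard routine, and a well-behaved
# hook chains onward to the previous implementation.
executor_hook = None              # the "function pointer", unset by default

def standard_executor(query):
    return f"rows for {query}"

def run_query(query):
    impl = executor_hook or standard_executor
    return impl(query)

log = []
def logging_hook(query):          # extension code: log the query, then chain
    log.append(query)
    return standard_executor(query)

executor_hook = logging_hook      # what an extension would do at load time
print(run_query("SELECT 1"), log)
```

The real mechanism differs only in language: the extension saves the previous pointer value before overwriting it, so multiple extensions can stack their hooks.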
Performance Insights is a service that provides visibility into the performance of Amazon RDS databases. It monitors database load and average active sessions to identify potential bottlenecks. The dashboard allows users to filter metrics by time frame, SQL query, user, host, and other attributes to help diagnose performance issues across different database engines like Amazon Aurora and MySQL.
This document outlines the history of database monitoring from 1988 to the present. It describes early monitoring tools like Utlbstat/Utlestat from 1988-1990 that used ratios and averages. Patrol was one of the first database monitors introduced in 1993. M2 from 1994 introduced light-weight monitoring using direct memory access and sampling. Wait events became a key focus area from 1995 onward. Statspack was introduced in 1998 and provided more comprehensive monitoring than previous tools. Spotlight in 1999 made database problem diagnosis very easy without manuals. Later versions incorporated improved graphics, multi-dimensional views of top consumers, and sampling for faster problem identification.
ASH Masters: Advanced ASH Analytics on Oracle - Kyle Hailey
The document discusses database performance tuning. It recommends using Active Session History (ASH) and sampling sessions to identify the root causes of performance issues like buffer busy waits. ASH provides key details on sessions, SQL statements, wait events, and durations to understand top resource consumers. Counting rows in ASH approximates time spent and is important for analysis. Sampling sessions in real-time can provide the SQL, objects, and blocking sessions involved in issues like buffer busy waits.
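The "counting rows approximates time" idea works because ASH samples active sessions at a fixed interval (one second), so each sampled row stands for roughly one second of activity. A toy aggregation over fabricated ASH-style samples:

```python
from collections import Counter

# Fabricated samples: one row per active session per 1-second sample.
samples = [
    {"sql_id": "a1", "event": "buffer busy waits"},
    {"sql_id": "a1", "event": "buffer busy waits"},
    {"sql_id": "a1", "event": "ON CPU"},
    {"sql_id": "b2", "event": "db file sequential read"},
]

# Counting rows per event approximates seconds spent in that event.
time_by_event = Counter(s["event"] for s in samples)
print(time_by_event.most_common(1))   # top consumer: ~2s of buffer busy waits
```

Grouping the same rows by `sql_id`, session, or object instead of event is how ASH pinpoints the top consumers behind an issue like buffer busy waits.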
Successfully convince people with data visualization - Kyle Hailey
Successfully convince people with data visualization
video of presentation available at https://www.youtube.com/watch?v=3PKjNnt14mk
from Data by the Bay conference
Accelerate Development with Virtual Data - Kyle Hailey
This document summarizes best practices for application development using data virtualization to remove data as a constraint. It discusses how data management currently does not scale with agile development and is a major bottleneck. The solution presented is using a data virtualization appliance to create thin clones from production data for development, QA, and test environments. This allows for self-service provisioning of environments and parallel development. It provides use cases showing how virtual data improves development throughput, shifts testing left to find bugs earlier, and enables continuous delivery of features to production.
Mark Farnam: Minimizing the Concurrency Footprint of Transactions - Kyle Hailey
The document discusses minimizing the concurrency footprint of transactions by using packaged procedures. It recommends instrumenting all code, including PL/SQL, for performance monitoring. It provides examples of submitting trivial transactions using different methods like sending code from the client, sending a PL/SQL block, or calling a stored procedure. Calling a stored procedure is preferred as it avoids re-parsing and re-sending code and allows instrumentation to be added without extra network traffic.
The document discusses security considerations for installing and configuring an Oracle Exadata Database Machine. It recommends preparing for installation by collecting security requirements, subscribing to security alerts, and reviewing installation guidelines. During installation, it advises implementing available security features like the "Resecure Machine" step to tighten permissions and passwords. Post-deployment, it suggests addressing any site-specific security needs like changing default passwords and validating policies.
Martin Klier: Volkswagen for Oracle Guys - Kyle Hailey
Martin Klier of Performing Databases GmbH gave a Ted Talk at the Oak Table World 2015 conference about how Oracle database administrators are like Volkswagen cars. He compared different aspects of maintaining Oracle databases to maintaining Volkswagens, noting both require regular maintenance to ensure optimal performance. The talk referenced NOx emissions and concluded that as IT professionals, database administrators have power and a responsibility to use it wisely.
This document provides an overview of DevOps. It begins by describing the waterfall development process and its limitations in meeting goals and deadlines. It then introduces Agile as an improvement over waterfall by allowing for more frequent testing and deployment. The document discusses how Continuous Delivery takes Agile further by aiming to deploy new features continuously. It states that DevOps is required to fully achieve Continuous Delivery. DevOps is defined as achieving a fast flow of features from development to operations to customers. The top constraints preventing this flow are identified as development environments, testing environments, code architecture, development speed, and product management.
This document discusses using data virtualization to accelerate application projects by 50%. It outlines some common problems with physical data copies, such as bottlenecks, bugs due to old data, difficulty creating subsets, and delays. The document then introduces the concept of using a data virtualization appliance to take snapshots of production data and create thin clones for development and testing environments. This allows for fast, full-sized, self-service clones that can be refreshed quickly. Use cases discussed include improved development and testing workflows, faster production support like recovery and migration, and enabling continuous business intelligence functions.
Data Virtualization: Revolutionizing data cloningKyle Hailey
This document discusses data virtualization and its use in DevOps. It begins by explaining that data virtualization, also known as copy data management, is becoming more common. It then discusses how data virtualization enables DevOps practices like continuous integration by allowing fast provisioning of full database environments.
The document outlines some of the typical challenges with traditional database architectures, including long setup times, lack of parallel environments, and high storage costs due to many full database copies. It presents data virtualization as a solution, allowing instant provisioning of thin clones from a production database. Finally, it provides examples of how data virtualization can help with development/QA, production support, and business intelligence use cases.
The document discusses using data virtualization to address the constraint of data in DevOps workflows. It describes how traditional database cloning methods are inefficient and consume significant resources. The solution presented uses thin cloning technology to take snapshots of production databases and provide virtual copies for development, QA, and other environments. This allows for unlimited, self-service virtual databases that reduce bottlenecks and waiting times compared to physical copies.
Denver devops : enabling DevOps with data virtualizationKyle Hailey
This document discusses how data constraints can limit DevOps efforts and proposes a solution using virtual data and thin cloning. It notes that moving and copying production data is challenging due to storage, personnel and time requirements. This typically results in bottlenecks, long wait times for environments, code check-ins and production bugs. The solution presented is to use a data virtualization platform that can take thin clones of production data using file system snapshots, compress the data and share it across environments through a centralized cache. This allows self-service provisioning of database environments and accelerates DevOps processes.
Oracle Open World 2014: Lies, Damned Lies, and I/O Statistics [ CON3671]Kyle Hailey
The document discusses analyzing I/O performance and summarizing lessons learned. It describes common tools used to measure I/O like moats.sh, strace, and ioh.sh. It also summarizes the top 10 anomalies encountered like caching effects, shared drives, connection limits, I/O request consolidation and fragmentation over NFS, and tiered storage migration. Solutions provided focus on avoiding caching, isolating workloads, proper sizing of NFS parameters, and direct I/O.
Oaktable World 2014 Toon Koppelaars: database constraints polite excuseKyle Hailey
The document discusses validation execution models for SQL assertions. It proposes moving from less efficient models that evaluate all assertions for every change (EM1) to more efficient models. Later models (EM3-EM5) evaluate only assertions involving changed tables, columns or literals based on parsing the assertion and change being made. The most efficient model (EM5) evaluates assertions only when the change transition effect potentially impacts the assertion. Overall the document argues SQL assertions could improve data quality if DBMS vendors supported more optimized evaluation models.
Profiling the logwriter and database writerKyle Hailey
The document discusses the behavior of the Oracle log writer (LGWR) process under different conditions. In idle mode, LGWR sleeps for 3 seconds at a time on a semaphore without writing out the redo log buffer. When a transaction is committed, LGWR may write the committed redo entries to disk either before or after the foreground process waits on a "log file sync" event, depending on whether LGWR has already flushed the data. The document also compares the "post-wait" and "polling" modes used for the log file sync wait.
Oaktable World 2014 Kevin Closson: SLOB – For More Than I/O!Kyle Hailey
The document discusses using SLOB (the Silly Little Oracle Benchmark) to test various Oracle database configurations and platforms. SLOB is described as a simple and predictable workload generator that allows testing the performance of databases under different conditions with minimal variability. The document outlines several potential uses of SLOB, including testing Oracle in-memory database options, multitenant architectures, and measuring the impact of database contention. It provides examples of using SLOB to analyze CPU and storage I/O performance.
Oracle Open World Thursday 230 ashmastersKyle Hailey
This document discusses database performance tuning using Oracle's ASH (Active Session History) feature. It provides examples of ASH queries to identify top wait events, long running SQL statements, and sessions consuming the most CPU. It also explains how to use ASH data to diagnose specific problems like buffer busy waits and latch contention by tracking session details over time.
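The "top wait events" idea behind those ASH queries can be illustrated without a database. The SQL below is a typical shape of such a query (against the real `v$active_session_history` view); the Python part runs the same aggregation over hypothetical sample rows so the logic is visible. This is a sketch of the general technique, not a query taken from the talk.

```python
from collections import Counter

# Typical shape of an ASH "top wait events" query (not taken from the talk).
ASH_TOP_EVENTS_SQL = """
SELECT event, COUNT(*) AS samples
FROM   v$active_session_history
WHERE  session_state = 'WAITING'
GROUP  BY event
ORDER  BY samples DESC
"""

# Hypothetical sample rows: (session_state, event). ASH samples active
# sessions once per second, so sample counts approximate time spent.
samples = [
    ("WAITING", "db file sequential read"),
    ("WAITING", "log file sync"),
    ("WAITING", "db file sequential read"),
    ("ON CPU", None),
    ("WAITING", "buffer busy waits"),
    ("WAITING", "db file sequential read"),
]

def top_wait_events(rows):
    """Count waiting samples per event, most frequent first."""
    counts = Counter(event for state, event in rows if state == "WAITING")
    return counts.most_common()

print(top_wait_events(samples))
# [('db file sequential read', 3), ('log file sync', 1), ('buffer busy waits', 1)]
```

Because each ASH row is a one-second sample of an active session, ranking events by sample count is what lets these queries surface where sessions actually spent their time.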
Hand Rolled Applicative User ValidationCode KataPhilip Schwarz
Could you use a simple piece of Scala validation code (granted, a very simplistic one too!) that you can rewrite, now and again, to refresh your basic understanding of Applicative operators <*>, <*, *>?
The goal is not to write perfect code showcasing validation, but rather, to provide a small, rough-and-ready exercise to reinforce your muscle memory.
Despite its grandiose-sounding title, this deck consists of just three slides showing the Scala 3 code to be rewritten whenever the details of the operators begin to fade away.
The code is my rough and ready translation of a Haskell user-validation program found in a book called Finding Success (and Failure) in Haskell - Fall in love with applicative functors.
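The deck's actual code is Scala 3; as a reminder of the idea it exercises, here is a rough Python analogue, with an `ap` function standing in for the `<*>` operator and errors accumulated in the `Invalid` case. The user shape and error messages are made up for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Valid:
    value: object

@dataclass(frozen=True)
class Invalid:
    errors: list

def ap(vf, va):
    """Stand-in for <*>: apply a validated function to a validated value;
    if either side is Invalid, accumulate the errors from both sides."""
    if isinstance(vf, Valid) and isinstance(va, Valid):
        return Valid(vf.value(va.value))
    errors = []
    if isinstance(vf, Invalid):
        errors += vf.errors
    if isinstance(va, Invalid):
        errors += va.errors
    return Invalid(errors)

def validate_name(s):
    return Valid(s) if s else Invalid(["name must not be empty"])

def validate_age(n):
    return Valid(n) if n >= 0 else Invalid(["age must be non-negative"])

def validate_user(name, age):
    # Valid(curried constructor) <*> validate_name(name) <*> validate_age(age)
    make_user = lambda n: lambda a: {"name": n, "age": a}
    return ap(ap(Valid(make_user), validate_name(name)), validate_age(age))

print(validate_user("Ada", 36))  # Valid(value={'name': 'Ada', 'age': 36})
print(validate_user("", -1))     # both errors are reported, not just the first
```

The point of the applicative style is visible in the second call: unlike monadic (fail-fast) validation, every failing check contributes its error to the result.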
Artificial Intelligence and XPath Extension FunctionsOctavian Nadolu
The purpose of this presentation is to provide an overview of how you can use AI from XSLT, XQuery, Schematron, or XML Refactoring operations, the potential benefits of using AI, and some of the challenges we face.
What is Augmented Reality Image Trackingpavan998932
Augmented Reality (AR) Image Tracking is a technology that enables AR applications to recognize and track images in the real world, overlaying digital content onto them. This enhances the user's interaction with their environment by providing additional information and interactive elements directly tied to physical images.
A Study of Variable-Role-based Feature Enrichment in Neural Models of CodeAftab Hussain
Understanding variable roles in code has been found to be helpful for students learning programming -- could variable roles also help deep neural models perform coding tasks? We do an exploratory study.
- These are slides of the talk given at InteNSE'23: The 1st International Workshop on Interpretability and Robustness in Neural Software Engineering, co-located with the 45th International Conference on Software Engineering, ICSE 2023, Melbourne Australia
UI5con 2024 - Keynote: Latest News about UI5 and it’s EcosystemPeter Muessig
Learn about the latest innovations in and around OpenUI5/SAPUI5: UI5 Tooling, UI5 linter, UI5 Web Components, Web Components Integration, UI5 2.x, UI5 GenAI.
Recording:
https://www.youtube.com/live/MSdGLG2zLy8?si=INxBHTqkwHhxV5Ta&t=0
Takashi Kobayashi and Hironori Washizaki, "SWEBOK Guide and Future of SE Education," First International Symposium on the Future of Software Engineering (FUSE), June 3-6, 2024, Okinawa, Japan
Measures in SQL (SIGMOD 2024, Santiago, Chile)Julian Hyde
SQL has attained widespread adoption, but Business Intelligence tools still use their own higher level languages based upon a multidimensional paradigm. Composable calculations are what is missing from SQL, and we propose a new kind of column, called a measure, that attaches a calculation to a table. Like regular tables, tables with measures are composable and closed when used in queries.
SQL-with-measures has the power, conciseness and reusability of multidimensional languages but retains SQL semantics. Measure invocations can be expanded in place to simple, clear SQL.
To define the evaluation semantics for measures, we introduce context-sensitive expressions (a way to evaluate multidimensional expressions that is consistent with existing SQL semantics), a concept called evaluation context, and several operations for setting and modifying the evaluation context.
A talk at SIGMOD, June 9–15, 2024, Santiago, Chile
Authors: Julian Hyde (Google) and John Fremlin (Google)
https://doi.org/10.1145/3626246.3653374
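The "expanded in place" property can be sketched with a toy helper (hypothetical, not the paper's implementation): treat a measure as a named calculation attached to a table, and expand each invocation into the plain SQL expression it stands for. Real measure evaluation also involves the paper's evaluation context; this naive textual substitution only illustrates the expansion idea.

```python
# Hypothetical measure definitions for a "products" table:
# measure name -> the SQL expression it stands for.
measures = {
    "avg_price": "AVG(price)",
    "order_count": "COUNT(*)",
}

def expand_measures(query: str, defs: dict) -> str:
    """Naively replace each measure invocation with its defining SQL
    expression (toy string substitution; a real implementation would
    parse the query and respect evaluation context)."""
    for name, expr in defs.items():
        query = query.replace(name, f"{expr} AS {name}")
    return query

q = "SELECT brand, avg_price FROM products GROUP BY brand"
print(expand_measures(q, measures))
# SELECT brand, AVG(price) AS avg_price FROM products GROUP BY brand
```

The expanded form is ordinary SQL, which is the paper's point: queries over tables with measures stay composable and keep SQL semantics.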
Atelier - Innover avec l’IA Générative et les graphes de connaissancesNeo4j
Go beyond the hype around AI and discover practical techniques for using AI responsibly across your organization's data. Explore how to use knowledge graphs to increase accuracy, transparency, and explainability in generative AI systems. You will leave with hands-on experience combining data relationships and LLMs to bring domain-specific context and improve reasoning.
Bring your laptop and we will guide you through setting up your own generative AI stack, providing practical, coded examples to get you started in minutes.
UI5con 2024 - Boost Your Development Experience with UI5 Tooling ExtensionsPeter Muessig
The UI5 tooling is the development and build tooling of UI5. It is built in a modular and extensible way so that it can be easily extended by your needs. This session will showcase various tooling extensions which can boost your development experience by far so that you can really work offline, transpile your code in your project to use even newer versions of EcmaScript (than 2022 which is supported right now by the UI5 tooling), consume any npm package of your choice in your project, using different kind of proxies, and even stitching UI5 projects during development together to mimic your target environment.
Need for Speed: Removing speed bumps from your Symfony projects ⚡️Łukasz Chruściel
No one wants their application to drag like a car stuck in the slow lane! Yet it’s all too common to encounter bumpy, pothole-filled solutions that slow the speed of any application. Symfony apps are not an exception.
In this talk, I will take you for a spin around the performance racetrack. We’ll explore common pitfalls - those hidden potholes on your application that can cause unexpected slowdowns. Learn how to spot these performance bumps early, and more importantly, how to navigate around them to keep your application running at top speed.
We will focus in particular on tuning your engine at the application level, making the right adjustments to ensure that your system responds like a well-oiled, high-performance race car.
OpenMetadata Community Meeting - 5th June 2024OpenMetadata
The OpenMetadata Community Meeting was held on June 5th, 2024. In this meeting, we discussed the data quality capabilities that are integrated with the Incident Manager, providing a complete solution to handle your data observability needs. Watch the end-to-end demo of the data quality features.
* How to run your own data quality framework
* What is the performance impact of running data quality frameworks
* How to run the test cases in your own ETL pipelines
* How the Incident Manager is integrated
* Get notified with alerts when test cases fail
Watch the meeting recording here - https://www.youtube.com/watch?v=UbNOje0kf6E
Odoo ERP software
Odoo ERP software, a leading open-source software for Enterprise Resource Planning (ERP) and business management, has recently launched its latest version, Odoo 17 Community Edition. This update introduces a range of new features and enhancements designed to streamline business operations and support growth.
The Odoo Community serves as a cost-free edition within the Odoo suite of ERP systems. Tailored to accommodate the standard needs of business operations, it provides a robust platform suitable for organisations of different sizes and business sectors. Within the Odoo Community Edition, users can access a variety of essential features and services essential for managing day-to-day tasks efficiently.
This blog presents a detailed overview of the features available within the Odoo 17 Community edition, and the differences between Odoo 17 community and enterprise editions, aiming to equip you with the necessary information to make an informed decision about its suitability for your business.
Different strategies for "instant" generation of metadata:
A) before updating the production block, copy it away to a new location
B) put the new production block in a new location (maybe assign an empty file and gradually fill it).
This "copy" could be something like SRDF to a remote device, with a "split mirror" operation at the remote.
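The two strategies above can be sketched with a toy, in-memory model (hypothetical, not Delphix or SRDF internals): strategy A preserves the old block before overwriting in place, while strategy B writes every new version to a fresh location and redirects a pointer.

```python
class BlockStoreA:
    """Strategy A: copy the old block away, then update in place."""
    def __init__(self):
        self.blocks = {}   # block_id -> current contents
        self.history = []  # preserved old versions: (block_id, data)

    def write(self, block_id, data):
        if block_id in self.blocks:
            # copy the existing block away before overwriting it
            self.history.append((block_id, self.blocks[block_id]))
        self.blocks[block_id] = data

class BlockStoreB:
    """Strategy B: every version goes to a new location; a pointer
    per block tracks which location is current."""
    def __init__(self):
        self.locations = []  # append-only storage
        self.current = {}    # block_id -> index into locations

    def write(self, block_id, data):
        self.locations.append(data)
        self.current[block_id] = len(self.locations) - 1
```

In both models the old version survives the update, which is what makes point-in-time metadata (and thin clones built on it) possible; they differ only in whether the extra write happens to the old data (A) or the new data (B).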
I used an ISO to install Delphix so I selected Sun Solaris 10 from the VMware list of O/S options.
At the time of speaking (Sept 2014) the latest version of Delphix is 4.2
It may be possible to store completely empty Oracle pages in the metadata entry of the block.
The Delphix-driven rman backups are "from SCN" - Delphix keeps track of the SCN reached at its previous rman call. The code takes steps to ensure that the Delphix backups don't cause confusion in the rman catalogue if you are also using rman as your primary backup mechanism.
Snapsync - for incrementals, you could do two one after the other if the typical incremental is slow: the second incremental will be small, applied quickly, and allow for faster provisioning.
Pre-Provisioning: pre-provisioning applies redo necessary to make a snapsync immediately provisionable, ahead of time. Allows for constant time provisioning in a few minutes, regardless of database size or change rate.