The document describes the author's journey of automating various Oracle database administration tasks using Python. Some key points:
1) The author started using Python to automate routine database maintenance tasks like log file management and backups that were previously done manually or via scripts.
2) Over time, the author applied object-oriented principles in Python to develop reusable modules and classes to further standardize and simplify administration of multiple databases.
3) Large projects like database migrations were made possible by building on the Python codebase, applying concepts like packages and modules to organize the code into a reusable framework. Proper documentation of the code was also emphasized.
1. The Pythonic Way crosses Oracle's Exadata
by Rainer Schuettengruber
2. about me
• IT employee since 1998
• main focus on Oracle databases
• various positions as Oracle DBA (DMA), developer, DevOps
• currently employed as Exadata Administrator
5. pre python era
(timeline: 07/14 to 07/17)
• heterogeneous environment
• Oracle on AIX, Linux on VMware, Exadata
• Oracle versions 10.2.0.4, 11.2.0.1, 11.2.0.2, 11.2.0.3, 11.2.0.4
• Oracle Cloud Control 11g on Linux
• Oracle Cloud Control 12c on AIX
• backup based on Cloud Control jobs
• backup based on TSM scheduler
• backup based on ksh scripts
• .. to take arms against a sea of troubles ..
7. the game is on
oracle@disguised:misc (TEST1) > ./tracemaint.sh -h
usage: tracemaint [-h] -d DAYS -m MAXLOGSIZE [-s SID] [-k] [--debug]
tidy up database trace and log files
optional arguments:
-h, --help show this help message and exit
-d DAYS, --days DAYS number of days that logs/traces need to be kept
-m MAXLOGSIZE, --maxlogsize MAXLOGSIZE
threshold in MB, specifying if a log file needs rotation
-s SID, --sid SID database SID, if omitted all databases configured in
/etc/oratab are considered
-k, --keeptempfiles keep files generated during logrotation, use this for
debugging purposes only
--debug log debug output
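The help output above maps onto a straightforward argparse definition. A minimal sketch, with option names taken from the usage text shown (the parsing of a sample command line is added for illustration):

```python
import argparse

def build_parser():
    # Mirrors the tracemaint usage output shown above.
    parser = argparse.ArgumentParser(
        prog="tracemaint",
        description="tidy up database trace and log files")
    parser.add_argument("-d", "--days", type=int, required=True,
                        help="number of days that logs/traces need to be kept")
    parser.add_argument("-m", "--maxlogsize", type=int, required=True,
                        help="threshold in MB, specifying if a log file needs rotation")
    parser.add_argument("-s", "--sid",
                        help="database SID; if omitted, all databases in /etc/oratab are considered")
    parser.add_argument("-k", "--keeptempfiles", action="store_true",
                        help="keep files generated during log rotation (debugging only)")
    parser.add_argument("--debug", action="store_true",
                        help="log debug output")
    return parser

# Sample invocation, as it would arrive from the command line:
args = build_parser().parse_args(["-d", "14", "-m", "100", "-s", "TEST1"])
```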
8. the game is on
• Python comes with Linux, however
• versions range, depending on the distribution, from 2.4 to 2.7
• changing the system installation might do harm, especially on Exadata
• dedicated installation under /opt, in accordance with the FHS
• moreover, Exadata does not support customized RPMs
• building from source and installation via tarball
9. the game is on
disguised:/root/svnRepo/scripts/bash/osSetup>./build_python.sh
usage : build_python.sh -p <python source tarball> -r -c
-r .. remove existing installation
-c .. create tarball from the installation
disguised:/root/svnRepo/scripts/bash/osSetup>
disguised:/root/svnRepo/scripts/bash/osSetup>./install_python.sh
usage : install_python.sh -t <python tarball>
disguised:/root/svnRepo/scripts/bash/osSetup>
10. the game is on
• SVN rather obvious
• used for deployments in combination with make files
• installation under /opt requires command-line wrappers
• which come in handy for cx_Oracle
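Such a wrapper essentially prepends the private installation's paths before handing over to the interpreter, so that cx_Oracle finds the Oracle client libraries. A hypothetical sketch of the environment handling only (all paths here are invented for illustration):

```python
import os

def build_wrapper_env(python_home="/opt/python",
                      oracle_home="/u01/app/oracle/product/11.2.0.4/dbhome_1",
                      base_env=None):
    # Prepend the private Python and the Oracle client libraries (needed by
    # cx_Oracle) to the environment, leaving the system Python untouched.
    env = dict(base_env if base_env is not None else os.environ)
    env["PATH"] = python_home + "/bin:" + env.get("PATH", "")
    env["LD_LIBRARY_PATH"] = oracle_home + "/lib:" + env.get("LD_LIBRARY_PATH", "")
    return env

env = build_wrapper_env(base_env={"PATH": "/usr/bin"})
# A real wrapper would then exec the private interpreter with this environment,
# e.g. os.execve("/opt/python/bin/python", sys.argv, env)
```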
12. the game is on
import logging
from logging.handlers import RotatingFileHandler
import argparse
import sys
import os
import subprocess
import glob
import re
from datetime import datetime
import cx_Oracle
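The import list above suggests the shape of the script: argparse for the CLI, subprocess/glob/re for the file handling, cx_Oracle for the database, and a RotatingFileHandler so the maintenance script does not create a log problem of its own. A minimal sketch of the logging part (logger name, file name, and size limits are assumptions):

```python
import logging
from logging.handlers import RotatingFileHandler
import os
import tempfile

def setup_logging(logfile, debug=False):
    # Rotate the script's own log at 1 MB, keeping 5 old copies.
    handler = RotatingFileHandler(logfile, maxBytes=1_000_000, backupCount=5)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s: %(message)s"))
    logger = logging.getLogger("tracemaint")  # hypothetical logger name
    logger.setLevel(logging.DEBUG if debug else logging.INFO)
    logger.addHandler(handler)
    return logger

logfile = os.path.join(tempfile.mkdtemp(), "tracemaint.log")
log = setup_logging(logfile, debug=True)
log.info("starting trace maintenance")
```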
13. OOP appears on the stage
• SAP requires a dedicated toolset for database maintenance
• basically 4 components
• separate for each database
• installation and patching rather tedious for 20+ databases
• implementation reminiscent of the DRY principle
14. OOP appears on the stage
15. OOP appears on the stage
• one Python file per class; used modules without realizing it
• patching became a matter of minutes as opposed to hours
• off the beaten script track
• peer cluster nodes taken into consideration
16. OOP appears on the stage
import shutil
from pwd import getpwuid
from pexpect import pxssh
import paramiko
from paramiko import SSHClient
from scp import SCPClient
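These imports point at paramiko and scp for reaching the peer cluster nodes. Without reproducing the talk's actual code, the same idea can be sketched with the stdlib alone by driving the OpenSSH client through subprocess; the node names, paths, and the injectable `run` parameter are purely illustrative:

```python
import subprocess

def scp_command(local_path, remote_path, node, user="oracle"):
    """Build the scp command used to push a file to one peer node."""
    return ["scp", "-q", local_path, f"{user}@{node}:{remote_path}"]

def push_to_peers(local_path, remote_path, nodes, user="oracle",
                  run=subprocess.run):
    """Copy a file to every peer cluster node.

    `run` is injectable so tests can capture the commands instead of
    actually connecting; assumes key-based SSH authentication is set up.
    """
    for node in nodes:
        run(scp_command(local_path, remote_path, node, user), check=True)
```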
17. the master's class
• became aware of Python's module concept
• formed the idea of bundling modules in a package that provides console scripts
• besides improving Python skills, this set the stage for a clean and reusable code base
• good riddance, bash
• well-established principles of software development won't do any harm in operations
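The package-with-console-scripts idea can be sketched with a setuptools configuration; the package and script names below are invented for illustration, not taken from the talk:

```python
# setup.py -- hypothetical layout; "dbatools" and the script names are invented
from setuptools import setup, find_packages

setup(
    name="dbatools",
    version="1.0.0",
    packages=find_packages(),
    entry_points={
        # each entry becomes an executable wrapper on installation,
        # which is where the /opt command line wrappers come into play
        "console_scripts": [
            "purge_logs = dbatools.maintenance:main",
            "clone_db = dbatools.cloning:main",
        ],
    },
)
```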
18. the helping hand
• Cloud Control migration/consolidation had become inevitable
• an upgrade path existed for only one of the two systems
• implying that 50 per cent of the monitored databases needed to be configured manually, tedious and error-prone
• further use of the configuration scripts justified the effort and contributed to standardisation
• in essence a wrapper around Cloud Control's Python-based emcli utility
• migration/consolidation became a matter of hours
• adding a database became a matter of seconds
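An emcli wrapper of this kind can be sketched as a thin subprocess layer. The verb and parameters below follow emcli's general `-key="value"` convention, but the exact arguments for registering a target, and the injectable `run` parameter, are assumptions for illustration:

```python
import subprocess

def emcli_args(verb, **params):
    """Render an emcli invocation; emcli expects -key="value" style arguments."""
    return ["emcli", verb] + [f'-{k}="{v}"' for k, v in sorted(params.items())]

def add_database(name, host, run=subprocess.run):
    """Register a database target in Cloud Control (illustrative sketch)."""
    cmd = emcli_args("add_target", name=name,
                     type="oracle_database", host=host)
    return run(cmd, check=True)
```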
19. exadata migration on steroids
• the existing Exadata systems had done their duty
• essentially the same migration path as for the earlier migration from AIX to Exadata
• however, due to stringent licence terms, a parallel phase of only 3 months, in sharp contrast to 1 year
• further cut down to 4 weekends due to staff availability
• no reason to despair since Python is around ...
• built on top of Oracle's Data Guard, cutting downtime to a couple of minutes
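The Data Guard based cutover can be sketched as a wrapper around the dgmgrl broker CLI. The connect string, the wording of the broker command, and the injectable `run` parameter are illustrative assumptions, not the talk's actual code:

```python
import subprocess

def dgmgrl_command(connect, command):
    """Build a dgmgrl call that runs a single broker command."""
    return ["dgmgrl", connect, command]

def switchover(standby, connect="/", run=subprocess.run):
    """Perform a Data Guard switchover to the given standby (sketch)."""
    return run(dgmgrl_command(connect, f"switchover to {standby}"), check=True)
```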
20. exadata migration on steriods
• typed Python code instead of Oracle commands on the weekends
• structured the code into modules
• created a package to ease deployment
• manual build with python setup.py bdist_wheel
• implemented console scripts
• toyed with the idea of using ant as build tool
23. automation of the automation
• automated database setup
• build and deployment turned out to be rather tedious
• opted for ant
• faced the same issues as already discussed for the Python installation
• addressed by installing ant under /opt
• build-specific environment variables set by means of the build.env script
24. automation of the automation
[root@disguised python]# source build.env
PATH : /opt/python/bin:/opt/ant/bin:/u01/app/12.1.0.2/grid/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin
LD_LIBRARY_PATH : /opt/oracle/product/12.2.0.1/instantclient
ANT_ARGS : -emacs -logger org.apache.tools.ant.NoBannerLogger
PYTHON_PATH : /root/svnRepo/scripts/python
[root@disguised python]# ant -p
Buildfile: /root/svnRepo/scripts/python/build.xml
Build and deploy rlb's python modules
Main targets:
clean clean up all files created by either dist or docs target
dist build wheel
distclean clean up files created by the dist target
install install wheel on this host
Default target: usage
[root@disguised python]#
30. a giant leap towards CI
• functionality shared by all modules has been refactored into the common sub package
• breaking functionality within the common sub package breaks the whole code base
• unit tests are absolutely vital
• an additional sub package with the suffix ut
• one test file per module, prefixed with test_
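Following that naming scheme, a test file in the ut sub package might look like the sketch below. The function under test is a stand-in invented for illustration; in the real code base it would be imported from the common sub package instead of being defined inline:

```python
# ut/test_maintenance.py -- one test file per module, prefixed with test_,
# so that pytest discovers it automatically.

# Stand-in for the function under test (would normally be imported,
# e.g. from the common sub package).
def purge_candidates(files, keep):
    """Return the file names to purge, keeping the `keep` newest entries."""
    return sorted(files)[:-keep] if keep else sorted(files)

def test_purge_keeps_newest():
    files = ["log1", "log2", "log3"]
    assert purge_candidates(files, keep=2) == ["log1"]

def test_purge_keep_zero_returns_all():
    assert purge_candidates(["b", "a"], keep=0) == ["a", "b"]
```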
32. a giant leap towards CI
• version control, the ant build, and unit tests have been implemented
• which sets the stage for a decent Jenkins installation
• Subversion plugin for obvious reasons
• Cobertura plugin aimed at automated unit tests
• SLOCCount plugin in case somebody asks
• Violations plugin since pylint reports are already in place
• Static Analysis Collector plugin as a dependency
34. a giant leap towards CI
<target name="code-metrics" description="gather code metrics">
<!-- obtain source code lines by using pygount -->
<echo message="writing pygount's output to cloc.xml" />
<exec executable="${pygount}" dir="../..">
<arg value="--format=cloc-xml" />
<arg value="--suffix=py,sh,sql,xml,pks,pkb,php,java" />
<arg value="--out=cloc.xml" />
</exec>
<!-- assess code quality with pylint -->
.
.
<!-- evaluate code quality with pep8 -->
<echo message="writing pep8's output to pep8.out" />
<exec executable="${pep8}"
dir="${package.dir}"
output="../../pep8.out">
<arg value="--filename=*.py" />
<arg value="--max-line-length=99" />
<arg value="." />
</exec>
</target>
35. a giant leap towards CI
disguised:/root/svnRepo/scripts/python>ant -p
Buildfile: /root/svnRepo/scripts/python/build.xml
Build and deploy rlb's python modules
Main targets:
all build wheel and documentation
clean clean up all files created by either dist or docs target
code-metrics gather code metrics
dist build wheel
distclean clean up files created by the dist target
docs generate sphinx documentation, including pylint and pyreverse output
install install wheel on this host
jenkins build targets required by continuous integration
Default target: usage
disguised:/root/svnRepo/scripts/python>
<target name="jenkins" description="build targets required by continuous integration"
depends="clean, code-metrics, dist" />
39. a bit of quality
• given that Jenkins and unit tests are in place, test automation appears to be quite obvious
• isolated builds by means of venv
• documented the required modules, which comes in handy when upgrading/installing Python
• run the test suite by means of pytest
• added coverage reports
43. a bit of quality
• complete test coverage would imply a dedicated Exadata environment
• a case for unittest.mock
• as a precondition, Exadata/database-related calls need to be separated, which had not been considered in the initial design
• might be the subject of another talk
• however, the foundations are laid
• played a vital role during the upgrade from Python 3.5 to 3.6
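Once the Exadata calls are isolated behind such a seam, unittest.mock can stand in for them. In the sketch below the cellcli wrapper and its output are invented for illustration; the point is only that the hardware-facing call is injectable and therefore mockable:

```python
from unittest import mock

def cell_disk_count(run_cellcli):
    """Count the cell disks reported by an injected (mockable) cellcli call."""
    output = run_cellcli("list celldisk")
    return len(output.splitlines())

def test_cell_disk_count_with_mock():
    # No Exadata needed: the cellcli call is replaced by a Mock.
    fake = mock.Mock(return_value="CD_00_cel01 normal\nCD_01_cel01 normal")
    assert cell_disk_count(fake) == 2
    fake.assert_called_once_with("list celldisk")
```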
46. the finishing touch
disguised:/root/svnRepo/scripts/python>ant -p
Buildfile: /root/svnRepo/scripts/python/build.xml
Build and deploy rlb's python modules
Main targets:
all build wheel and documentation
clean clean up all files created by either dist or docs target
code-metrics gather code metrics
deploy install wheel on production hosts
dist build wheel
distclean clean up files created by the dist target
docs generate sphinx documentation, including pylint and pyreverse output
install install wheel on this host
jenkins build targets required by continuous integration
test run test suite
venv setup virtual Python environment
Default target: usage
disguised:/root/svnRepo/scripts/python>
47. where do we go now?
• deal with testing issues
• become proactive by forecasting capacity and load with appropriate models
• tweak pylint/pep8
• improve on documentation