utPLSQL v3 - Ultimate unit testing framework for Oracle - Jacek Gebal
Fundamentals of Unit Testing and TDD as a practice and presentation of core features available with utPLSQL v3.
Unit testing for Oracle Databases and PLSQL programming language.
utPLSQL offers a unit testing API for PL/SQL that is modeled on the xUnit approach. This is an old slide deck on utPLSQL so my apologies for any inconsistencies with the current utility. Note: while I created the original utPLSQL code base, I am not actively working on utPLSQL at this time. Check out github.com/utplsql for the code and project details.
Unit Testing Oracle PL/SQL Code: utPLSQL, Excel and More - Steven Feuerstein
This slide deck was used in the May 2017 CodeTalk conversation, hosted by Steven Feuerstein of Oracle and sponsored by ODTUG and IOUG, with three people actively using utPLSQL to test their code - and also building out v3 of the utPLSQL framework: Jacek Gebal, Pavel Kaplya and Stefan Poschenrieder.
How to Release Rock-solid RESTful APIs and Ice the Testing BackBlob - Bob Binder
REST APIs are a key enabling technology for the cloud. Mobile applications, service-oriented architecture, and the Internet of Things depend on reliable and usable REST APIs. Unlike browser, native, and mobile apps, REST APIs can only be tested with software that drives the APIs. Unlike developer-centric hand-coded unit testing, adequate testing of REST APIs is well suited to advanced automated testing.
As most web service applications are developed following an Agile process, effective testing must also avoid the "testing backblob," in which work to maintain hand-coded BDD-style test suites exceeds available time after a few iterations.
This talk presents a methodology for developing and testing REST APIs using model-based automation that has the beneficial side-effect of shrinking the testing backblob.
The Five Best Practices for Unit Testing in an Agile Project - Denis Voituron
Agile projects impose their own challenges on test teams. An Agile project is often based on multiple iterations, operates within an uncertain development scope, and works with minimal documentation. The need for unit tests quickly makes itself felt, to guarantee smooth software evolution.
In this session, we will present the basic concepts of unit testing, their implications, and which parts of an application should be tested. In the second part of the session, through live demonstrations in Microsoft Visual Studio, we will present the five best practices for unit tests integrated into an Agile life cycle.
Examples at https://github.com/dvoituron/SampleUnitTests
Kubernetes is a solid leader among different cloud orchestration engines and its adoption rate is growing on a daily basis. Naturally people want to run both their applications and databases on the same infrastructure.
There are a lot of ways to deploy and run PostgreSQL on Kubernetes, but most of them are not cloud-native. Around one year ago Zalando started to run an HA setup of PostgreSQL on Kubernetes managed by Patroni. Those experiments were quite successful and produced a Helm chart for Patroni. That chart was useful, albeit with a single problem: Patroni depended on Etcd, ZooKeeper or Consul.
Few people look forward to deploying two applications instead of one and supporting them later on. In this talk I would like to introduce Kubernetes-native Patroni. I will explain how Patroni uses the Kubernetes API to run leader elections and store the cluster state. I'm going to live-demo a deployment of an HA PostgreSQL cluster on Minikube and share our own experience of running more than 130 clusters on Kubernetes.
Patroni is a Python open-source project developed by Zalando in cooperation with other contributors on GitHub: https://github.com/zalando/patroni
How to do a live demo with minikube:
1. git clone https://github.com/zalando/patroni
2. cd patroni
3. git checkout feature/demo
4. cd kubernetes
5. open demo.sh and edit line #4 (specify the minikube context)
6. docker build -t patroni .
7. optionally: docker push patroni
8. optionally: edit patroni_k8s.yaml line #22 and put the name of the patroni image you built there
9. install tmux
10. run tmux in one terminal
11. run bash demo.sh in another terminal and press Enter from time to time
Test Driven Development, or TDD, is the mainstream in many areas of software development, but what about the database? In this session, we explore TDD, the benefits of automated testing, and how testing data projects differs from other types of development. We introduce the tSQLt testing framework and demonstrate its use with a live coding example. Finally, we will discuss some lessons learned in doing TDD with SQL Server.
Originally presented by Steve Fibich and David Moore at Richmond SQL Server Users Group on January 11, 2018
Tempto is a product test framework that allows developers to write and execute tests for SQL databases running on Hadoop. Individual test requirements such as data generation, HDFS file copy/storage of generated data and schema creation are expressed declaratively and are automatically fulfilled by the framework. Developers can write tests using Java (with a TestNG-like paradigm and AssertJ-style assertions) or by providing query files with expected results. We will show how we use it for Presto product tests.
Benchto is a benchmark framework that provides an easy and manageable way to define, run and analyze macro benchmarks in a clustered environment. Understanding the behavior of distributed systems is hard and requires good visibility into the state of the cluster and the internals of the tested system. This project was developed for repeatable benchmarking of Hadoop SQL engines, most importantly Presto.
Have you ever wondered what the best way would be to test emails? Or how you would go about testing a messaging queue?
Making sure your components are correctly interacting with each other is both a tester and developer’s concern. Join us to get a better understanding of what you should test and how, both manually and automated.
This session is the first ever in which we will have two units working together to give you a nuanced insight on all aspects of integration testing. We’ll start off exploring the world of integration testing, defining the terminology, and creating a general understanding of what phases and kinds of testing exist. Later on we’ll delve into integration test automation, ranging from database integration testing to selenium UI testing and even as far as LDAP integration testing.
We have a wide variety of demos prepared where we will show you how easy it is to test various components of your infrastructure. Some examples:
- Database testing (JPA)
- Arquillian, exploring container testing, EJB testing and more
- Email testing
- SOAP testing using SoapUI
- LDAP testing
- JMS testing
Strategy-driven Test Generation with Open Source Frameworks - Dimitry Polivaev
Test suites for complex software systems contain thousands of test cases. Keeping track of the test coverage and changing the test suite as the system requirements evolve can consume significant effort. The tutorial introduces and demonstrates an effort-saving technique for developing, controlling and modifying test suites in an agile, efficient, scalable and flexible way. The technique allows complete and explicit control over test amount, test depth and test coverage. It also makes it possible to avoid code duplication in the non-generated test artifacts.
This technique allows generation of complete test suites given a specification describing test categories, test flow variations, test input data variations and requirement coverage criteria. All these kinds of data are commonly referred to as test properties. Their dependencies and variations are defined in test strategies.
The test strategies are expressed in a test strategy DSL which makes it possible to express complex dependencies in a concise and easily understandable way. Behind the scenes there is a rule engine generating test property value combinations from the test strategy definitions. The test suites, containing independently executable test cases, can be generated in any programming or scripting language or in a textual form. The generator uses a generic algorithm for mapping test properties to the test scripts based on property naming conventions. For automatic test case execution, a separate test driver component containing definitions of the single test steps referenced by the strategy has to be written in the chosen test script language.
All tools used for strategy-driven test generation are freely available under open source licenses.
Nagios Conference 2011 - Nathan Vonnahme - Integrating Nagios With Test Driven Development - Nagios
Nathan Vonnahme's presentation on integrating Nagios with test driven development. The presentation was given during the Nagios World Conference North America held Sept 27-29th, 2011 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/nwcna
Developer Tests - Things to Know (Vilnius JUG) - vilniusjug
There are many great talks that discuss challenges developers face when writing software tests. In this talk let's look at test design problems that may seem to be simple but yet fundamentally important and often misunderstood even by experienced programmers.
The Drizzle Project is a fork of the MySQL 6.0 server. One of the many goals of Drizzle is to enable a large plugin ecosystem by improving, simplifying, and modernizing the application programming interfaces between the kernel and the modules providing services for Drizzle. This tutorial serves to showcase the new APIs for Drizzle's replication through a series of in-depth examples.
How I learned to time travel, or, data pipelining and scheduling with Airflow - Laura Lorenz
****UPDATE: Project is now open sourced at https://www.github.com/industrydive/fileflow****
From Pydata DC 2016
Description
Data warehousing and analytics projects can, like ours, start out small - and fragile. With an organically growing mess of scripts glued together and triggered by cron jobs hiding on different servers, we needed better plumbing. After perusing the data pipelining landscape, we landed on Airflow, an Apache incubating batch processing pipelining and scheduler tool from Airbnb.
Abstract
The power of any reporting tool rests on the data behind it, so when our data warehousing process got too big for its humble origins, we searched for something better. After testing out several options such as Drake, Pydoit, Luigi, AWS Data Pipeline, and Pinball, we landed on Airflow, an Apache incubating batch processing pipelining and scheduler tool originating from Airbnb, that provides the benefits of pipeline construction as directed acyclic graphs (DAGs), along with a scheduler that can handle alerting, retries, callbacks and more to make your pipeline robust. This talk will discuss the value of DAG-based pipelines for data processing workflows, highlight useful features in all of the pipelining projects we tested, and dive into some of the specific challenges (like time travel) and successes (like time travel!) we've experienced using Airflow to productionize our data engineering tasks. By the end of this talk, you will learn:
- pros and cons of several Python-based/Python-supporting data pipelining libraries
- the design paradigm behind Airflow, an Apache incubating data pipelining and scheduling service, and what it is good for
- some epic fails to avoid and some epic wins to emulate from our experience porting our data engineering tasks to a more robust system
- some quick-start tips for implementing Airflow at your organization.
Tech introduction to Yaetos, an open source tool for data engineers, scientists, and analysts to easily create data pipelines in python and SQL and put them in production in the cloud.
Learn about best practices for developing Moodle code from custom plugins to submitting bug fixes for core Moodle code. Topics covered will include:
Overview of Moodle plugin systems and available API's
Working with the Moodle tracker
Peer review process
Maintaining a custom plugin using Github
Submitting core patches / bug fixes to Moodle HQ
The Amazing and Elegant PL/SQL Function Result Cache - Steven Feuerstein
The Function Result Cache, introduced in Oracle Database 11g, offers a very elegant way to cache cross-session data and make it available via PL/SQL functions. It can have a dramatic performance impact on fetching static data (even static for just a period of time) - and it's managed automatically by Oracle Database for you!
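As a sketch of the feature the abstract describes, a lookup function can be marked with the RESULT_CACHE keyword so that Oracle caches its return values across sessions (the table and column names below are made up for illustration):

```sql
-- Hypothetical lookup table; any slowly-changing reference data is a good fit.
create or replace function get_country_name(p_id in countries.id%type)
  return countries.name%type
  result_cache  -- from 11.2 on, table dependencies are tracked automatically
is
  l_name countries.name%type;
begin
  select name into l_name from countries where id = p_id;
  -- Subsequent calls with the same p_id are served from the shared result
  -- cache until a change to COUNTRIES invalidates the cached entries.
  return l_name;
end;
/
```

The cache lives in the shared pool, so all sessions benefit, and invalidation on DML against the underlying table is handled by the database.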
PL/SQL Office Hours: data setup and teardown in database testing - DeeptiBandari
Testing PL/SQL code and the challenges that come with setting up and tearing down the data required for tests. Also has some tips and techniques to overcome the challenges.
North Virginia ColdFusion User Group Meetup - TestBox - July 19th 2017 - Ortus Solutions, Corp
TestBox is a tool we all should be using to test our ColdFusion applications; it was created and is maintained by Ortus Solutions, the people that brought you ColdBox. We will have Gavin from Ortus in house on this day to go over some TestBox examples, talk about its importance, and answer any questions you have.
So, if you have any high-level questions for Gavin, reply to this post (or hit me up) so I can get the questions to Gavin ahead of time, just in case he needs to consult others at Ortus.
Gavin Pickin is a proud ColdFusion developer, starting with ColdFusion in the late 90s. Now working with Ortus Solutions, a leading force in CFML development frameworks and tools, Gavin gets to work on a lot of great projects for a big variety of clients. At Ortus Solutions, a big focus is on free and open source tools; on open source Fridays, Gavin spends most of his open source time working on the ContentBox Content Management System.
4. Jacek Gębal
twitter: @GebalJacek
mail: jgebal@gmail.com
blog: oraclethoughts.com
Principal Software Engineer
@Fidelity Investments - Ireland
co-author & maintainer of: utPLSQL
Test your PL/SQL
your Database will love you
5. About me
● Software engineer since ~2000 - mainly Oracle database
● Infected with testing and automation since ~2013
● Key contributor, author and maintainer of utPLSQL v3
● Active developer, learner, testing advocate
8. Why do we test?
● Testing is cheaper and safer than bugs
Ariane 5 catastrophe
370 million USD burned in 37 seconds due to a software bug
Knight Capital
440 million USD loss in ~40 minutes
Toyota
2.4 million cars recalled due to a software bug
9. Why do we test?
● Make the newly written code change-able
So that others (and yourself) can touch this code safely later
● Prove that solution meets requirements
● Prove that a change didn’t break existing functionality
● Document requirements and reveal intention
10. Keep in mind
● DevOps
- you’re not doing DevOps without automated testing
● Continuous Integration
- integration is all about testing
12. What should be tested (in database)?
● All logic
○ PL/SQL
○ SQL ( queries / constraints / data-types / views / calculated columns … )
○ In /out data from your code
● Exceptions / corner cases
● Data quality & integrity
16. Test Driven Development (TDD)
RED
● write a failing test
● test is a simple example
● test becomes documentation
GREEN
● write only enough code to pass the test
● get to green fast
● don’t clean up
CLEAN
● improve structure
● remove duplication
● rename, reorganize
● run tests, stay green
● don’t change behavior
19. Why utPLSQL v3?
● free, open source & IDE independent
● continuously & thoroughly tested
● actively developed & maintained
● compatible with Oracle 11g R2+
● pure PL/SQL - tests described only by annotations & expectations
● easy cursor & object data comparison
● code coverage reporting
● automatic transaction control - no rollback / cleanup needed
● easy to version-control
● data-type-aware comparison
20. utPLSQL syntax

create or replace package test_stuff as
  --%suite(description)   -- annotation

  --%test(description)    -- annotation
  procedure my_first_test;
end;
/

create or replace package body test_stuff as
  procedure my_first_test is
  begin
    -- expectation
    ut.expect('Chuck').to_equal('Chuck');
  end;
end;
/

-- running tests
exec ut.run('test_stuff');
21. Function - Create room
● Creates a room with given name and returns its id
● Throws an exception when the room name is null
● Throws an exception when the room already exists
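The three requirements above map naturally onto a utPLSQL test package; the sketch below assumes a function named create_room, a rooms table, and error codes -20001/-20002, none of which are given in the slides:

```sql
create or replace package test_create_room as
  --%suite(Function - Create room)

  --%test(Creates a room with given name and returns its id)
  procedure creates_room;

  --%test(Throws an exception when the room name is null)
  --%throws(-20001)  -- assumed error code
  procedure null_name_fails;

  --%test(Throws an exception when the room already exists)
  --%throws(-20002)  -- assumed error code
  procedure duplicate_name_fails;
end;
/
create or replace package body test_create_room as
  procedure creates_room is
    l_id rooms.id%type;
  begin
    l_id := create_room('Boardroom');
    ut.expect(l_id).to_be_not_null();
  end;

  procedure null_name_fails is
    l_id rooms.id%type;
  begin
    l_id := create_room(null);        -- expected to raise -20001
  end;

  procedure duplicate_name_fails is
    l_id rooms.id%type;
  begin
    l_id := create_room('Boardroom');
    l_id := create_room('Boardroom'); -- second call expected to raise -20002
  end;
end;
/
```

Each requirement becomes one test, and the --%throws annotation lets the framework fail the test unless the expected exception is raised.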
23. utPLSQL & TDD
● Fast feedback loop
● Tests were tested for both Fail & Pass
● Tests were created from requirements
● Code created to satisfy the tests
● Visible progress with every test
● I can now safely change my code
- it’s covered with tests
29. Organizing tests
● With suitepath
○ Suites can be nested, nested, nested, nested, nested, nested
○ Suites can be grouped
● Suites can be invoked by suitepath
exec ut.run(':org.utplsql.demo.test_rooms');
● Setup from parent suite is visible in child suites
● Contexts -> groups within single package
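A minimal sketch of how a suitepath is declared (the package and path names are made up for illustration):

```sql
create or replace package test_rooms as
  --%suite(Rooms)
  --%suitepath(org.utplsql.demo)  -- places this suite in the hierarchy

  --%test(Room can be created)
  procedure room_can_be_created;
end;
/

-- run the whole branch of the hierarchy by path:
exec ut.run(':org.utplsql.demo');
```

Every suite that declares the same suitepath prefix becomes part of the same tree, so a shared setup defined at a parent level is visible in the child suites.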
30. What is my test coverage?
http://utplsql.org/utPLSQL-coverage-html/
31. Definition
Test coverage is a measure used to describe
the degree to which the source code of a program is executed
when a particular test suite runs.
https://en.wikipedia.org/wiki/Code_coverage
32. Line coverage - from profiler
create or replace function f(a integer) return integer is
begin
if a is null then
return 0;
else
return a*a;
end if;
end;
/
3 executable lines
2 test cases -> 100% coverage
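The two test cases achieving 100% line coverage of f could be sketched like this (test names and the surrounding suite are illustrative):

```sql
--%test(returns 0 for null input)
procedure null_input is
begin
  ut.expect(f(null)).to_equal(0);  -- exercises the 'if' line and 'return 0'
end;

--%test(returns the square for non-null input)
procedure squares_input is
begin
  ut.expect(f(3)).to_equal(9);     -- exercises the 'if' line and 'return a*a'
end;
```

Each test drives one branch, so together they touch all three executable lines.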
33. Block coverage - since Oracle 12.2
create or replace function f1(a integer) return integer is
begin
  if a is null then return 0; else return a*a; end if;
end;
/
f1: 1 executable line, 2 executable blocks
2 test cases -> 100% coverage

create or replace function f2(a integer) return integer is
begin
  return nvl(a*a,0);
end;
/
f2: 1 executable line, 1 executable block
1 test case -> 100% coverage
37. Continuous Integration
● Script
● Version-control
● Deploy code and tests
● Run all tests
● Report
○ All good? - promote to higher environment
○ Problems? - fix them and repeat the process
● Repeat
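The "run all tests and report" step can be scripted against the database; a sketch of such a step, assuming one of the machine-readable reporters shipped with utPLSQL v3 (the JUnit-style reporter here) so the CI server can parse the results:

```sql
set serveroutput on size unlimited
-- emit JUnit-style XML for the whole schema's test suites;
-- redirect the spool output to a file the CI server picks up
exec ut.run(ut_junit_reporter());
```

On a green run the CI job promotes the build to the next environment; on failures the XML report points at the broken tests.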
40. Recap
● Test to prevent bugs and avoid losses
● Tests are documentation for your code
● Test as early as possible
● Group and organize your tests
● Use automatic rollback to minimize need for complex cleanup
● Analyze your test results and code coverage to measure progress
● Automate deployment and testing