This presentation is for those who are familiar with databases and SQL, but want to learn how to move processing from their applications into the database to improve consistency, administration, and performance. Topics covered include advanced SQL features like referential integrity constraints, ANSI joins, views, rules, and triggers. The presentation also explains how to create server-side functions, operators, and custom data types in PostgreSQL.
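To give a flavor of the server-side features the presentation covers, here is a minimal PostgreSQL sketch (the function, domain, and table names are illustrative, not taken from the slides):

```sql
-- A server-side function, so normalization logic lives in the database:
CREATE FUNCTION normalize_email(addr text) RETURNS text AS $$
    SELECT lower(trim(addr));
$$ LANGUAGE sql IMMUTABLE;

-- A custom domain type enforcing a format, usable as a column type:
CREATE DOMAIN email AS text
    CHECK (VALUE ~ '^[^@]+@[^@]+$');

-- Integrity rules declared once in the schema, enforced for every client:
CREATE TABLE users (
    id   serial PRIMARY KEY,
    mail email NOT NULL UNIQUE
);
```

Because the constraint and the function live in the database, every application that connects gets the same behavior without duplicating the logic.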
Presentation that I gave as a guest lecture for a summer intensive development course at nod coworking in Dallas, TX. The presentation targets beginning web developers with little to no experience in databases, SQL, or PostgreSQL. I cover the creation of a database, creating records, reading/querying records, updating records, destroying records, joining tables, and a brief introduction to transactions.
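The create/read/update/destroy cycle the lecture walks through looks like this in PostgreSQL (the `students` schema is illustrative):

```sql
CREATE TABLE students (id serial PRIMARY KEY, name text NOT NULL);

INSERT INTO students (name) VALUES ('Ada');          -- create
SELECT id, name FROM students WHERE name = 'Ada';    -- read
UPDATE students SET name = 'Ada L.' WHERE id = 1;    -- update
DELETE FROM students WHERE id = 1;                   -- destroy

-- A transaction groups statements so they succeed or fail as a unit:
BEGIN;
UPDATE students SET name = 'Grace' WHERE id = 2;
COMMIT;
```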
This document provides an introduction and overview of PostgreSQL, an open-source object-relational database management system. It notes that PostgreSQL supports modern SQL features, is free for commercial and academic use, and offers performance comparable to other databases while being very reliable, with stable code and robust testing. The architecture uses a client-server model to handle concurrent connections, and transactions provide atomic, isolated, and durable operations. PostgreSQL also supports user-defined types, inheritance, and other advanced features.
This document summarizes Josh Berkus's presentation on new features in PostgreSQL 9.1. It discusses synchronous replication, per-column collations, writable common table expressions, serializable snapshot isolation, unlogged tables, extensions, k-nearest neighbor searches, foreign data wrappers, and several other topics. The presentation provides code examples and explains how various features work. It also lists upcoming PostgreSQL events and sessions at pgCon.
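Two of the 9.1 features mentioned can be shown in a couple of lines (table names are illustrative):

```sql
-- Writable common table expression: move rows between tables in one statement.
WITH moved AS (
    DELETE FROM tasks WHERE done RETURNING *
)
INSERT INTO tasks_archive SELECT * FROM moved;

-- Unlogged table: skips write-ahead logging for speed,
-- at the cost of being truncated after a crash.
CREATE UNLOGGED TABLE session_cache (key text PRIMARY KEY, value text);
```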
This document summarizes PL/Java, which allows writing server-side functions in Java for PostgreSQL. It discusses how to define and deploy Java functions, configure PL/Java, handle parameters and return types, use JDBC from functions, and write triggers in Java. While compatible with Oracle's SQL/JRT standard, PL/Java has some limitations around memory usage and performance. It works best on Linux and is a stable option for adding Java code to PostgreSQL databases.
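The canonical PL/Java example maps a SQL function onto a static Java method (this assumes PL/Java is installed and registered in the database):

```sql
-- Binds the SQL function to java.lang.System.getProperty(String):
CREATE FUNCTION getsysprop(varchar) RETURNS varchar
    AS 'java.lang.System.getProperty'
    LANGUAGE java;

SELECT getsysprop('user.home');
```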
The document discusses PostgreSQL storage architecture, authentication, permissions, and commands. It provides details on:
- The PostgreSQL data directory structure and how tables and indexes are stored as separate files across multiple file segments.
- Authentication configuration using pg_hba.conf for host-based authentication and pg_ident.conf for user identification mapping. Authentication methods include trust, reject, ident, password, md5, and pam.
- SQL commands for managing users, databases, tables, permissions, and database maintenance like vacuuming and reindexing.
- Backup methods including SQL dumps, file system backups, and continuous archiving.
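The host-based authentication rules described above live in pg_hba.conf, one rule per line, matched top to bottom; a typical (illustrative) configuration:

```
# TYPE  DATABASE  USER      ADDRESS         METHOD
local   all       postgres                  trust
host    all       all       127.0.0.1/32    md5
host    sales     report    10.0.0.0/24     pam
host    all       all       0.0.0.0/0       reject
```

The first matching line wins, so the final `reject` rule acts as a catch-all for connections no earlier rule allows.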
This document provides an overview of PL/Proxy, a database partitioning system implemented as a PostgreSQL procedural language extension. PL/Proxy allows applications to perform database operations like inserts, updates, deletes and queries across multiple PostgreSQL database partitions in a transparent manner. It works by routing operations to the appropriate partition based on the value of a partitioning key. The document discusses PL/Proxy concepts, areas of application, example usage, installation, backend and frontend functions, configuration options and more.
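A PL/Proxy function body names a cluster and a routing expression rather than SQL; the call is forwarded to the partition selected by hashing the partitioning key (the cluster name and table are illustrative):

```sql
CREATE FUNCTION get_user_email(i_username text)
RETURNS text AS $$
    CLUSTER 'usercluster';
    RUN ON hashtext(i_username);
$$ LANGUAGE plproxy;
```

The application calls `get_user_email('bob')` as if it were a local function; PL/Proxy executes a function of the same name on whichever partition the hash selects.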
This document provides a summary of PostgreSQL 9.1 features presented at the Postgres Open conference in September 2011. It discusses synchronous and asynchronous replication, per-column collations, writable common table expressions, serializable snapshot isolation, unlogged tables, extensions, and other features. Sessions at the conference focused on unlogged tables, accelerating local search, PostgreSQL in data management, and serializable transactions.
This document provides an introduction and overview of PostgreSQL, including its history, features, installation, usage and SQL capabilities. It describes how to create and manipulate databases, tables, views, and how to insert, query, update and delete data. It also covers transaction management, functions, constraints and other advanced topics.
This document provides an overview of Database Jones, a Node.js API for highly scalable database access to MySQL Cluster. It introduces J.D. Duncan and Craig Russell, the creators of Database Jones, and describes how Database Jones provides an asynchronous JavaScript API that can be used with MySQL Cluster and other databases. It also summarizes the key features and capabilities of Database Jones, including its data modeling approaches, operations, and usage with Node.js applications.
MySQL is an open-source relational database management system created to be fast, reliable, and easy to use. The document discusses how to install and configure MySQL and describes basic data-management commands for creating databases and tables and for inserting and querying data. It also covers advantages of MySQL, such as being multi-threaded, and some disadvantages, such as its initial lack of stored-procedure support.
The document provides steps for installing MySQL on Windows, describes basic SQL commands like CREATE, SELECT, INSERT, UPDATE and DELETE. It also covers how to create databases and tables, grant user privileges, and includes examples of various SQL statements.
This document provides an overview of MySQL Cluster and NoSQL. It discusses how to set up nodes in a multi-node MySQL Cluster, including connecting to the network and firewall configuration. It also outlines the tutorial agenda, which will first cover deploying a MySQL Cluster and then developing applications using ClusterJ, Memcache, and Node.js connectors. Presenter biographies and a high-level introduction to database concepts, MySQL Cluster architecture, and the basics of MySQL Cluster are also included.
The document provides an agenda and overview for a hands-on workshop on Oracle 12c pluggable databases. The agenda includes topics on Oracle history, container databases, pluggable databases, new users and privileges in Oracle 12c, and several hands-on labs for activities like dropping/unplugging pluggable databases, plugging/cloning pluggable databases from remote container databases using database links, and moving a non-container database to a container database using Data Pump transportable export/import. Slides accompany the topics and provide additional technical details on concepts like container databases, pluggable databases, and the new user and role architecture in Oracle 12c.
This document provides information about installing and using the Firebird RDBMS, including:
- The two main types of Firebird servers and how to start/stop the Superserver.
- Default username and password for administration, and how to add/modify user accounts.
- Using the isql tool to connect to databases and execute SQL statements.
- Basic troubleshooting for common errors.
- Security measures like logging login attempts and restricting access after failed logins.
- Using the GBAK tool to backup and restore entire Firebird databases.
PostgreSQL is an open-source relational database management system that runs on Linux, Unix, Windows and Mac OS. It supports SQL queries, transactions, foreign keys, triggers and views. To install PostgreSQL, download the installer package for your platform and run through the installation process, which sets up the database files and creates a default database and user account. The psql command-line tool can then be used to interact with and administer the PostgreSQL database using SQL commands.
Porting Oracle applications to PostgreSQL can be difficult due to differences in SQL syntax, data types, functions, and PL/SQL implementations between the databases. While many elements like table definitions and queries may port easily, issues arise with data types, functions, outer joins, null values, triggers, date/time handling, and PL/SQL syntax. A full rewrite may be preferable to porting in many cases. Careful evaluation and planning is needed to determine the best approach.
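Two of the porting traps mentioned above are easy to show concretely (table names are illustrative):

```sql
-- Oracle's legacy outer-join marker:
SELECT e.ename, d.dname
FROM emp e, dept d
WHERE e.deptno = d.deptno (+);

-- The ANSI form PostgreSQL expects:
SELECT e.ename, d.dname
FROM emp e LEFT JOIN dept d ON e.deptno = d.deptno;

-- Null-handling trap: Oracle treats the empty string '' as NULL,
-- PostgreSQL does not, so these are different predicates to port:
--   WHERE col = ''        vs.        WHERE col IS NULL
```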
How to export import a mysql database via ssh in aws lightsail wordpress rizw... (AlexRobert25)
Suppose you want a database backup of an instance in AWS Lightsail WordPress through PuTTY or SSH. For that, we first need to create an instance.
Redis is an in-memory data structures server that can be used as a database, cache, message broker, and queue. It supports many data types like strings, hashes, lists, sets, sorted sets, bitmaps, hyperloglogs and provides features like replication, Lua scripting, publish/subscribe, and key-value access. Redis is written in C and known for its high performance due to its small codebase and use of memory for storage instead of disk.
1. The document discusses various SQL concepts including DCL (Data Control Language), DDL (Data Definition Language), DML (Data Manipulation Language), functions, users, phpMyAdmin, and procedures.
2. Key SQL commands covered include COMMIT, ROLLBACK, GRANT, REVOKE, CREATE, ALTER, DROP, SELECT, INSERT, UPDATE, DELETE.
3. The document also discusses creating, using, and dropping MySQL users, as well as the features and uses of phpMyAdmin for database administration.
The document provides an introduction to basic MySQL commands for logging in, creating and modifying database structure (DDL commands), retrieving and modifying data (DML commands), managing transactions (TCL commands), controlling access (DCL commands), and other common commands like SET, DESCRIBE, SHOW, and SHUTDOWN. It explains what each type of command is used for and provides examples.
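The command categories listed above can be illustrated with one short MySQL session (database, table, and user names are illustrative; the `GRANT` assumes the user already exists):

```sql
-- DDL: define structure
CREATE DATABASE shop;
USE shop;
CREATE TABLE items (id INT PRIMARY KEY, name VARCHAR(50));

-- DML: work with rows
INSERT INTO items VALUES (1, 'pen');
SELECT * FROM items;

-- TCL: transaction control
START TRANSACTION;
UPDATE items SET name = 'pencil' WHERE id = 1;
ROLLBACK;

-- DCL: access control
GRANT SELECT ON shop.* TO 'reader'@'localhost';

-- Other common commands
DESCRIBE items;
SHOW TABLES;
```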
MySQL Slow Query log Monitoring using Beats & ELK (I Goo Lee)
This document provides instructions for using Filebeat, Logstash, Elasticsearch, and Kibana to monitor MySQL slow query logs. It describes installing and configuring each component, with Filebeat installed on database servers to collect slow query logs, Logstash to parse and index the logs, Elasticsearch for storage, and Kibana for visualization and dashboards. Key steps include configuring Filebeat to ship logs to Logstash, using grok filters in Logstash to parse the log fields, outputting to Elasticsearch, and visualizing slow queries and creating sample dashboards in Kibana.
MySQL Audit using Percona audit plugin and ELK (I Goo Lee)
This document discusses setting up MySQL auditing using the Percona Audit Plugin and ELK (Elasticsearch, Logstash, Kibana). It describes installing and configuring the Percona Audit Plugin on MySQL servers to generate JSON audit logs. It then covers using Rsyslog or Filebeat to ship the logs to the Logstash server, and configuring Logstash to parse, enrich, and index the logs into Elasticsearch. Finally, it discusses visualizing the audit data with Kibana dashboards containing graphs and searching. The architecture involves MySQL servers generating logs, Logstash collecting and processing them, and Elasticsearch and Kibana providing search and analytics.
Why and How Powershell will rule the Command Line - Barcamp LA 4 (Ilya Haykinson)
PowerShell is a command shell for Windows in which commands exchange objects through pipes rather than plain text. It provides a fully fledged programming language where commands manipulate objects and share a consistent verb-noun naming convention. PowerShell holds that commands should do one thing well and interact through a consistent environment, addressing the fragility of text parsing between traditional command-line programs.
This document discusses setting up MySQL auditing using the Percona Audit Plugin and ELK (Elasticsearch, Logstash, Kibana) stack to retrieve and analyze MySQL logs. Key steps include installing the Percona Audit Plugin on MySQL servers, configuring it to log to syslog, installing and configuring rsyslog/syslog-ng on database and ELK servers to forward logs, and installing and configuring the ELK stack including Elasticsearch, Logstash, and Kibana to index and visualize the logs. Examples are provided of creating searches, graphs, and dashboards in Kibana for analyzing the MySQL audit logs.
The document discusses installing and configuring MySQL on Linux. It provides steps to install MySQL using RPM files, set passwords for security, test the installation, and configure applications to connect to the database. It also covers basic and advanced MySQL commands like CREATE TABLE, SELECT, JOIN, and more.
This document provides instructions on installing and configuring MySQL on Linux. It discusses downloading and installing the MySQL RPM package, setting the root password for security, starting the MySQL server and client, and running basic queries to test the installation. It also covers additional MySQL commands and configurations including user privileges, database design, backups, and restoring data.
This document provides instructions for cloning an Oracle database. The process involves:
1. Creating an identical copy of the database files and control files on the same or different machine.
2. Renaming the instance if cloning to a different machine by changing the ORACLE_SID environment variable and starting the database with a new initialization parameter file.
3. Optionally renaming the database name by generating a new control file script and editing initialization parameters to point to the new database name.
The document provides an overview of PostgreSQL performance tuning. It discusses caching, query processing internals, and optimization of storage and memory usage. Specific topics covered include the PostgreSQL configuration parameters for tuning shared buffers, work memory, and free space map settings.
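The configuration parameters named above can be inspected and changed like this (the values are illustrative, not recommendations; `ALTER SYSTEM` exists from PostgreSQL 9.4 onward):

```sql
SHOW shared_buffers;
SHOW work_mem;

-- ALTER SYSTEM writes to postgresql.auto.conf; work_mem takes effect
-- on reload, while shared_buffers needs a server restart:
ALTER SYSTEM SET shared_buffers = '2GB';
ALTER SYSTEM SET work_mem = '64MB';
SELECT pg_reload_conf();
```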
This presentation covers all aspects of PostgreSQL administration, including installation, security, file structure, configuration, reporting, backup, daily maintenance, monitoring activity, disk space computations, and disaster recovery. It shows how to control host connectivity, configure the server, find the query being run by each session, and find the disk space used by each database.
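The per-session query and per-database disk-space lookups mentioned above use the standard catalog views and size functions:

```sql
-- What is each session running right now?
SELECT pid, usename, state, query
FROM pg_stat_activity;

-- Disk space used by each database:
SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
```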
The paperback version is available on lulu.com at http://goo.gl/fraa8o
This is the first volume of the PostgreSQL database administration book. The book covers the steps for installing, configuring, and administering PostgreSQL 9.3 on Debian Linux. It covers both the logical and physical aspects of PostgreSQL, and two chapters are dedicated to the backup/restore topic.
This document provides an agenda and background information for a presentation on PostgreSQL. The agenda includes topics such as practical use of PostgreSQL, features, replication, and how to get started. The background section discusses the history and development of PostgreSQL, including its origins from INGRES and POSTGRES projects. It also introduces the PostgreSQL Global Development Team.
In 40 minutes the audience will learn a variety of ways to make a PostgreSQL database suddenly run out of memory on a box with half a terabyte of RAM.
Developers' and DBAs' best practices for preventing this will also be discussed, along with a bit of Postgres and Linux memory-management internals.
The document discusses PostgreSQL's physical storage structure. It describes the various directories within the PGDATA directory that stores the database, including the global directory containing shared objects and the critical pg_control file, the base directory containing numeric files for each database, the pg_tblspc directory containing symbolic links to tablespaces, and the pg_xlog directory which contains write-ahead log (WAL) segments that are critical for database writes and recovery. It notes that tablespaces allow spreading database objects across different storage devices to optimize performance.
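The mapping from tables to files under PGDATA can be inspected directly (table name is illustrative; `pg_tablespace_location` exists from 9.2 onward):

```sql
-- Where inside PGDATA does a table's file live?
-- Returns something of the form base/<database oid>/<relfilenode>.
SELECT pg_relation_filepath('mytable');

-- Which tablespaces exist, and which directories do their
-- pg_tblspc symbolic links point to?
SELECT spcname, pg_tablespace_location(oid) FROM pg_tablespace;
```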
This document provides an overview of five steps to improve PostgreSQL performance: 1) hardware optimization, 2) operating system and filesystem tuning, 3) configuration of postgresql.conf parameters, 4) application design considerations, and 5) query tuning. The document discusses various techniques for each step such as selecting appropriate hardware components, spreading database files across multiple disks or arrays, adjusting memory and disk configuration parameters, designing schemas and queries efficiently, and leveraging caching strategies.
Constraints enforce rules at the table level to maintain data integrity. The main types are NOT NULL, UNIQUE, PRIMARY KEY, FOREIGN KEY, and CHECK. Constraints can be created at the column or table level and are defined using SQL's CREATE TABLE and ALTER TABLE statements. Users can view existing constraints and their properties in data dictionary views like USER_CONSTRAINTS and USER_CONS_COLUMNS.
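All five constraint types, at both column and table level, fit in one Oracle-style definition (the schema is illustrative and assumes a `departments` table already exists):

```sql
CREATE TABLE employees (
    emp_id  NUMBER       CONSTRAINT emp_pk PRIMARY KEY,          -- column-level
    email   VARCHAR2(80) CONSTRAINT emp_email_uq UNIQUE,
    name    VARCHAR2(40) NOT NULL,
    dept_id NUMBER       CONSTRAINT emp_dept_fk
                         REFERENCES departments(dept_id),
    salary  NUMBER,
    CONSTRAINT emp_sal_ck CHECK (salary > 0)                     -- table-level
);

-- Inspect the constraints afterwards via the data dictionary:
SELECT constraint_name, constraint_type
FROM   user_constraints
WHERE  table_name = 'EMPLOYEES';
```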
This document provides an overview of administering user security in a database. It covers how to create and manage database user accounts by authenticating users, assigning default tablespaces, granting and revoking privileges, and creating and managing roles. It also discusses how to create and manage profiles to implement standard password security features and control resource usage by users. The predefined SYS and SYSTEM accounts and their privileges are described. Methods for unlocking user accounts, assigning privileges to roles, and assigning roles to users are also summarized.
The document discusses various types of constraints in SQL including column level constraints like NOT NULL, UNIQUE, DEFAULT, and CHECK constraints as well as table level constraints like PRIMARY KEY and FOREIGN KEY. It provides examples of how to define these constraints when creating or altering tables and explains how each constraint enforces integrity rules and data validation. Constraints are used to impose rules on data values and relationships between columns and tables.
This one is about advanced indexing in PostgreSQL. It guides you through basic concepts as well as through advanced techniques to speed up the database.
All important PostgreSQL index types explained: B-tree, GIN, GiST, SP-GiST, and hash.
Regular expression indexes and LIKE queries are also covered.
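The index types and the LIKE trick can each be shown in a line (table and column names are illustrative):

```sql
CREATE INDEX ON users  (email);                      -- B-tree (the default)
CREATE INDEX ON docs   USING gin  (tags);            -- GIN: arrays, full-text search
CREATE INDEX ON places USING gist (geom);            -- GiST: geometric/range data
CREATE INDEX ON events USING hash (session_id);      -- hash: equality lookups only

-- For LIKE 'abc%' under a non-C locale, a B-tree index needs
-- a pattern operator class to be usable:
CREATE INDEX ON users (email text_pattern_ops);
```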
This document discusses database indexing. It provides information on the benefits of indexes, how to create indexes, common misconceptions about indexing, and rules for determining when and how to create indexes. Key points include that indexes improve performance of queries by enabling faster data retrieval and synchronization; indexes should be created on columns frequently filtered in WHERE and JOIN clauses; and the order of columns in an index matters for its effectiveness.
The document discusses managing users, roles, and privileges in Oracle databases. It covers creating, altering, and dropping users, viewing user information, predefined user accounts, different types of privileges including system privileges and object privileges, and user roles. It provides examples and descriptions of commands for working with users, roles, and privileges in Oracle databases.
This document discusses PostgreSQL parameter tuning, specifically related to memory and optimizer parameters. It provides guidance on setting parameters like shared_buffer, work_mem, temp_buffer, maintenance_work_mem, random_page_cost, sequential_page_cost, and effective_cache_size to optimize performance based on hardware characteristics like available RAM and disk speed. It also covers force_plan parameters that can include or exclude certain query optimization techniques.
The latest version of my PostgreSQL introduction for IL-TechTalks, a free service to introduce the Israeli hi-tech community to new and interesting technologies. In this talk, I describe the history and licensing of PostgreSQL, its built-in capabilities, and some of the new things that were added in the 9.1 and 9.2 releases which make it an attractive option for many applications.
PostgreSQL is an open source object-relational database system that has been in development since 1982. It supports Linux, Windows, Mac OS X, and Solaris and can be installed using package managers or installers. PostgreSQL provides many features including procedural languages, functions, indexes, triggers, multi-version concurrency control, and point-in-time recovery. It also has various administration and development tools.
Create tables for a human resources database schema with entities like regions, countries, locations, departments, employees and jobs. Define primary and foreign keys. Populate tables with sample data using INSERT statements. Write queries using JOINs, aggregation functions like COUNT, SUM and GROUP BY to analyze data and get insights. Create views to store commonly used queries.
This document provides an overview of keys and joins in SQL. It discusses the different types of keys like primary keys, foreign keys, and unique keys. It also covers the different types of joins like inner joins, outer joins, and self joins. The document provides examples of creating keys and using different join types. It discusses some performance trade-offs between joins and alternatives like using multiple update statements instead of cursors.
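The difference between an inner and an outer join is easiest to see on a tiny dataset. A minimal sketch using SQLite from the standard library (the `authors`/`books` schema is hypothetical); the SQL itself is standard and runs unchanged on PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE books (id INTEGER PRIMARY KEY, title TEXT,
                        author_id INTEGER REFERENCES authors(id));
    INSERT INTO authors VALUES (1, 'Le Guin'), (2, 'Herbert'), (3, 'Unpublished');
    INSERT INTO books VALUES (1, 'Dune', 2), (2, 'The Dispossessed', 1);
""")

# INNER JOIN: only authors that have at least one matching book
inner = conn.execute("""
    SELECT a.name, b.title FROM authors a
    JOIN books b ON b.author_id = a.id ORDER BY a.name
""").fetchall()

# LEFT (outer) JOIN: every author, with NULL where no book matches
left = conn.execute("""
    SELECT a.name, b.title FROM authors a
    LEFT JOIN books b ON b.author_id = a.id ORDER BY a.name
""").fetchall()

print(inner)
print(left)
```

The author with no books drops out of the inner join but survives the left join with a NULL title, which is exactly the distinction the document describes.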
GreenDAO is an ORM (Object Relational Mapping) library that uses code generation to provide a faster alternative to reflection-based ORMs for Android. It works by mapping Java objects to SQLite database tables to allow for CRUD (create, read, update, delete) operations on the data. The library includes a schema definition and code generator that automatically generates DAO classes to manage database access and queries. This avoids the performance overhead of reflection and allows for compiler checks of the database schema.
Kellyn Pot’Vin-Gorman presented a comparison of indexing in Oracle and SQL Server databases. She loaded test data into indexes in each platform, altered the fill factor/pctfree settings, and measured performance of data loads, updates, deletes and index storage. SQL Server exhibited page splits when indexes became fragmented, while Oracle showed leaf block splits. Rebuilding indexes with high pctfree in Oracle had significantly worse performance than rebuilding with lower fillfactor in SQL Server. Overall, Oracle generally outperformed SQL Server for indexing and was less prone to fragmentation issues.
Intro to SQL by Google's Software Engineer — Product School
Intro to SQL, by Roman Polonsky, software engineer on Google's Global Tools Team.
SQL provides powerful but reasonably simple tools for data analysis and handling. This workshop will take absolute beginners through the basics of SQL. You’ll learn the SQL queries needed to collect data from a database, even if it lives in different places, and to analyze it to find the answers you’re looking for.
Take away from this workshop the understanding of essential SQL skills that allow developers to write queries against single and multiple tables, manipulate data in tables, and create database objects.
The document discusses how to create a database and tables in SQL using DDL statements like CREATE, DROP, and ALTER. It explains that CREATE is used to define new database objects, DROP removes objects, and ALTER modifies objects. Specific examples show how to create a database called ABCCO, and tables like Persons with columns for ID, name, city. It also covers defining primary keys, foreign keys, default and null values when creating tables.
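The CREATE/ALTER/DROP lifecycle described above can be sketched end to end. This uses SQLite from the standard library, where the database is simply a file (or `:memory:`) rather than an object created with `CREATE DATABASE` as in the document's ABCCO example; the `Persons` table mirrors the one discussed:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# CREATE defines a new object
conn.execute("""
    CREATE TABLE Persons (
        ID   INTEGER PRIMARY KEY,
        Name TEXT NOT NULL,
        City TEXT DEFAULT 'Unknown'
    )
""")

# ALTER modifies an existing object
conn.execute("ALTER TABLE Persons ADD COLUMN Email TEXT")
cols = [row[1] for row in conn.execute("PRAGMA table_info(Persons)")]
print(cols)

# DROP removes the object along with its data
conn.execute("DROP TABLE Persons")
tables = conn.execute("SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(tables)
```

After the ALTER the table has four columns, and after the DROP the schema catalog is empty again.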
PHX - Session #4: Treating Databases as First-Class Citizens in Development — Steve Lange
The document discusses treating databases as first-class citizens in development by managing schemas and data through database projects and tools. It addresses questions around where the truth of a schema resides, how to version databases, generate test data, perform unit testing, and manage changes. The key points are using database projects to represent the truth of the schema, version control to manage versions, test data generators for testing, and schema/data comparison tools to facilitate refactoring and managing changes.
Session #4: Treating Databases as First-Class Citizens in Development — Steve Lange
The document discusses treating databases as first-class citizens in development by managing schemas and data through database projects and tools. It addresses questions around where the truth of a schema resides, how to version databases, generate test data, perform unit testing, and manage changes. The key points are using database projects to represent the truth of the schema, version control to manage versions, test data generators for testing, and tools for schema/data compares and refactoring to facilitate change management.
How to teach an elephant to rock'n'roll — PGConf APAC
The document discusses techniques for optimizing PostgreSQL queries, including:
1. Using index only scans to efficiently skip large offsets in queries instead of scanning all rows.
2. Pulling the LIMIT clause under joins and aggregates to avoid processing unnecessary rows.
3. Employing indexes creatively to perform DISTINCT operations by scanning the index instead of the entire table.
4. Optimizing DISTINCT ON queries by looping through authors and returning the latest row for each instead of a full sort.
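Technique 1 above, avoiding large OFFSETs, is usually implemented as keyset ("seek") pagination. A minimal sketch with SQLite from the standard library (the `events` table is hypothetical): both queries return the same page, but the keyset form lets the planner jump straight to the right place in the index instead of producing and discarding 900 rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
conn.executemany("INSERT INTO events (payload) VALUES (?)",
                 [(f"event-{i}",) for i in range(1, 1001)])

PAGE = 50

# OFFSET pagination: the database still generates and throws away the skipped rows
offset_page = conn.execute(
    "SELECT id FROM events ORDER BY id LIMIT ? OFFSET ?", (PAGE, 900)
).fetchall()

# Keyset pagination: an index range scan starts right after the last row seen
last_seen_id = 900
keyset_page = conn.execute(
    "SELECT id FROM events WHERE id > ? ORDER BY id LIMIT ?", (last_seen_id, PAGE)
).fetchall()

assert offset_page == keyset_page  # same page, far less work for the second query
print(keyset_page[0], keyset_page[-1])
```

The trade-off is that keyset pagination needs a stable ordering key and can only step forward page by page, not jump to an arbitrary page number.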
Connecting and using PostgreSQL database with psycopg2 [Python 2.7] — Dinesh Neupane
This presentation covers the basics of connecting to a PostgreSQL database from Python using the psycopg2 module.
Covered Topics:
1. Psycopg2 Installation
2. Connecting to PostgreSQL Database
3. Connection Parameters
4. Create and Drop Table
5. Adaptation of Python Values to SQL Types
6. SQL Transactions
7. DML
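Topics 4–7 above (DDL, value adaptation, transactions, DML) all follow Python's DB-API 2.0, which psycopg2 implements. The sketch below uses the standard library's sqlite3 module instead, so it runs without a PostgreSQL server; with psycopg2 the same code would use `psycopg2.connect(dsn)` and `%s` placeholders rather than `?`. The account names and amounts are made up:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
cur.executemany("INSERT INTO accounts VALUES (?, ?)",
                [("alice", 100), ("bob", 50)])
conn.commit()

# A transfer as one transaction: both UPDATEs commit together or not at all.
# Placeholders let the driver adapt Python values to SQL types safely.
try:
    cur.execute("UPDATE accounts SET balance = balance - 30 WHERE name = ?", ("alice",))
    cur.execute("UPDATE accounts SET balance = balance + 30 WHERE name = ?", ("bob",))
    conn.commit()
except sqlite3.Error:
    conn.rollback()

balances = dict(cur.execute("SELECT name, balance FROM accounts"))
print(balances)
```

If either UPDATE raised an error, the rollback would leave both balances untouched, which is the atomicity guarantee the transaction section of the talk covers.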
The document discusses Structured Query Language (SQL) and its basic statements. It covers:
- SQL is used to request and retrieve data from databases. The DBMS processes SQL queries and returns results.
- SQL statements are divided into DDL (data definition language) for managing schema, DML (data manipulation language) for data queries/modification, and DCL (data control language) for managing transactions and access control.
- The document provides examples of using SQL commands like CREATE TABLE, ALTER TABLE, DROP TABLE, INSERT, UPDATE, DELETE, SELECT and indexes. It also covers data types, constraints and operators used in SQL queries.
This document discusses Oracle database triggers. It defines triggers as stored procedures that are activated when certain conditions occur, such as when a data manipulation language (DML) statement is executed. Triggers can be used at the table, schema, or database level. Examples are provided that demonstrate using triggers to log changes to employee salaries or to validate data. Statement-level and row-level triggers are described. Triggers can call stored procedures, including Java procedures. The document also covers creating, viewing, disabling, and dropping triggers.
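The salary-logging example the document mentions can be sketched as a row-level trigger. The syntax below is SQLite's (run from Python so it needs no server); Oracle's equivalent uses `CREATE OR REPLACE TRIGGER` with `:OLD`/`:NEW` bind notation, but the structure, a FOR EACH ROW trigger with a WHEN condition writing to an audit table, is the same. Table names are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employees (id INTEGER PRIMARY KEY, name TEXT, salary REAL);
    CREATE TABLE salary_audit (emp_id INT, old_salary REAL, new_salary REAL);

    -- Row-level trigger: fires once per updated row, with OLD/NEW pseudo-rows
    CREATE TRIGGER log_salary_change
    AFTER UPDATE OF salary ON employees
    FOR EACH ROW WHEN OLD.salary <> NEW.salary
    BEGIN
        INSERT INTO salary_audit VALUES (OLD.id, OLD.salary, NEW.salary);
    END;

    INSERT INTO employees VALUES (1, 'Ada', 90000);
""")

conn.execute("UPDATE employees SET salary = 95000 WHERE id = 1")  # logged
conn.execute("UPDATE employees SET salary = 95000 WHERE id = 1")  # no change: WHEN filters it out

audit = conn.execute("SELECT * FROM salary_audit").fetchall()
print(audit)
```

Only the first UPDATE produces an audit row, because the WHEN clause skips updates that do not actually change the salary.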
This document provides an introduction to NoSQL and MongoDB. It discusses the challenges with relational databases and how NoSQL databases like MongoDB are better suited for unstructured and growing datasets. The document then covers MongoDB specifically, including its features, data types, installation, and usage with PHP. It provides examples of basic CRUD operations in MongoDB and references for further reading.
Session 8: connect your universal application with database .. builders & deve... — Moatasim Magdy
This document provides an overview of using SQLite database with C# and Universal Windows Platform (UWP) applications. It discusses why to use a database, the basic SQL queries like CREATE, SELECT, INSERT, UPDATE, DELETE. It then demonstrates how to connect a UWP app to a SQLite database, create and open the database, define and add records to tables, query and update records. The steps include adding SQLite references, installing SQLite packages, checking for database existence, creating and opening connections, executing queries to select, insert, update and delete records from tables.
Working with relational databases in C++ — corehard_by
The document discusses various C++ libraries for working with relational databases, including native database clients, third-party libraries, and what may be on the horizon. It covers libraries for PostgreSQL, MySQL, Oracle, Microsoft SQL Server, and others. It provides code examples for connecting to a database and executing queries using libraries like QtSQL, Poco::Data, OTL, SOCI, and Sqlpp11. It also mentions a proposed new library called cppstddb that aims to provide a standardized C++ interface for databases.
This document provides an overview and introduction to MySQL. It begins with a comparison of MySQL to Microsoft Access, noting MySQL's open source nature, availability on multiple platforms, and emphasis on fast query processing. It then covers how to connect to the MySQL server from the command line. The document spends the majority of its time reviewing SQL, including data definition commands to create tables and define data types, as well as data manipulation commands to select, insert, update and delete data. It also covers joining multiple tables, ordering and grouping results, and using aggregate functions. In summary, this document serves as an introduction and primer to the basics of MySQL and SQL.
This document provides an overview and introduction to MySQL. It begins with a comparison of MySQL to Microsoft Access, noting MySQL's open source nature, cross-platform availability, and emphasis on fast query processing. Basic commands for connecting to a MySQL server and exploring database and table structures are presented. The document then covers SQL data definition and manipulation languages, including creating and modifying tables, inserting and selecting data. Joins between tables and use of aggregate functions are also summarized. Overall, the document provides a high-level tour of MySQL's basic features and capabilities.
This document provides an introduction to SQL (Structured Query Language). SQL is a language used to define, query, modify, and control relational databases. The document outlines the main SQL commands for data definition (CREATE, ALTER, DROP), data manipulation (INSERT, UPDATE, DELETE), and data control (GRANT, REVOKE). It also discusses SQL data types, integrity constraints, and how to use SELECT statements to query databases using projections, selections, comparisons, logical conditions, and ordering. The FROM clause is introduced as specifying the relations involved in a query.
This document provides an overview of SQL programming including:
- A brief history of SQL and how it has evolved over time.
- Key SQL fundamentals like database structures, tables, relationships, and normalization.
- How to define and modify database structures using commands like CREATE, ALTER, DROP.
- How to manipulate data using INSERT, UPDATE, DELETE, and transactions.
- How to retrieve data using SELECT statements, joins, and other techniques.
- How to aggregate data using functions like SUM, AVG, MAX, MIN, and COUNT.
- Additional topics covered include subqueries, views, and resources for further learning.
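The aggregation bullet above can be condensed into one query. A minimal sketch with SQLite from the standard library (the `sales` table is hypothetical); the same statement runs unchanged on PostgreSQL, and it also illustrates the classic WHERE-versus-HAVING distinction:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)", [
    ("north", 100), ("north", 200),
    ("south", 50), ("south", 150), ("south", 100),
])

rows = conn.execute("""
    SELECT region,
           COUNT(*)    AS n,
           SUM(amount) AS total,
           AVG(amount) AS mean,
           MIN(amount) AS lo,
           MAX(amount) AS hi
    FROM sales
    GROUP BY region
    HAVING SUM(amount) > 0   -- HAVING filters groups; WHERE filters rows
    ORDER BY region
""").fetchall()
print(rows)
```

Each output row summarizes one group: the row count, sum, mean, minimum, and maximum per region.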
The document provides an overview of SQL commands and operations including:
1) Creating a database and table, inserting and selecting data, updating records with WHERE clauses.
2) Altering tables by adding or modifying columns and constraints.
3) Different SQL statements like SELECT, INSERT, UPDATE and DELETE and clauses like WHERE are discussed along with syntax and examples.
Cloud Migration Paths: Kubernetes, IaaS, or DBaaS — EDB
Moving to the cloud is hard, and moving Postgres databases to the cloud is even harder. Public cloud or private cloud? Infrastructure as a Service (IaaS), or Platform as a Service (PaaS)? Kubernetes for the application, or for the database and the application? This talk will juxtapose self-managed Kubernetes and container-based database solutions, Postgres deployments on IaaS, and Postgres DBaaS solutions of which EDB’s DBaaS BigAnimal is the latest example.
The 10 Best PostgreSQL Replication Strategies for Your Enterprise — EDB
This webinar will help you understand the differences between the various replication approaches, recognize the requirements of each strategy, and get a clear picture of what can be achieved with each one. With that, you will hopefully be better able to determine which types of PostgreSQL replication you really need for your system.
- How physical and logical replication work in PostgreSQL
- Differences between synchronous and asynchronous replication
- Advantages, drawbacks, and challenges of multi-master replication
- Which replication strategy is better suited for different use cases
Speaker:
Borys Neselovskyi, Regional Sales Engineer DACH, EDB
------------------------------------------------------------
For more #webinars, visit http://bit.ly/EDB-Webinars
Download free #PostgreSQL whitepapers: http://bit.ly/EDB-Whitepapers
Read our #Postgres Blog http://bit.ly/EDB-Blogs
Follow us on Facebook at http://bit.ly/EDB-FB
Follow us on Twitter at http://bit.ly/EDB-Twitter
Follow us on LinkedIn at http://bit.ly/EDB-LinkedIn
Reach us via email at marketing@enterprisedb.com
When looking for alternatives to Oracle in the cloud, making the switch can seem like hard work. We understand that migration involves more than just the database. Compatibility is a key point, especially when considering the resources you may already have invested in Oracle, such as Oracle-specific application code. This webinar will explore the options and the main considerations when moving from Oracle databases to the cloud.
- A detailed review of the database offerings available in the cloud
- Critical factors to consider when choosing the most suitable cloud offering
- How EDB's PostgreSQL expertise can help you with your decision
- A demonstration of EDB's BigAnimal
Presenter:
Sergio Romera, Senior Sales Engineer EMEA, EDB
------------------------------------------------------------
For more #webinars, visit http://bit.ly/EDB-Webinars
Download free #PostgreSQL whitepapers: http://bit.ly/EDB-Whitepapers
Read our #Postgres Blog http://bit.ly/EDB-Blogs
Follow us on Facebook at http://bit.ly/EDB-FB
Follow us on Twitter at http://bit.ly/EDB-Twitter
Follow us on LinkedIn at http://bit.ly/EDB-LinkedIn
Reach us via email at marketing@enterprisedb.com
This document provides an overview and demonstration of EnterpriseDB's Failover Manager (EFM). It begins with an overview of EFM's capabilities in ensuring high availability and minimizing downtime during database upgrades or maintenance. It then covers installation and configuration prerequisites, supported platforms, and the EFM architecture involving a primary, standby, and witness database nodes. The remainder demonstrates switchover and failover functionality through a live demo in a replication environment using CentOS 7.7 and EnterpriseDB PostgreSQL Advanced Server 13.
Databases like PostgreSQL can't run on Kubernetes. That is the refrain we hear over and over, and at the same time it is our motivation at EDB to tear down this wall once and for all.
In this webinar we will talk about our journey so far to bring PostgreSQL to Kubernetes. Learn why we believe that benchmarking the storage and the database before going into production leads to a healthier, longer life for a DBMS, even on Kubernetes.
We will share our process and the results obtained so far, and unveil our plans for the future of Cloud Native PostgreSQL.
The Variations of PostgreSQL Replication — EDB
Physical replication, logical replication, synchronous, asynchronous, multi-master, horizontal scalability, and so on: many terms are associated with database replication. In this talk we will review the fundamental concepts behind each variation of PostgreSQL replication, and in which cases it is best to use one or the other. The presentation includes a practical part with demonstrations, although it will not be a tutorial on how to configure a cluster. The focus is on understanding each variation in order to choose the best one for the use case.
What you will learn:
- How physical replication works in PostgreSQL
- How logical replication works in PostgreSQL
- Differences between synchronous and asynchronous replication
- What multi-master replication is
NoSQL and Spatial Database Capabilities using PostgreSQL — EDB
PostgreSQL is an object-relational database system. NoSQL on the other hand is a non-relational database and is document-oriented. Learn how the PostgreSQL database gives one the flexible options to combine NoSQL workloads with the relational query power by offering JSON data types. With PostgreSQL, new capabilities can be developed and plugged into the database as required.
Attend this webinar to learn:
- The new features and capabilities in PostgreSQL for new workloads, requiring greater flexibility in the data model
- NoSQL with JSON, Hstore and its performance and features for enterprises
- Spatial SQL - advanced features in PostGIS application with PostGIS extension
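Querying inside JSON documents with plain SQL, as described above, looks like this. The sketch uses SQLite's JSON1 functions from Python's standard library as a stand-in (availability depends on the SQLite build, though virtually all modern builds include it); in PostgreSQL you would use a `jsonb` column and operators such as `attrs->>'kind'` instead of `json_extract`. The `devices` schema is hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE devices (id INTEGER PRIMARY KEY, attrs TEXT)")  # JSON stored as text
conn.executemany("INSERT INTO devices (attrs) VALUES (?)", [
    ('{"kind": "sensor", "range_m": 30, "tags": ["indoor"]}',),
    ('{"kind": "camera", "resolution": "4k"}',),
    ('{"kind": "sensor", "range_m": 100}',),
])

# Filter and project on fields inside the documents, with no fixed schema for the attributes
rows = conn.execute("""
    SELECT id, json_extract(attrs, '$.range_m') AS range_m
    FROM devices
    WHERE json_extract(attrs, '$.kind') = 'sensor'
    ORDER BY id
""").fetchall()
print(rows)
```

Each document can carry a different set of keys, which is the "NoSQL workload inside a relational database" pattern the webinar describes.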
Why use PgBouncer? It’s a lightweight, easy to configure connection pooler and it does one job well. As you’d expect from a talk on connection pooling, we’ll give a brief summary of connection pooling and why it increases efficiency. We’ll look at when not to use connection pooling, and we’ll demonstrate how to configure PgBouncer and how it works. But did you know you can also do this?
1. Scaling PgBouncer: PgBouncer is single threaded, which means a single instance of PgBouncer isn’t going to do you much good on a multi-threaded and/or multi-CPU machine. We’ll show you how to add more PgBouncer instances so you can use more than one thread for easy scaling.
2. Read-write / read-only routing: Using different PgBouncer databases you can route read-write traffic to the primary database and route read-only traffic to a number of standby databases.
3. Load balancing: When we use multiple PgBouncer instances, load balancing comes for free. Load balancing can be directed to different standbys, and weighted according to ratios of load.
4. Silent failover: You can perform silent failover during promotion of a new primary (assuming you have a VIP/DNS etc. that always points to the primary).
5. And even: DoS prevention and protection from “badly behaved” applications! By using distinct port numbers you can provide database connections which deal with sudden bursts of incoming traffic in very different ways, which can help prevent the database from becoming swamped during high activity periods.
You should leave the presentation wondering if there is anything PgBouncer can’t do.
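The read-write/read-only routing in point 2 is configured by defining two PgBouncer "databases" that point at different hosts. A minimal `pgbouncer.ini` sketch; the host addresses and database name are placeholders, and every setting shown is a standard PgBouncer parameter:

```ini
[databases]
; read-write traffic -> primary
app_rw = host=10.0.0.1 port=5432 dbname=app
; read-only traffic -> a standby
app_ro = host=10.0.0.2 port=5432 dbname=app

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = scram-sha-256
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; a server connection is reused per transaction
max_client_conn = 1000       ; client connections PgBouncer will accept
default_pool_size = 20       ; server connections per database/user pair
```

Applications then connect to `app_rw` or `app_ro` on port 6432 depending on whether they need to write, while PgBouncer multiplexes the small server-side pool behind each alias.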
In this talk I'll discuss how we can combine the power of PostgreSQL with TensorFlow to perform data analysis. By using the pl/python3 procedural language we can integrate machine learning libraries such as TensorFlow with PostgreSQL, opening the door for powerful data analytics combining SQL with AI. Typical use-cases might involve regression analysis to find relationships in an existing dataset and to predict results based on new inputs, or to analyse time series data and extrapolate future data taking into account general trends and seasonal variability whilst ignoring noise. Python is an ideal language for building custom systems to do this kind of work as it gives us access to a rich ecosystem of libraries such as Pandas and Numpy, in addition to TensorFlow itself.
Practical Partitioning in Production with Postgres — EDB
Has your table become too large to handle? Have you thought about chopping it up into smaller pieces that are easier to query and maintain? What if it's in constant use? An introduction to the problems that can arise and how PostgreSQL's partitioning features can help, followed by a real-world scenario of partitioning an existing huge table on a live system. We will be looking at the problems caused by having very large tables in your database and how declarative table partitioning in Postgres can help. Also, how to perform dimensioning before but also after creating huge tables, partitioning key selection, the importance of upgrading to get the latest Postgres features and finally we will dive into a real-world scenario of having to partition an existing huge table in use on a production system.
There have been plenty of “explaining EXPLAIN” type talks over the years, which provide a great introduction to it. They often also cover how to identify a few of the more common issues through it. EXPLAIN is a deep topic though, and to do a good introduction talk, you have to skip over a lot of the tricky bits. As such, this talk will not be a good introduction to EXPLAIN, but instead a deeper dive into some of the things most don’t cover.
The idea is to start with some of the more complex and unintuitive calculations needed to work out the relationships between operations, rows, threads, loops, timings, buffers, CTEs and subplans. Most popular tools handle at least several of these well, but there are cases where they don’t that are worth being conscious of and alert to. For example, we’ll have a look at whether certain numbers are averaged per-loop or per-thread, or both. We’ll also cover a resulting rounding issue or two to be on the lookout for. Finally, some per-operation timing quirks are worth looking out for where CTEs and subqueries are concerned, for example CTEs that are referenced more than once.
As time allows, we can also look at a few rarer issues that can be spotted via EXPLAIN, as well as a few more gotchas that we’ve picked up along the way. This includes things like spotting when the query is JIT, planning, or trigger time dominated, spotting the signs of table and index bloat, issues like lossy bitmap scans or index-only scans fetching from the heap, as well as some things to be aware of when using auto_explain.
This document provides an overview of using PostgreSQL for IoT applications. Chris Ellis discusses why PostgreSQL is a good fit for IoT due to its flexibility and extensibility. He describes various ways of storing, loading, and processing IoT time series and sensor data in PostgreSQL, including partitioning, batch loading, and window functions. The document also briefly mentions the TimescaleDB extension for additional time series functionality.
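The window functions mentioned for processing sensor data typically look like a moving average over a time series. A minimal sketch using SQLite's window-function support from Python's standard library (requires SQLite 3.25+, bundled with modern Python builds); the `readings` table and values are made up, and the same query runs unchanged on PostgreSQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts INTEGER, temp REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(1, 10.0), (2, 12.0), (3, 11.0), (4, 20.0), (5, 13.0)])

# 3-point moving average: each row averaged with the two readings before it
rows = conn.execute("""
    SELECT ts, temp,
           AVG(temp) OVER (ORDER BY ts ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
               AS moving_avg
    FROM readings ORDER BY ts
""").fetchall()
for r in rows:
    print(r)
```

The sliding frame smooths out the spike at `ts = 4`, which is the kind of noise reduction one wants before trending IoT sensor data.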
The document describes a migration from an Oracle database topology to a PostgreSQL database topology at ACI. It discusses the starting Oracle topology with issues around operational complexity and non-ACID compliance. It then describes the target PostgreSQL topology with improved performance, availability and lower costs. The document outlines decisions around tools, extensions, code changes and testing approaches needed for the migration. It also discusses options for migrating the data and cutting over to the new PostgreSQL environment.
The document provides an introduction to using the psql command line tool for interacting with PostgreSQL databases. It explains how to connect to a database, perform basic queries, explain query plans, and get information about tables, schemas, and users.
EDB 13 - New Enhancements for Security and Usability - APJ — EDB
Database security is always of paramount importance to all organizations. In this webinar, we will explore the security, usability, and portability updates of the latest version of the EDB database server and tools.
Join us in this webinar to learn:
- The new security features such as SCRAM and the encryption of database passwords and traffic between Failover Manager agents
- Usability updates that automate partitioning, verify backup integrity, and streamline the management of failover and backups
- Portability improvements that simplify running PostgreSQL across on-premise and cloud environments
In this webinar we will discuss the differences between a physical backup and a logical backup. We will list the advantages and disadvantages, the main considerations, and the tools available for both methods.
- Data loss
- Logical exports
- Standbys
- WALs and recovery
- VM/disk snapshots
- Physical backups
- Conclusion
Come and discover Cloud Native PostgreSQL (CNP), the operator for Kubernetes, directly from the people who designed it and develop it at EDB.
CNP makes it easy to integrate PostgreSQL databases with your applications inside Kubernetes clusters and Red Hat OpenShift Container Platform, thanks to its automated management of the primary/standby architecture, which includes self-healing, failover, switchover, rolling updates, backups, and more.
During the webinar we will cover the following points:
- DevOps and Cloud Native
- Introduction to Cloud Native PostgreSQL
- Architectures
- Main features
- Usage and configuration examples
- Kubernetes, storage, and Postgres
- Demo
- Conclusions
New enhancements for security and usability in EDB 13 — EDB
EDB 13 enhances our flagship database server and tools. This webinar will explore its security, usability, and portability updates. Join us to learn how EDB 13 can help you improve your PostgreSQL productivity and data protection.
Webinar highlights include:
- New security features such as SCRAM and the encryption of database passwords and traffic between Failover Manager agents
- Usability updates that automate partitioning, verify backup integrity and streamline the management of failover and backups
- Portability improvements that simplify running PostgreSQL across on-premise and cloud environments
The webinar will review a multi-layered framework for PostgreSQL security, with a deeper focus on limiting access to the database and data, as well as securing the data.
Using the popular AAA (Authentication, Authorization, Auditing) framework we will cover:
- Best practices for authentication (trust, certificate, MD5, Scram, etc).
- Advanced approaches, such as password profiles.
- Deep dive of authorization and data access control for roles, database objects (tables, etc), view usage, row-level security, and data redaction.
- Auditing, encryption, and SQL injection attack prevention.
Note: this session is delivered in German
Speaker:
Borys Neselovskyi, Sales Engineer, EDB
EDB Cloud Native Postgres includes database container images and a Kubernetes Operator that manage the lifecycle of a database from deployment to operations. This Kubernetes Operator for Postgres is written by EDB entirely from scratch in the Go language and relies exclusively on the Kubernetes API.
Attend this webinar to learn about:
- DevOps & Cloud Native
- Overview of Cloud Native Postgres
- Storage for Postgres workloads in Kubernetes
- Using Cloud Native Postgres
- Demo
HCL Notes and Domino License Cost Reduction in the World of DLAU — panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX model have been a hot topic for many in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you surely want to stay within your budget and save costs wherever possible. We understand that, and we want to help you do it!
We will explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some approaches that can lead to unnecessary spending, for example using a person document instead of a mail-in for shared mailboxes. We will show you such cases and their solutions. And of course we will explain the new licensing model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It will give you the tools and the know-how to keep an overview. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
These topics will be covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to make the best use of it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices you can put into action right away
Building Production Ready Search Pipelines with Spark and Milvus — Zilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
What do a Lego brick and the XZ backdoor have in common? — Speck&Tech
ABSTRACT: At first glance, a Lego brick and the XZ backdoor might have in common the fact that they are both building blocks, or dependencies, of creative and software projects. The reality is that a Lego brick and the case of the XZ backdoor have much more in common than that.
Join the presentation to dive into a story of interoperability, standards, and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: An advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several events, migrations, and training activities related to LibreOffice. She previously worked on LibreOffice migrations and training courses for several public administrations and private companies. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not following her passion for computers and for Geeko, she cultivates her curiosity about astronomy (which is where her nickname deneb_alpha comes from).
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Monitoring and Managing Anomaly Detection on OpenShift.pdfTosin Akinosho
Monitoring and Managing Anomaly Detection on OpenShift
Overview
Dive into the world of anomaly detection on edge devices with our comprehensive hands-on tutorial. This SlideShare presentation will guide you through the entire process, from data collection and model training to edge deployment and real-time monitoring. Perfect for those looking to implement robust anomaly detection systems on resource-constrained IoT/edge devices.
Key Topics Covered
1. Introduction to Anomaly Detection
- Understand the fundamentals of anomaly detection and its importance in identifying unusual behavior or failures in systems.
2. Understanding Edge (IoT)
- Learn about edge computing and IoT, and how they enable real-time data processing and decision-making at the source.
3. What is ArgoCD?
- Discover ArgoCD, a declarative, GitOps continuous delivery tool for Kubernetes, and its role in deploying applications on edge devices.
4. Deployment Using ArgoCD for Edge Devices
- Step-by-step guide on deploying anomaly detection models on edge devices using ArgoCD.
5. Introduction to Apache Kafka and S3
- Explore Apache Kafka for real-time data streaming and Amazon S3 for scalable storage solutions.
6. Viewing Kafka Messages in the Data Lake
- Learn how to view and analyze Kafka messages stored in a data lake for better insights.
7. What is Prometheus?
- Get to know Prometheus, an open-source monitoring and alerting toolkit, and its application in monitoring edge devices.
8. Monitoring Application Metrics with Prometheus
- Detailed instructions on setting up Prometheus to monitor the performance and health of your anomaly detection system.
9. What is Camel K?
- Introduction to Camel K, a lightweight integration framework built on Apache Camel, designed for Kubernetes.
10. Configuring Camel K Integrations for Data Pipelines
- Learn how to configure Camel K for seamless data pipeline integrations in your anomaly detection workflow.
11. What is a Jupyter Notebook?
- Overview of Jupyter Notebooks, an open-source web application for creating and sharing documents with live code, equations, visualizations, and narrative text.
12. Jupyter Notebooks with Code Examples
- Hands-on examples and code snippets in Jupyter Notebooks to help you implement and test anomaly detection models.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
Ivanti’s Patch Tuesday breakdown goes beyond patching your applications and brings you the intelligence and guidance needed to prioritize where to focus your attention first. Catch early analysis on our Ivanti blog, then join industry expert Chris Goettl for the Patch Tuesday Webinar Event. There we’ll do a deep dive into each of the bulletins and give guidance on the risks associated with the newly-identified vulnerabilities.
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
AI 101: An Introduction to the Basics and Impact of Artificial IntelligenceIndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Ocean lotus Threat actors project by John Sitima 2024 (1).pptxSitimaJohn
Ocean Lotus cyber threat actors represent a sophisticated, persistent, and politically motivated group that poses a significant risk to organizations and individuals in the Southeast Asian region. Their continuous evolution and adaptability underscore the need for robust cybersecurity measures and international cooperation to identify and mitigate the threats posed by such advanced persistent threat groups.
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
National Security Agency - NSA mobile device best practices
Data Processing Inside PostgreSQL
1. Processing Data Inside PostgreSQL
BRUCE MOMJIAN,
ENTERPRISEDB
February, 2009
Abstract
There are indisputable advantages of doing data processing in the database
rather than in each application. This presentation explores the ability to push
data processing into the database using SQL, functions, triggers, and the
object-relational features of POSTGRESQL.
Creative Commons Attribution License http://momjian.us/presentations
2. Pre-SQL Data Access
No one wants to return to this era:
Complex cross-table access
Single index
No optimizer
Simple WHERE processing
No aggregation
Processing Data Inside PostgreSQL 1
3. SQL Data Access
You probably take these for granted:
Easy cross-table access, with optimizer assistance
Complex WHERE processing
Transaction Control
Concurrency
Portable language (SQL)
11. Unique Test in an Application
BEGIN;
LOCK tab;
SELECT ... WHERE col = key;
if not found
INSERT (or UPDATE)
COMMIT;
12. UNIQUE Constraint
CREATE TABLE tab
(
col ... UNIQUE
);
CREATE TABLE customer (id INTEGER UNIQUE);
13. Preventing NULLs
if (col != NULL)
INSERT/UPDATE;
14. NOT NULL Constraint
CREATE TABLE tab
(
col ... NOT NULL
);
CREATE TABLE customer (name TEXT NOT NULL);
15. Primary Key Constraint
UNIQUE
NOT NULL
CREATE TABLE customer (id INTEGER PRIMARY KEY);
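To see why declaring UNIQUE, NOT NULL, and PRIMARY KEY beats application-side probing, here is a minimal runnable sketch. It is not from the original deck: it drives SQLite from Python's sqlite3 module purely for illustration (the constraint syntax is shared); PostgreSQL enforces these the same way.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL)")
conn.execute("INSERT INTO customer VALUES (1, 'Alice')")

# A duplicate key is rejected by the database itself -- no application
# needs to LOCK the table and probe for the key first
try:
    conn.execute("INSERT INTO customer VALUES (1, 'Bob')")
    duplicate_accepted = True
except sqlite3.IntegrityError:
    duplicate_accepted = False

# A NULL in a NOT NULL column is rejected the same way
try:
    conn.execute("INSERT INTO customer VALUES (?, ?)", (2, None))
    null_accepted = True
except sqlite3.IntegrityError:
    null_accepted = False
```

Every client connection gets identical enforcement, which is the consistency argument the slides are making.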
16. Ensuring Table Linkage
Foreign -> Primary
BEGIN;
SELECT *
FROM primary
WHERE key = col
FOR UPDATE;
if found
INSERT (or UPDATE) INTO foreign;
COMMIT;
17. Ensuring Table Linkage
Primary -> Foreign
BEGIN;
SELECT *
FROM foreign
WHERE col = key
FOR UPDATE;
if found
?
UPDATE/DELETE primary;
COMMIT;
18. Ensuring Table Linkage
Example
CREATE TABLE statename (
code CHAR(2) PRIMARY KEY,
name VARCHAR(30)
);
CREATE TABLE customer
(
customer_id INTEGER,
name VARCHAR(30),
telephone VARCHAR(20),
street VARCHAR(40),
city VARCHAR(25),
state CHAR(2) REFERENCES statename,
zipcode CHAR(10),
country VARCHAR(20)
);
19. Ensuring Table Linkage
Larger Example
CREATE TABLE customer
(
customer_id INTEGER PRIMARY KEY,
name VARCHAR(30),
telephone VARCHAR(20),
street VARCHAR(40),
city VARCHAR(25),
state CHAR(2),
zipcode CHAR(10),
country VARCHAR(20)
);
CREATE TABLE employee
(
employee_id INTEGER PRIMARY KEY,
name VARCHAR(30),
hire_date DATE
);
CREATE TABLE part (
21. Ensuring Table Linkage
Prevent Change to Primary
BEGIN;
SELECT ...
FROM foreign
WHERE col = key
FOR UPDATE;
IF found
ABORT;
UPDATE/DELETE primary;
COMMIT;
22. Ensuring Table Linkage
REFERENCES Constraint
NO ACTION/RESTRICT (default)
CREATE TABLE foreign
(
col ... REFERENCES primary (col)
ON UPDATE NO ACTION -- not required
ON DELETE NO ACTION -- not required
);
23. Ensuring Table Linkage
Cascade Change to Primary
BEGIN;
SELECT ...
FROM foreign
WHERE col = key
FOR UPDATE;
IF found
UPDATE/DELETE foreign;
UPDATE/DELETE primary;
COMMIT;
24. Ensuring Table Linkage
REFERENCES Constraint
CASCADE
CREATE TABLE foreign
(
col ... REFERENCES primary (col)
ON UPDATE CASCADE
ON DELETE CASCADE
);
25. Ensuring Table Linkage
Set Foreign to NULL on Change to Primary
BEGIN;
SELECT ...
FROM foreign
WHERE col = key
FOR UPDATE;
IF found
UPDATE foreign SET col = NULL;
UPDATE/DELETE primary;
COMMIT;
26. Ensuring Table Linkage
REFERENCES Constraint
SET NULL
CREATE TABLE foreign
(
col ... REFERENCES primary (col)
ON UPDATE SET NULL
ON DELETE SET NULL
);
27. Ensuring Table Linkage
Set Foreign to DEFAULT on Change to Primary
BEGIN;
SELECT ...
FROM foreign
WHERE col = key
FOR UPDATE;
IF found
UPDATE foreign SET col = DEFAULT;
UPDATE/DELETE primary;
COMMIT;
28. Ensuring Table Linkage
REFERENCES Constraint
SET DEFAULT
CREATE TABLE foreign
(
col ... REFERENCES primary (col)
ON UPDATE SET DEFAULT
ON DELETE SET DEFAULT
);
CREATE TABLE order (cust_id INTEGER REFERENCES customer (id));
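The REFERENCES actions above can be exercised with a short runnable sketch (not from the deck; SQLite from Python stands in for PostgreSQL, and unlike PostgreSQL it needs an explicit pragma to enforce foreign keys):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite requires opt-in FK enforcement

conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE orders (
        order_id INTEGER PRIMARY KEY,
        cust_id  INTEGER REFERENCES customer (id) ON DELETE CASCADE
    )""")
conn.execute("INSERT INTO customer VALUES (1)")
conn.execute("INSERT INTO orders VALUES (100, 1)")

# REFERENCES rejects an order pointing at a customer that does not exist
try:
    conn.execute("INSERT INTO orders VALUES (101, 99)")
    orphan_accepted = True
except sqlite3.IntegrityError:
    orphan_accepted = False

# ON DELETE CASCADE removes dependent orders with no application code at all
conn.execute("DELETE FROM customer WHERE id = 1")
orders_left = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
```

This replaces the whole BEGIN / SELECT ... FOR UPDATE / test / COMMIT dance shown on the earlier slides.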
29. Controlling Data
if col > 0 ...
(col = 2 OR col = 7) ...
length(col) < 10 ...
INSERT/UPDATE tab;
34. Auto-numbering Column
CREATE TABLE counter (curr INTEGER);
INSERT INTO counter VALUES (1);
...
BEGIN;
val = SELECT curr FROM counter FOR UPDATE;
UPDATE counter SET curr = curr + 1;
COMMIT;
INSERT INTO tab VALUES (... val ...);
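The manual counter pattern can be sketched as follows (illustrative only, not from the deck; SQLite serializes writers inside a transaction, standing in for the SELECT ... FOR UPDATE row lock shown above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counter (curr INTEGER)")
conn.execute("INSERT INTO counter VALUES (1)")
conn.commit()

def next_id(conn):
    # Read and advance the counter inside one transaction so two sessions
    # can never be handed the same value
    with conn:  # commits on success, rolls back on error
        val = conn.execute("SELECT curr FROM counter").fetchone()[0]
        conn.execute("UPDATE counter SET curr = curr + 1")
    return val

ids = [next_id(conn) for _ in range(3)]
```

PostgreSQL's SERIAL and sequences, shown next, replace all of this with a single column default.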
35. SERIAL/Sequence
CREATE TABLE tab
(
col SERIAL
);
CREATE TABLE tab
(
col INTEGER DEFAULT nextval('tab_col_seq')
);
CREATE TABLE customer (id SERIAL);
CREATE SEQUENCE customer_id_seq;
CREATE TABLE customer (id INTEGER DEFAULT nextval('customer_id_seq'));
36. Constraint Macros
DOMAIN
CREATE DOMAIN phone AS
CHAR(12) CHECK (VALUE ~ '^[0-9]{3}-[0-9]{3}-[0-9]{4}$');
CREATE TABLE company ( ... phnum phone, ...);
37. Using SELECT's Features
38. ANSI Outer Joins - LEFT OUTER
SELECT *
FROM tab1, tab2
WHERE tab1.col = tab2.col
UNION
SELECT *
FROM tab1
WHERE col NOT IN
(
SELECT tab2.col
FROM tab2
);
SELECT *
FROM tab1 LEFT JOIN tab2 ON tab1.col = tab2.col;
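A runnable comparison of the two forms (a sketch, not from the deck; SQLite from Python, with NULL padding added to the hand-rolled branch so both queries return rows of the same shape):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tab1 (col INTEGER);
    CREATE TABLE tab2 (col INTEGER);
    INSERT INTO tab1 VALUES (1), (2), (3);
    INSERT INTO tab2 VALUES (2), (3), (4);
""")

# Hand-rolled outer join: matches, plus unmatched tab1 rows padded with NULL
manual = conn.execute("""
    SELECT tab1.col, tab2.col FROM tab1, tab2 WHERE tab1.col = tab2.col
    UNION
    SELECT col, NULL FROM tab1 WHERE col NOT IN (SELECT col FROM tab2)
    ORDER BY 1
""").fetchall()

# Declarative version: one LEFT JOIN produces the identical result,
# and the optimizer picks the join strategy
joined = conn.execute("""
    SELECT tab1.col, tab2.col
    FROM tab1 LEFT JOIN tab2 ON tab1.col = tab2.col
    ORDER BY 1
""").fetchall()
```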
39. ANSI Outer Joins - RIGHT OUTER
SELECT *
FROM tab1, tab2
WHERE tab1.col = tab2.col
UNION
SELECT *
FROM tab2
WHERE col NOT IN
(
SELECT tab1.col
FROM tab1
);
SELECT *
FROM tab1 RIGHT JOIN tab2 ON tab1.col = tab2.col;
40. ANSI Outer Joins - FULL OUTER
SELECT *
FROM tab1, tab2
WHERE tab1.col = tab2.col
UNION
SELECT *
FROM tab1
WHERE col NOT IN
(
SELECT tab2.col
FROM tab2
)
UNION
SELECT *
FROM tab2
WHERE col NOT IN
(
SELECT tab1.col
FROM tab1
);
SELECT *
FROM tab1 FULL JOIN tab2 ON tab1.col = tab2.col;
41. ANSI Outer Join Example
SELECT *
FROM customer LEFT JOIN order ON customer.id = order.cust_id;
42. Aggregates
SUM()
total = 0
FOREACH val IN set
total = total + val;
END FOREACH
SELECT SUM(val) FROM tab;
43. Aggregates
MAX()
max = MIN_VAL;
FOREACH val IN set
if (val > max)
max = val;
END FOREACH
SELECT MAX(val) FROM tab;
SELECT MAX(cost) FROM part;
44. Aggregates
GROUP BY SUM()
qsort(set)
save = '';
total = 0;
FOREACH (val, amt) IN set
if val != save
{
if save != ''
print save, total;
save = val;
total = 0;
}
total = total + amt;
END FOREACH
if save != ''
print save, total;
SELECT val, SUM(amt) FROM tab GROUP BY val;
45. Aggregates
GROUP BY MAX()
save = '';
max = MIN_VAL;
FOREACH (val, amt) IN set
if val != save
{
if save != ''
print save, max;
save = val;
max = MIN_VAL;
}
if (amt > max)
max = amt;
END FOREACH
if save != ''
print save, max;
SELECT val, MAX(amt) FROM tab GROUP BY val;
46. Aggregates
GROUP BY Examples
SELECT part, COUNT(*)
FROM order
GROUP BY part;
SELECT cust_id, SUM(due)
FROM order
GROUP BY cust_id
ORDER BY 2 DESC;
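The loop-versus-GROUP BY contrast above, as a runnable sketch (not from the deck; SQLite from Python with hypothetical cust_id/due data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (cust_id INTEGER, due REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(1, 10.0), (1, 5.0), (2, 20.0)])

# Application-side: fetch every row and total by hand, as in the pseudocode
totals = {}
for cust_id, due in conn.execute("SELECT cust_id, due FROM orders"):
    totals[cust_id] = totals.get(cust_id, 0) + due

# Database-side: the same grouping collapses to one declarative statement
grouped = dict(conn.execute(
    "SELECT cust_id, SUM(due) FROM orders GROUP BY cust_id"))
```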
47. Merging SELECTs
UNION
SELECT * INTO TEMP out FROM ...
INSERT INTO out SELECT ...
INSERT INTO out SELECT ...
SELECT DISTINCT ...
SELECT *
UNION
SELECT *
UNION
SELECT *;
48. Joining SELECTs
INTERSECT
SELECT * INTO TEMP out;
DELETE FROM out WHERE out.* NOT IN (SELECT ...);
DELETE FROM out WHERE out.* NOT IN (SELECT ...);
SELECT *
INTERSECT
SELECT *
INTERSECT
SELECT *;
49. Subtracting SELECTs
EXCEPT
SELECT * INTO TEMP out;
DELETE FROM out WHERE out.* IN (SELECT ...);
DELETE FROM out WHERE out.* IN (SELECT ...);
SELECT *
EXCEPT
SELECT *
EXCEPT
SELECT *;
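All three compound operators in one runnable sketch (not from the deck; SQLite from Python, two small hypothetical tables). Each operator replaces the temp-table-and-DELETE dance shown on the preceding slides:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE a (x INTEGER);
    CREATE TABLE b (x INTEGER);
    INSERT INTO a VALUES (1), (2), (3);
    INSERT INTO b VALUES (2), (3), (4);
""")

def rows(sql):
    # Flatten single-column results into a plain list
    return [r[0] for r in conn.execute(sql)]

union_result     = rows("SELECT x FROM a UNION     SELECT x FROM b ORDER BY 1")
intersect_result = rows("SELECT x FROM a INTERSECT SELECT x FROM b ORDER BY 1")
except_result    = rows("SELECT x FROM a EXCEPT    SELECT x FROM b ORDER BY 1")
```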
50. Controlling Rows Returned
LIMIT/OFFSET
DECLARE limdemo CURSOR FOR SELECT ...
FOR i = 1 to 5
FETCH IN limdemo
END FOR
SELECT *
LIMIT 5;
DECLARE limdemo CURSOR FOR SELECT ...
MOVE 20 IN limdemo
FOR i = 1 to 5
FETCH IN limdemo;
END FOR
SELECT *
OFFSET 20 LIMIT 5;
51. Controlling Rows Returned
LIMIT/OFFSET Example
SELECT order_id, balance
FROM order
ORDER BY balance DESC
LIMIT 10;
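A runnable LIMIT/OFFSET sketch (not from the deck; SQLite from Python with thirty hypothetical orders):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id INTEGER, balance REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 10.0) for i in range(1, 31)])

# Three highest balances: the database stops after three rows, instead of
# the application walking a cursor over all thirty and discarding the rest
top = [r[0] for r in conn.execute(
    "SELECT order_id FROM orders ORDER BY balance DESC LIMIT 3")]

# OFFSET skips ahead, the usual building block for paginated displays
page2 = [r[0] for r in conn.execute(
    "SELECT order_id FROM orders ORDER BY balance DESC LIMIT 3 OFFSET 3")]
```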
52. Locking SELECT Rows
FOR UPDATE
BEGIN;
LOCK tab;
SELECT * FROM customer WHERE id = 4452;
UPDATE customer SET balance = 0 WHERE id = 4452;
COMMIT;
BEGIN;
SELECT *
FROM customer
WHERE id = 4452
FOR UPDATE;
...
UPDATE customer
SET balance = 0
WHERE id = 4452;
COMMIT;
54. Temporary Tables
CREATE TABLE tab (...);
...
DROP TABLE tab;
CREATE TEMP TABLE tab (...);
SELECT *
INTO TEMPORARY hold
FROM tab1, tab2, tab3
WHERE ...
55. Automatically Modify SELECT
VIEW - One Column
SELECT col4
FROM tab;
CREATE VIEW view1 AS
SELECT col4
FROM tab;
SELECT * FROM view1;
56. Automatically Modify SELECT
VIEW - One Row
SELECT *
FROM tab
WHERE col = 'ISDN';
CREATE VIEW view2 AS
SELECT *
FROM tab
WHERE col = 'ISDN';
SELECT * FROM view2;
57. Automatically Modify SELECT
VIEW - One Field
SELECT col4
FROM tab
WHERE col = 'ISDN';
CREATE VIEW view3 AS
SELECT col4
FROM tab
WHERE col = 'ISDN';
SELECT * FROM view3;
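The view examples above can be run end to end (a sketch, not from the deck; SQLite from Python with hypothetical tab data):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tab (col TEXT, col4 INTEGER)")
conn.executemany("INSERT INTO tab VALUES (?, ?)",
                 [("ISDN", 1), ("DSL", 2), ("ISDN", 3)])

# The view stores the column list and WHERE clause in the database, so
# every application queries the same restricted slice of tab
conn.execute("CREATE VIEW view3 AS SELECT col4 FROM tab WHERE col = 'ISDN'")

result = [r[0] for r in conn.execute("SELECT * FROM view3 ORDER BY 1")]
```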
58. Automatically Modify
INSERT/UPDATE/DELETE
Rules
INSERT INTO tab1 VALUES (...);
INSERT INTO tab2 VALUES (...);
CREATE RULE insert_tab1 AS ON INSERT TO tab1 DO
INSERT INTO tab2 VALUES (...);
INSERT INTO tab1 VALUES (...);
60. Rules Example - Rule Definition
CREATE RULE service_request_update AS -- UPDATE rule
ON UPDATE TO service_request
DO
INSERT INTO service_request_log (customer_id, description, mod_type)
VALUES (old.customer_id, old.description, 'U');
CREATE RULE service_request_delete AS -- DELETE rule
ON DELETE TO service_request
DO
INSERT INTO service_request_log (customer_id, description, mod_type)
VALUES (old.customer_id, old.description, 'D');
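SQLite has no CREATE RULE, so this sketch (not from the deck) gets the same audit-log effect with an AFTER DELETE trigger; the PostgreSQL rule above rewrites the statement instead, but the observable result is the same logged row:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE service_request (customer_id INTEGER, description TEXT);
    CREATE TABLE service_request_log (customer_id INTEGER, description TEXT,
                                      mod_type CHAR(1));

    CREATE TRIGGER service_request_delete
    AFTER DELETE ON service_request
    BEGIN
        INSERT INTO service_request_log
        VALUES (old.customer_id, old.description, 'D');
    END;
""")

conn.execute("INSERT INTO service_request VALUES (1, 'line is down')")
conn.execute("DELETE FROM service_request WHERE customer_id = 1")
log = conn.execute("SELECT * FROM service_request_log").fetchall()
```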
61. Multi-User Consistency
Atomic Changes
Atomic Visibility
Atomic Consistency
Reliability
User 1: BEGIN WORK -- User 1 starts a transaction
User 1: UPDATE acct SET balance = balance - 100 WHERE acctno = 53224 -- remove 100 from an account
User 1: UPDATE acct SET balance = balance + 100 WHERE acctno = 94913 -- add 100 to another account
User 1: SELECT * FROM acct -- sees both changes
User 2: SELECT * FROM acct -- sees no changes
User 1: COMMIT WORK
User 2: SELECT * FROM acct -- sees both changes
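The atomicity half of this can be shown in a few lines (a sketch, not from the deck; SQLite from Python, simulating a failure between the withdrawal and the deposit):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE acct (acctno INTEGER PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO acct VALUES (?, ?)",
                 [(53224, 500.0), (94913, 0.0)])
conn.commit()

# Withdraw, then "crash" before the matching deposit and COMMIT
try:
    conn.execute(
        "UPDATE acct SET balance = balance - 100 WHERE acctno = 53224")
    raise RuntimeError("simulated crash mid-transfer")
except RuntimeError:
    conn.rollback()  # the withdrawal is undone: the transfer is all or nothing

balances = dict(conn.execute("SELECT acctno, balance FROM acct"))
```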
71. Shipping Cost Function
CREATE FUNCTION shipping(numeric)
RETURNS numeric
AS 'SELECT CASE
WHEN $1 < 2 THEN CAST(3.00 AS numeric(8,2))
WHEN $1 >= 2 AND $1 < 4 THEN CAST(5.00 AS numeric(8,2))
WHEN $1 >= 4 THEN CAST(6.00 AS numeric(8,2))
END;'
LANGUAGE 'sql';
INSERT ... VALUES ( ... cost + shipping(cost) ... );
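An illustrative stand-in (not from the deck): SQLite lets Python register a function that then runs inside SQL, which mimics how the server-side shipping() above keeps the calculation next to the data so every INSERT computes it identically:

```python
import sqlite3

def shipping(weight):
    # Same price bands as the SQL CASE above
    if weight < 2:
        return 3.00
    elif weight < 4:
        return 5.00
    return 6.00

conn = sqlite3.connect(":memory:")
conn.create_function("shipping", 1, shipping)

conn.execute("CREATE TABLE part (cost REAL, weight REAL)")
conn.execute("INSERT INTO part VALUES (20.0, 3.0)")

# The function is evaluated inside the query, per row
total = conn.execute("SELECT cost + shipping(weight) FROM part").fetchone()[0]
```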
72. String Processing — PL/pgSQL
CREATE FUNCTION spread(text)
RETURNS text
AS $$
DECLARE
str text;
ret text;
i integer;
len integer;
BEGIN
str := upper($1);
ret := ''; -- start with zero length
i := 1;
len := length(str);
WHILE i <= len LOOP
ret := ret || substr(str, i, 1) || ' ';
i := i + 1;
END LOOP;
RETURN ret;
END;
$$
LANGUAGE 'plpgsql';
SELECT spread('Major Financial Report');
spread
----------------------------------------------
M A J O R F I N A N C I A L R E P O R T
(1 row)
74. State Name Lookup
SQL Language Function
SELECT name
FROM statename
WHERE code = 'AL';
CREATE FUNCTION getstatename(text)
RETURNS text
AS 'SELECT name
FROM statename
WHERE code = $1;'
LANGUAGE 'sql';
SELECT getstatename('AL');
75. State Name Lookup From String
PL/pgSQL Language Function
CREATE FUNCTION getstatecode(text)
RETURNS text
AS $$
DECLARE
state_str statename.name%TYPE;
statename_rec record;
i integer;
len integer;
matches record;
search_str text;
BEGIN
state_str := initcap($1); -- capitalization match column
len := length(trim($1));
i := 2;
SELECT INTO statename_rec * -- first try for an exact match
FROM statename
WHERE name = state_str;
IF FOUND
THEN RETURN statename_rec.code;
END IF;
WHILE i <= len LOOP -- test 2,4,6,... chars for match
search_str := trim(substr(state_str, 1, i)) || '%';
SELECT INTO matches COUNT(*)
FROM statename
WHERE name LIKE search_str;
IF matches.count = 0 -- no matches, failure
THEN RETURN NULL;
END IF;
IF matches.count = 1 -- exactly one match, return it
THEN
SELECT INTO statename_rec *
FROM statename
WHERE name LIKE search_str;
IF FOUND
THEN RETURN statename_rec.code;
END IF;
END IF;
i := i + 2; -- >1 match, try 2 more chars
END LOOP;
RETURN '';
END;
$$
LANGUAGE 'plpgsql';
SELECT getstatecode('Alabama');
SELECT getstatecode('ALAB');
78. State Name Maintenance
CREATE FUNCTION change_statename(char(2), char(30))
RETURNS boolean
AS $$
DECLARE
state_code ALIAS FOR $1;
state_name ALIAS FOR $2;
statename_rec RECORD;
BEGIN
IF length(state_code) = 0 -- no state code, failure
THEN RETURN 'f';
ELSE
IF length(state_name) != 0 -- is INSERT or UPDATE?
THEN
SELECT INTO statename_rec *
FROM statename
WHERE code = state_code;
IF NOT FOUND -- is state not in table?
THEN INSERT INTO statename
VALUES (state_code, state_name);
ELSE UPDATE statename
SET name = state_name
WHERE code = state_code;
END IF;
RETURN 't';
ELSE -- is DELETE
SELECT INTO statename_rec *
FROM statename
WHERE code = state_code;
IF FOUND
THEN DELETE FROM statename
WHERE code = state_code;
RETURN 't';
ELSE RETURN 'f';
END IF;
END IF;
END IF;
END;
$$
LANGUAGE 'plpgsql';
SELECT change_statename('AL','Alabama');
SELECT change_statename('AL','Bermuda');
SELECT change_statename('AL','');
SELECT change_statename('AL',''); -- row was already deleted
80. SELECT Inside FROM
SELECT *
FROM (SELECT * FROM tab) AS tab;
SELECT *
FROM ( SELECT 1,2,3,4,5 UNION
SELECT 6,7,8,9,10 UNION
SELECT 11,12,13,14,15) AS tab15;
col| col| col| col| col
---+----+----+----+----
1 | 2 | 3 | 4 | 5
6 | 7 | 8 | 9 | 10
11 | 12 | 13 | 14 | 15
(3 rows)
81. Function Returning
Multiple Values
CREATE TABLE int5(x1 INTEGER, x2 INTEGER, x3 INTEGER, x4 INTEGER, x5 INTEGER);
CREATE FUNCTION func5() RETURNS SETOF int5 AS
'SELECT 1,2,3,4,5;'
LANGUAGE SQL;
SELECT * FROM func5();
x1 | x2 | x3 | x4 | x5
----+----+----+----+----
1 | 2 | 3 | 4 | 5
(1 row)
82. Function Returning
a Table Result
CREATE OR REPLACE FUNCTION func15() RETURNS SETOF int5 AS
' SELECT 1,2,3,4,5 UNION
SELECT 6,7,8,9,10 UNION
SELECT 11,12,13,14,15;'
LANGUAGE SQL;
SELECT * FROM func15();
x1 | x2 | x3 | x4 | x5
----+----+----+----+----
1 | 2 | 3 | 4 | 5
6 | 7 | 8 | 9 | 10
11 | 12 | 13 | 14 | 15
(3 rows)
83. Automatic Function Calls
Trigger
BEFORE/AFTER ROW
INSERT/UPDATE/DELETE
OLD/NEW
84. Trigger on Statename
CREATE FUNCTION trigger_insert_update_statename()
RETURNS trigger
AS $$
BEGIN
IF new.code !~ '^[A-Za-z][A-Za-z]$'
THEN RAISE EXCEPTION 'State code must be two alphabetic characters.';
END IF;
IF new.name !~ '^[A-Za-z ]*$'
THEN RAISE EXCEPTION 'State name must contain only alphabetic characters.';
END IF;
IF length(trim(new.name)) < 3
THEN RAISE EXCEPTION 'State name must be longer than two characters.';
END IF;
new.code = upper(new.code); -- uppercase statename.code
new.name = initcap(new.name); -- capitalize statename.name
RETURN new;
END;
$$
LANGUAGE 'plpgsql';
86. Install Trigger
On Statename
CREATE TRIGGER trigger_statename
BEFORE INSERT OR UPDATE
ON statename
FOR EACH ROW
EXECUTE PROCEDURE trigger_insert_update_statename();
INSERT INTO statename VALUES ('a', 'alabama');
INSERT INTO statename VALUES ('al', 'alabama2');
INSERT INTO statename VALUES ('al', 'al');
INSERT INTO statename VALUES ('al', 'alabama');
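A reduced sketch of the same idea (not from the deck): SQLite triggers cannot rewrite NEW the way the PL/pgSQL function above does, but they can still reject malformed rows before they land, here using GLOB in place of a regular expression:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE statename (code CHAR(2) PRIMARY KEY, name VARCHAR(30));

    CREATE TRIGGER trigger_statename
    BEFORE INSERT ON statename
    FOR EACH ROW
    WHEN NOT (NEW.code GLOB '[A-Za-z][A-Za-z]')
    BEGIN
        SELECT RAISE(ABORT, 'State code must be two alphabetic characters.');
    END;
""")

# One-letter code: the trigger aborts the INSERT
try:
    conn.execute("INSERT INTO statename VALUES ('a', 'Alabama')")
    bad_accepted = True
except sqlite3.DatabaseError:
    bad_accepted = False

# A well-formed code passes through untouched
conn.execute("INSERT INTO statename VALUES ('AL', 'Alabama')")
rows_in_table = conn.execute("SELECT COUNT(*) FROM statename").fetchone()[0]
```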
87. Function Languages
SQL
PL/pgSQL
PL/TCL
PL/Python
PL/Perl
PL/sh
C
88. Function Examples
/contrib/earthdistance
/contrib/fuzzystrmatch
/contrib/pgcrypto
89. 3. Customizing Database Features
Adding New
Data and Indexing
Features
90. Creation
CREATE FUNCTION in C
CREATE TYPE
CREATE OPERATOR
CREATE OPERATOR CLASS (index type)
91. Create New Data Type
With Operator and Index Support
Write input/output functions
Register input/output functions with CREATE FUNCTION
Register type with CREATE TYPE
Write comparison functions
Register comparison functions with CREATE FUNCTION
Register comparison functions with CREATE OPERATOR
Register operator class for indexes with CREATE OPERATOR CLASS
92. Create New Data Type
Examples
/contrib/chkpass
/contrib/isn
/contrib/cube
/contrib/ltree
/src/backend/utils/adt