PGConf.ASIA 2019 Bali - Foreign Data Wrappers - Etsuro Fujita & Tatsuro Yamada (Equnix Business Solutions)
PGConf.ASIA 2019 Bali - 9 September 2019
Speaker: Etsuro Fujita & Tatsuro Yamada
Room: ACID
Title: Foreign Data Wrappers: A Powerful Technology for Data Integration
This document provides an introduction to HeteroDB, Inc. and its chief architect, KaiGai Kohei. It discusses PG-Strom, an open source PostgreSQL extension developed by HeteroDB for high performance data processing using heterogeneous architectures like GPUs. PG-Strom uses techniques like SSD-to-GPU direct data transfer and a columnar data store to accelerate analytics and reporting workloads on terabyte-scale log data using GPUs and NVMe SSDs. Benchmark results show PG-Strom can process terabyte workloads at throughput nearing the hardware limit of the storage and network infrastructure.
PGConf.ASIA 2019 Bali - Tune Your Linux Box, Not Just PostgreSQL - Ibrar Ahmed (Equnix Business Solutions)
This document discusses tuning Linux and PostgreSQL for performance. It recommends:
- Tuning Linux kernel parameters like huge pages, swappiness, and overcommit memory. Huge pages can improve TLB performance.
- Tuning PostgreSQL parameters like shared_buffers, work_mem, and checkpoint_timeout. Shared_buffers stores the most frequently accessed data.
- Other tips include choosing proper hardware, OS, and database based on workload. Tuning queries and applications can also boost performance.
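The parameters mentioned above map to a handful of kernel and server settings. A minimal sketch of what such tuning might look like follows; all values are illustrative assumptions and must be sized for your actual hardware and workload:

```
# /etc/sysctl.conf -- kernel side (illustrative values)
vm.nr_hugepages = 4500          # enough 2 MiB huge pages to back shared_buffers
vm.swappiness = 1               # discourage swapping out database memory
vm.overcommit_memory = 2        # strict accounting; fail allocations instead of OOM-killing
vm.overcommit_ratio = 90

# postgresql.conf -- server side
shared_buffers = 8GB            # commonly sized around 25% of RAM
huge_pages = try                # use huge pages if available
work_mem = 64MB                 # per sort/hash operation, per backend -- be conservative
checkpoint_timeout = 15min
```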
KSCOPE 2013: Exadata Consolidation Success Story (Kristofferson A)
This document summarizes an Exadata consolidation success story. It describes how three Exadata clusters were consolidated to host 60 databases total. Tools and methodology used included gathering utilization metrics, creating a provisioning plan, implementing the plan, and auditing. The document describes some "war stories" including resolving a slow HR time entry system through SQL profiling, addressing a memory exhaustion issue from an OBIEE report, and using I/O resource management to prioritize critical processes when storage cells became saturated.
PGConf.ASIA 2019 Bali - Setup a High-Availability and Load Balancing PostgreS... (Equnix Business Solutions)
PGConf.ASIA 2019 Bali - 10 September 2019
Speaker: Bo Peng
Room: SQL
Title: Setup a High-Availability and Load Balancing PostgreSQL Cluster - New Features of Pgpool-II 4.1
The talk will cover most of the performance enhancements introduced to Scylla over the past 12 months. As throughput was already very good, we focused on Scylla's behaviour under all types of workloads and data models. Scylla improved its latency in all scenarios, with better handling of data models such as large partitions and time series, an improved I/O scheduler, and better behaviour during streaming and repair.
In this session Satoru Goto, Solutions Engineer at MariaDB, shows how the Pentaho connector for MariaDB ColumnStore can be used for both BI/reporting on MariaDB ColumnStore as well as loading data into MariaDB ColumnStore.
Whitepaper: Exadata Consolidation Success Story (Kristofferson A)
1. The document discusses database and server consolidation using Oracle Exadata and describes the challenges of managing highly consolidated environments to ensure quality of service.
2. It outlines a 4-step process for accurate provisioning and capacity planning using a tool called the Provisioning Worksheet: collecting database details, defining the target Exadata hardware capacity, creating a provisioning plan, and reviewing resource utilization.
3. The process relies on basic capacity planning to ensure workload requirements fit available capacity. Database CPU and storage requirements are gathered, a target Exadata configuration is set, databases are mapped to nodes in the plan, and final utilization is summarized to identify any capacity shortfalls.
Migrate your EOL MySQL servers to HA Compliant GR Cluster / InnoDB Cluster Wi... (Mydbops)
This talk focuses on the challenges and strategies to consider when planning a MySQL upgrade from end-of-life versions such as MySQL 5.5, 5.6, and even 5.7.
Always upgrade! There are hundreds of fixes between each PostgreSQL release, and a significant number of them are security fixes. Logical replication makes major upgrades possible with minimal downtime and manageable trade-offs.
This webinar covered:
- PostgreSQL releases
- Upgrade options
- What is Pglogical?
- Major upgrades
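Pglogical, mentioned in the agenda above, drives a major-version upgrade by replicating logically from the old server to a new one. A minimal sketch, assuming the `pglogical` extension is installed and preloaded on both servers (hostnames, database and node names here are hypothetical):

```sql
-- On the old (provider) server: create a node and publish all tables
SELECT pglogical.create_node(
    node_name := 'provider1',
    dsn := 'host=old-server dbname=appdb');
SELECT pglogical.replication_set_add_all_tables('default', ARRAY['public']);

-- On the new server, already running the newer major version:
SELECT pglogical.create_node(
    node_name := 'subscriber1',
    dsn := 'host=new-server dbname=appdb');
SELECT pglogical.create_subscription(
    subscription_name := 'upgrade_sub',
    provider_dsn := 'host=old-server dbname=appdb');
-- Once the subscriber has caught up, switch the application over.
```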
The document discusses monitoring input/output (IO) performance in Oracle Exadata systems. It covers write-back flash cache (WBFC), various methods for monitoring IO using Automatic Workload Repository (AWR) data and cell-level scripts, correlating IO to workload, and scaling monitoring using metric extensions and Business Intelligence Publisher (BIP). The presentation provides examples of visualizing IO performance trends over time using AWR and cell data and measuring the impact of initialization parameters on latency. It also addresses reference bands for disk IO capacity and visualizing storage-area workload activity by day per node.
Countdown to PostgreSQL v9.5 - Foreign Tables can be part of Inheritance Tree (Ashnikbiz)
Distributed databases and horizontal scale-out are among the key demands today. PostgreSQL already had some vertical scaling features and horizontal scale-up by adding disks and table partitioning/child tables. With the release of v9.5, PostgreSQL gets a basic foundation for native sharding capability. From v9.5, Foreign Tables can participate in an inheritance tree as a child or parent table, i.e. one can have table partitions residing on different systems.
In our countdown-to-v9.5 series of hangouts, we will be covering some of the great features of PostgreSQL v9.5 and their real-life applicability. In the first hangout in this series we will be talking about:
- The feature of foreign partitions/child tables
- Syntax and usage
- EXPLAIN plan demo
- Use cases and benefits
Join us for more and send us your queries on success@ashnik.com
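The foreign-partition feature described above boils down to attaching a `postgres_fdw` foreign table as a child of a local parent. A sketch of the v9.5 syntax, with server names, hosts, and credentials as illustrative assumptions:

```sql
-- Parent table on the local node
CREATE TABLE measurements (city_id int, logdate date, peaktemp int);

-- A remote shard attached as a child via postgres_fdw
CREATE EXTENSION postgres_fdw;
CREATE SERVER shard1 FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'shard1.example.com', dbname 'metrics');
CREATE USER MAPPING FOR CURRENT_USER SERVER shard1
    OPTIONS (user 'app', password 'secret');
CREATE FOREIGN TABLE measurements_2015 ()
    INHERITS (measurements)
    SERVER shard1 OPTIONS (table_name 'measurements_2015');

-- Queries on the parent now scan local and remote children alike
EXPLAIN SELECT * FROM measurements WHERE logdate >= '2015-01-01';
```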
The document discusses PostgreSQL's write-ahead log (WAL), which records database changes before writing them to disk for crash safety. The WAL allows for features like online backups by archiving WAL records, point-in-time recovery by restoring from backups and replaying WAL, and replication by transmitting WAL to standby servers. It works by writing each change as a WAL record before updating data pages, and replaying the log during recovery to reconstruct unfinished transactions after a crash.
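The WAL-based features described above (online backup, PITR, replication) are all enabled through a few settings. A minimal configuration sketch; archive paths and the recovery target are illustrative assumptions:

```
# postgresql.conf -- enable WAL archiving for online backups and PITR
wal_level = replica
archive_mode = on
archive_command = 'cp %p /mnt/wal_archive/%f'   # use a durable copy command in practice

# Take a base backup, then WAL replay can recover to any later point:
#   pg_basebackup -D /backups/base -X stream

# Recovery settings for point-in-time recovery
restore_command = 'cp /mnt/wal_archive/%f %p'
recovery_target_time = '2019-09-09 12:00:00'
```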
Patroni-based Citus High-Availability Environment Deployment (hyeongchae lee)
The document discusses deploying a Citus high availability environment using Patroni. It begins with an introduction and agenda. It then covers service discovery with Consul, dynamic configuration using ConfD and Consul templates, high availability with Patroni, and distributed PostgreSQL with Citus. Key points include that Patroni allows customized PostgreSQL high availability solutions, Citus enables scaling out PostgreSQL across nodes, and the demo would show integrating these for a production-ready scalable and highly available database cluster.
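The pieces named in that summary (Consul for discovery, Patroni for HA, Citus for scale-out) meet in Patroni's configuration file. A minimal sketch; hostnames, paths, and the cluster name are illustrative assumptions:

```yaml
# patroni.yml -- minimal sketch of one node
scope: citus-cluster
name: node1

consul:
  host: 127.0.0.1:8500          # service discovery / DCS via Consul, as in the talk

postgresql:
  listen: 0.0.0.0:5432
  connect_address: node1.example.com:5432
  data_dir: /var/lib/postgresql/data
  parameters:
    shared_preload_libraries: citus   # load the Citus extension on every node

bootstrap:
  dcs:
    ttl: 30
    loop_wait: 10
```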
This document discusses using flash storage for HBase deployments. It begins by explaining the basics of NAND flash memory. It then analyzes the performance of HBase on flash versus DRAM, finding that flash can support the larger working sets now common in HBase clusters. The document details several flash-optimized features for HBase, including short-circuit reads, the BucketCache, and minimizing write amplification. It concludes by considering opportunities to further optimize HBase for flash, such as reducing write amplification and making HDFS aware of different storage technologies.
The document discusses in-memory data grids and Ampool. It describes that in-memory data grids like Ampool are sophisticated in-memory data stores that provide low latency reads and writes through data partitioning and replication across a scalable cluster. The document then provides details on Ampool's architecture based on Apache Geode, how it compares favorably to other in-memory solutions in providing both low-latency and analytics capabilities, and demonstrates its performance through examples.
A look at what HA is and what PostgreSQL has to offer for building an open source HA solution. Covers various aspects in terms of Recovery Point Objective and Recovery Time Objective. Includes backup and restore, PITR (point in time recovery) and streaming replication concepts.
PostgreSQL Replication High Availability Methods (Mydbops)
These slides illustrate the need for replication in PostgreSQL: why you need a replication DB topology, terminologies, replication nodes and much more.
MariaDB's Andrew Hutchings and Shane Johnson walk through new features of the MariaDB ColumnStore storage engine, tools and adapters, then provide a sneak peek at what's planned for the next release.
Schema Replication Using Oracle GoldenGate 12c (uzzal basak)
This document provides instructions for configuring asynchronous schema replication between an Oracle source database and target database using Oracle GoldenGate 12c. It outlines the necessary steps which include:
1. Enabling supplemental logging and archivelog mode on both databases.
2. Installing the GoldenGate software and starting the Manager processes on both systems.
3. Configuring the Extract, Data Pump, and Replicate processes to replicate the BASAK schema and tables from the source PDBORCL to the target PRIPDB database.
4. Starting the Extract, Data Pump, and Replicate jobs to begin the replication process and ensure the BASAK schema and tables are synchronized between the source and target databases.
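The Extract / Data Pump / Replicat setup in steps 3-4 is driven from the GGSCI console. A rough sketch of the kind of commands involved, assuming an integrated Extract against the PDBORCL container; trail paths and process names are illustrative:

```
-- GGSCI on the source system
ADD EXTRACT ext1, INTEGRATED TRANLOG, BEGIN NOW
REGISTER EXTRACT ext1 DATABASE CONTAINER (PDBORCL)
ADD EXTTRAIL ./dirdat/lt, EXTRACT ext1
ADD EXTRACT pmp1, EXTTRAILSOURCE ./dirdat/lt     -- the data pump
ADD RMTTRAIL ./dirdat/rt, EXTRACT pmp1
START EXTRACT ext1
START EXTRACT pmp1

-- GGSCI on the target system
ADD REPLICAT rep1, INTEGRATED, EXTTRAIL ./dirdat/rt
START REPLICAT rep1
```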
M|18 Battle of the Online Schema Change Methods (MariaDB plc)
This document provides an overview and comparison of different methods for performing online schema changes in databases. It discusses native online DDL capabilities in MySQL/MariaDB and TokuDB, as well as alternative methods like rolling schema updates, downtime windows, and the pt-online-schema-change tool. The document outlines features, limitations, and special cases to consider for different workloads and replication scenarios.
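For the pt-online-schema-change option mentioned above, a typical invocation looks like the following sketch; database and table names are illustrative:

```
# Online ALTER with pt-online-schema-change (Percona Toolkit)
pt-online-schema-change \
  --alter "ADD COLUMN created_at DATETIME" \
  --chunk-size 1000 \
  --max-lag 1 \
  --dry-run \
  D=appdb,t=orders
# Replace --dry-run with --execute once the plan looks right.
# The tool copies rows into a shadow table in chunks, throttling on
# replication lag, then swaps the tables atomically.
```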
This document discusses using Apache Kafka as a data hub to capture changes from various data sources using change data capture (CDC). It outlines several common CDC patterns like using modification dates, database triggers, or log files to identify changes. It then discusses using Kafka Connect to integrate various data sources like MongoDB, PostgreSQL and replicate changes. The document provides examples of open source CDC connectors and concludes with suggestions for getting involved in the Apache Kafka community.
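Log-based CDC into Kafka, as described above, is usually registered through Kafka Connect as a JSON connector configuration. A sketch using Debezium's PostgreSQL connector as one example of an open source CDC connector; hostnames, credentials, and table names are illustrative, and exact property names vary by connector version:

```json
{
  "name": "appdb-cdc",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "pg.example.com",
    "database.port": "5432",
    "database.user": "cdc",
    "database.password": "secret",
    "database.dbname": "appdb",
    "database.server.name": "appdb",
    "table.include.list": "public.orders"
  }
}
```

Posting this document to the Kafka Connect REST endpoint starts streaming each committed change on `public.orders` into a Kafka topic.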
In this presentation we discuss the New Features of MariaDB 10.4. First we give a short overview of the MariaDB Branches and Forks. Then we talk about the announced IPO. Technically we cover topics like Authentication, Accounts, InnoDB, Optimizer improvements, Application-Time Period Tables the new Backup Stage Galera 4 and other changes...
MariaDB 10.4 became Generally Available (GA = ready for production) this summer, so it is time to look at the new features in MariaDB 10.4. After a short intro about its history, we look at the reasons for MariaDB's broad usage nowadays. The most important improvements were in user authentication, InnoDB, and the optimizer. A completely new feature is Application-Time Period Tables. Backup got new locking behaviour, so LVM snapshots are possible and officially supported now. And last but not least, MariaDB 10.4 comes with Galera 4.
New Features
● Developer and SQL Features
● DBA and Administration
● Replication
● Performance
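Of the features listed, Application-Time Period Tables are the most visible at the SQL level. A sketch of the MariaDB 10.4 syntax; table and period names are illustrative:

```sql
-- A table whose rows are valid over an application-defined period
CREATE TABLE coupons (
    id INT,
    name VARCHAR(255),
    date_start DATE NOT NULL,
    date_end DATE NOT NULL,
    PERIOD FOR valid_period(date_start, date_end)
);

-- DELETE (and UPDATE) can target just a portion of a period;
-- rows straddling the boundary are split automatically
DELETE FROM coupons
FOR PORTION OF valid_period
    FROM '2019-01-01' TO '2019-06-30';
```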
By Amit Kapila at India PostgreSQL UserGroup Meetup, Bangalore at InMobi.
http://technology.inmobi.com/events/india-postgresql-usergroup-meetup-bangalore
The document discusses Pentaho Data Integration (Kettle), an open-source ETL tool. It describes Kettle's components like Spoon for designing transformations and jobs, Pan and Kitchen for executing them. It covers extracting, transforming and loading data, transformation steps, job entries, and how Kettle is used to integrate data from various sources into data warehouses.
Aurora Serverless, the Dawn of the Serverless RDB - Track 2, Community Day 2018 re:Invent Special (AWSKRUG - AWS Korea User Group)
This document summarizes a presentation about new features of Amazon Aurora including Aurora Parallel Query Processing, Aurora Multi-Master, and the new Aurora Serverless offering. Aurora Parallel Query Processing allows queries to be parallelized across thousands of storage nodes. Aurora Multi-Master enables multiple read-write instances for high availability. Aurora Serverless automatically scales databases on demand without capacity planning, with users paying per second of use.
PGConf APAC 2018 - Managing replication clusters with repmgr, Barman and PgBo... (PGConf APAC)
Speaker: Ian Barwick
PostgreSQL and reliability go hand-in-hand - but your data is only truly safe with a solid and trusted backup system in place, and no matter how good your application is, it's useless if it can't talk to your database.
In this talk we'll demonstrate how to set up a reliable replication
cluster using open source tools closely associated with the PostgreSQL project. The talk will cover following areas:
- how to set up and manage a replication cluster with `repmgr`
- how to set up and manage reliable backups with `Barman`
- how to manage failover and application connections with `repmgr` and `PgBouncer`
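The three bullet points above correspond roughly to the following command-line sketch; config paths, hostnames, and the Barman server name are illustrative assumptions:

```
# repmgr: register the primary, then clone and register a standby
repmgr -f /etc/repmgr.conf primary register
repmgr -h node1 -U repmgr -d repmgr -f /etc/repmgr.conf standby clone
repmgr -f /etc/repmgr.conf standby register
repmgr -f /etc/repmgr.conf cluster show      # verify cluster state

# Barman: take and list backups of the server defined in barman.conf
barman backup pg-main
barman list-backup pg-main

# PgBouncer: pause a pool during failover, then reload with the new primary
psql -p 6432 -U pgbouncer pgbouncer -c "PAUSE appdb"
psql -p 6432 -U pgbouncer pgbouncer -c "RELOAD"
```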
Ian Barwick has worked for 2ndQuadrant since 2014, and as well as making various contributions to PostgreSQL itself, is lead `repmgr` developer. He lives in Tokyo, Japan.
This document summarizes new features in Oracle Database 12c Release 2. It outlines features for developers, administrators, SQL*Plus, conversion functions, and more. Key points include increased identifier length, new SQL*Plus features like history and prefetch settings, conversion functions, multi-tenant container database improvements, and performance enhancements like adaptive statistics and optimization.
Optimizing E-Business Suite Storage Using Oracle Advanced Compression (Andrejs Karpovs)
The document provides information about optimizing storage for an Oracle E-Business Suite database using Oracle Advanced Compression. It begins with introductions and an overview of topics to be covered, including compression in Oracle, implementing advanced compression with E-Business Suite, and recommendations and results. The document then discusses identifying tables for compression, preparing by applying patches, implementing compression in phases, and benchmarking performance. It notes compression can reduce storage requirements but may increase CPU usage and cause some SQL plan changes. Compression requires careful testing and monitoring of performance impacts.
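The phased implementation described above centres on a small set of DDL statements. A sketch of enabling Advanced Row Compression on a hypothetical table (Oracle 12c syntax; object names are illustrative):

```sql
-- Rebuild the table with Advanced Row Compression
ALTER TABLE app.big_history MOVE ROW STORE COMPRESS ADVANCED;

-- A MOVE invalidates indexes, so rebuild them afterwards
ALTER INDEX app.big_history_idx REBUILD ONLINE;

-- Verify the compression attributes
SELECT table_name, compression, compress_for
  FROM dba_tables
 WHERE table_name = 'BIG_HISTORY';
```

As the summary notes, the space savings come at the cost of extra CPU and possible plan changes, so each phase should be benchmarked before moving on.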
We are using Elasticsearch to power the search feature of our public frontend, serving 10k queries per hour across 8 markets in SEA.
Here we are sharing our experiences of running Elasticsearch on Kubernetes, presenting our general setup, configuration tweaks and possible pitfalls.
What to expect from MariaDB Platform X5, part 2 (MariaDB plc)
This document summarizes new features and enhancements in MariaDB MaxScale 2.5 and MariaDB ColumnStore 1.5. Some key points include:
- MaxScale 2.5 includes a new graphical user interface, improved binlog router, capability to stream binlogs to Kafka as JSON, and distributed caching between MaxScale servers.
- ColumnStore 1.5 features a new API, PowerBI direct query connector, improved replication from InnoDB, and multinode support in SkySQL.
- Configuration and installation of ColumnStore has been simplified, including using a new ColumnStore.xml utility and S3 storage manager for redundant file storage in object storage.
Rapid Upgrades With Pg_Upgrade, Bruce Momjian (Fuenteovejuna)
Pg_Upgrade allows migration between major releases of Postgres without dumping and reloading data. It works by installing the new Postgres system tables while continuing to use the data files from the previous version. Pg_Upgrade freezes all rows in the new cluster, copies over transaction logs and IDs from the old cluster, restores the database schema, and finally copies over user data files. This process allows for much faster upgrades than traditional dump and restore methods.
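The process described above is driven by a single command. A sketch of a pg_upgrade run; version numbers and paths are illustrative for a Debian-style layout:

```
# Dry run first: verify cluster compatibility without changing anything
pg_upgrade \
  --old-bindir  /usr/lib/postgresql/11/bin \
  --new-bindir  /usr/lib/postgresql/12/bin \
  --old-datadir /var/lib/postgresql/11/main \
  --new-datadir /var/lib/postgresql/12/main \
  --check

# Then re-run without --check; --link hard-links data files instead of
# copying them, which is what makes the upgrade fast (but makes the old
# cluster unusable afterwards)
```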
M|18 Creating a Reference Architecture for High Availability at Nokia (MariaDB plc)
This document proposes a reference architecture for providing high availability across multiple data centers using MariaDB and related open source tools. It summarizes:
- The need for a geo-redundant highly available database architecture at Nokia to support multiple product units.
- An evaluation of alternatives including Galera clusters and master-master replication between data centers.
- A proposed architecture using MaxScale for local master-slave replication within each data center and cross-data center replication between masters for redundancy.
- Testing and development of MaxScale plugins and scripts to support automatic failover and recovery after failures within or between data centers.
- Plans for containerized deployment of the database clusters and MaxScale using Kubernetes with additional
Exceeding the Limits of Air Cooling to Unlock Greater Potential in HPC (inside-BigData.com)
In this deck from the Perth HPC Conference, Werner Scholz from XENON Systems presents: Exceeding the Limits of Air Cooling to Unlock Greater Potential in HPC.
"A decade ago, 100 watts per CPU was devastating to thermal design. Today, Intel’s highest performing CPUs (e.g. Intel Cascade Lake-AP 9282 processor) have a thermal design envelope of 400 watts. There really is no end in sight, and accommodating more power is critical to advancing performance. The ability to dissipate the resulting heat is the hard ceiling that systems face in terms of performance – giving greater importance to liquid cooling breakthroughs. With liquid cooling, less energy is expended to cool systems – a significant savings in HPC deployments with arrays of servers drawing energy and generating heat. Electrical current drives the CPU and enables it to function. This electrical power is converted into thermal energy (heat). To maintain a stable temperature, the CPU needs to be cooled by efficiently removing this heat and releasing it. Liquid cooling is the best way to cool a system because liquid transfers heat much more efficiently than air. From an environmental perspective, liquid cooling reduces both those characteristics to create a smarter and more ecological approach on a grand scale. The cascade of value continues, as ambient heat removed from systems can then be used to heat buildings and augment or replace traditional heating systems. It’s an intelligent approach to thermal management, distributing the economic value of reduced energy use and transforming heat into an enterprise asset."
Watch the video: https://wp.me/p3RLHQ-kZa
Learn more: https://www.xenon.com.au/
and
http://hpcadvisorycouncil.com/events/2019/australia-conference/agenda.php
Sign up for our insideHPC Newsletter: http://insidehpc.com/newsletter
Equnix Business Solutions (Equnix) is an IT Solution provider in Indonesia, providing comprehensive solution services especially on the infrastructure side for corporate business needs based on research and Open Source. Equnix has 3 (three) main services known as the Trilogy of Services: Support (Maintenance/Managed), World class level of Software Development, and Expert Consulting and Assessment for High Performance Transactions System. Equnix is customer oriented, not product or principal. Equal opportunity based on merit is our credo in managing HR development.
Data Leaks: Hacker Action or Crime? How Do We Anticipate... (Equnix Business Solutions)
[EWTT2022] Database Implementation Strategy in a Microservice Architecture (Equnix Business Solutions)
Equnix Business Solutions (Equnix) is an IT Solution provider in Indonesia, providing comprehensive solution services especially on the infrastructure side for corporate business needs based on research and Open Source. Equnix has 3 (three) main services known as the Trilogy of Services: Support (Maintenance/Managed), World class level of Software Development, and Expert Consulting and Assessment for High Performance Transactions System. Equnix is customer oriented, not product or principal. Equal opportunity based on merit is our credo in managing HR development.
Equnix Business Solutions (Equnix) is an IT Solution provider in Indonesia, providing comprehensive solution services especially on the infrastructure side for corporate business needs based on research and Open Source. Equnix has 3 (three) main services known as the Trilogy of Services: Support (Maintenance/Managed), World class level of Software Development, and Expert Consulting and Assessment for High Performance Transactions System. Equnix is customer oriented, not product or principal. Equal opportunity based on merit is our credo in managing HR development.
Equnix Business Solutions (Equnix) is an IT Solution provider in Indonesia, providing comprehensive solution services especially on the infrastructure side for corporate business needs based on research and Open Source. Equnix has 3 (three) main services known as the Trilogy of Services: Support (Maintenance/Managed), World class level of Software Development, and Expert Consulting and Assessment for High Performance Transactions System. Equnix is customer oriented, not product or principal. Equal opportunity based on merit is our credo in managing HR development.
Equnix Business Solutions (Equnix) is an IT Solution provider in Indonesia, providing comprehensive solution services especially on the infrastructure side for corporate business needs based on research and Open Source. Equnix has 3 (three) main services known as the Trilogy of Services: Support (Maintenance/Managed), World class level of Software Development, and Expert Consulting and Assessment for High Performance Transactions System. Equnix is customer oriented, not product or principal. Equal opportunity based on merit is our credo in managing HR development.
Equnix Business Solutions (Equnix) is an IT Solution provider in Indonesia, providing comprehensive solution services especially on the infrastructure side for corporate business needs based on research and Open Source. Equnix has 3 (three) main services known as the Trilogy of Services: Support (Maintenance/Managed), World class level of Software Development, and Expert Consulting and Assessment for High Performance Transactions System. Equnix is customer oriented, not product or principal. Equal opportunity based on merit is our credo in managing HR development.
Equnix Business Solutions (Equnix) is an IT Solution provider in Indonesia, providing comprehensive solution services especially on the infrastructure side for corporate business needs based on research and Open Source. Equnix has 3 (three) main services known as the Trilogy of Services: Support (Maintenance/Managed), World class level of Software Development, and Expert Consulting and Assessment for High Performance Transactions System. Equnix is customer oriented, not product or principal. Equal opportunity based on merit is our credo in managing HR development.
Equnix Appliance- Jawaban terbaik untuk kebutuhan komputasi yang mumpuni.pdfEqunix Business Solutions
Equnix Business Solutions (Equnix) is an IT Solution provider in Indonesia, providing comprehensive solution services especially on the infrastructure side for corporate business needs based on research and Open Source. Equnix has 3 (three) main services known as the Trilogy of Services: Support (Maintenance/Managed), World class level of Software Development, and Expert Consulting and Assessment for High Performance Transactions System. Equnix is customer oriented, not product or principal. Equal opportunity based on merit is our credo in managing HR development.
Equnix Business Solutions (Equnix) is an IT Solution provider in Indonesia, providing comprehensive solution services especially on the infrastructure side for corporate business needs based on research and Open Source. Equnix has 3 (three) main services known as the Trilogy of Services: Support (Maintenance/Managed), World class level of Software Development, and Expert Consulting and Assessment for High Performance Transactions System. Equnix is customer oriented, not product or principal. Equal opportunity based on merit is our credo in managing HR development.
Equnix Business Solutions (Equnix) is an IT Solution provider in Indonesia, providing comprehensive solution services especially on the infrastructure side for corporate business needs based on research and Open Source. Equnix has 3 (three) main services known as the Trilogy of Services: Support (Maintenance/Managed), World class level of Software Development, and Expert Consulting and Assessment for High Performance Transactions System. Equnix is customer oriented, not product or principal. Equal opportunity based on merit is our credo in managing HR development.
PGConf.ASIA 2019 - PGSpider High Performance Cluster Engine - Shigeo HiroseEqunix Business Solutions
PGSpider is a high-performance SQL cluster engine developed by Toshiba Corporation. It allows distributed querying of heterogeneous data sources using standard SQL. PGSpider improves retrieval performance through parallel queries across nodes and supports multi-tenant querying to retrieve records from the same table across nodes. It utilizes techniques like pushdown of conditional expressions and aggregation functions to nodes to reduce network traffic.
Alt. GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using ...James Anderson
Effective Application Security in Software Delivery lifecycle using Deployment Firewall and DBOM
The modern software delivery process (or the CI/CD process) includes many tools, distributed teams, open-source code, and cloud platforms. Constant focus on speed to release software to market, along with the traditional slow and manual security checks has caused gaps in continuous security as an important piece in the software supply chain. Today organizations feel more susceptible to external and internal cyber threats due to the vast attack surface in their applications supply chain and the lack of end-to-end governance and risk management.
The software team must secure its software delivery process to avoid vulnerability and security breaches. This needs to be achieved with existing tool chains and without extensive rework of the delivery processes. This talk will present strategies and techniques for providing visibility into the true risk of the existing vulnerabilities, preventing the introduction of security issues in the software, resolving vulnerabilities in production environments quickly, and capturing the deployment bill of materials (DBOM).
Speakers:
Bob Boule
Robert Boule is a technology enthusiast with PASSION for technology and making things work along with a knack for helping others understand how things work. He comes with around 20 years of solution engineering experience in application security, software continuous delivery, and SaaS platforms. He is known for his dynamic presentations in CI/CD and application security integrated in software delivery lifecycle.
Gopinath Rebala
Gopinath Rebala is the CTO of OpsMx, where he has overall responsibility for the machine learning and data processing architectures for Secure Software Delivery. Gopi also has a strong connection with our customers, leading design and architecture for strategic implementations. Gopi is a frequent speaker and well-known leader in continuous delivery and integrating security into software delivery.
Dr. Sean Tan, Head of Data Science, Changi Airport Group
Discover how Changi Airport Group (CAG) leverages graph technologies and generative AI to revolutionize their search capabilities. This session delves into the unique search needs of CAG’s diverse passengers and customers, showcasing how graph data structures enhance the accuracy and relevance of AI-generated search results, mitigating the risk of “hallucinations” and improving the overall customer journey.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Encryption in Microsoft 365 - ExpertsLive Netherlands 2024Albert Hoitingh
In this session I delve into the encryption technology used in Microsoft 365 and Microsoft Purview. Including the concepts of Customer Key and Double Key Encryption.
Communications Mining Series - Zero to Hero - Session 1DianaGray10
This session provides introduction to UiPath Communication Mining, importance and platform overview. You will acquire a good understand of the phases in Communication Mining as we go over the platform with you. Topics covered:
• Communication Mining Overview
• Why is it important?
• How can it help today’s business and the benefits
• Phases in Communication Mining
• Demo on Platform overview
• Q/A
Goodbye Windows 11: Make Way for Nitrux Linux 3.5.0!SOFTTECHHUB
As the digital landscape continually evolves, operating systems play a critical role in shaping user experiences and productivity. The launch of Nitrux Linux 3.5.0 marks a significant milestone, offering a robust alternative to traditional systems such as Windows 11. This article delves into the essence of Nitrux Linux 3.5.0, exploring its unique features, advantages, and how it stands as a compelling choice for both casual users and tech enthusiasts.
Securing your Kubernetes cluster_ a step-by-step guide to success !KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
20 Comprehensive Checklist of Designing and Developing a WebsitePixlogix Infotech
Dive into the world of Website Designing and Developing with Pixlogix! Looking to create a stunning online presence? Look no further! Our comprehensive checklist covers everything you need to know to craft a website that stands out. From user-friendly design to seamless functionality, we've got you covered. Don't miss out on this invaluable resource! Check out our checklist now at Pixlogix and start your journey towards a captivating online presence today.
UiPath Test Automation using UiPath Test Suite series, part 6DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 6. In this session, we will cover Test Automation with generative AI and Open AI.
UiPath Test Automation with generative AI and Open AI webinar offers an in-depth exploration of leveraging cutting-edge technologies for test automation within the UiPath platform. Attendees will delve into the integration of generative AI, a test automation solution, with Open AI advanced natural language processing capabilities.
Throughout the session, participants will discover how this synergy empowers testers to automate repetitive tasks, enhance testing accuracy, and expedite the software testing life cycle. Topics covered include the seamless integration process, practical use cases, and the benefits of harnessing AI-driven automation for UiPath testing initiatives. By attending this webinar, testers, and automation professionals can gain valuable insights into harnessing the power of AI to optimize their test automation workflows within the UiPath ecosystem, ultimately driving efficiency and quality in software development processes.
What will you get from this session?
1. Insights into integrating generative AI.
2. Understanding how this integration enhances test automation within the UiPath platform
3. Practical demonstrations
4. Exploration of real-world use cases illustrating the benefits of AI-driven test automation for UiPath
Topics covered:
What is generative AI
Test Automation with generative AI and Open AI.
UiPath integration with generative AI
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Threats to mobile devices are more prevalent and increasing in scope and complexity. Users of mobile devices desire to take full advantage of the features
available on those devices, but many of the features provide convenience and capability but sacrifice security. This best practices guide outlines steps the users can take to better protect personal devices and information.
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
Introducing Milvus Lite: Easy-to-Install, Easy-to-Use vector database for you...Zilliz
Join us to introduce Milvus Lite, a vector database that can run on notebooks and laptops, share the same API with Milvus, and integrate with every popular GenAI framework. This webinar is perfect for developers seeking easy-to-use, well-integrated vector databases for their GenAI apps.
2. https://www.2ndQuadrant.com
PostgreSQL Now
● Features
○ Matured for a very wide range of applications
● More production use cases
○ Critical business scenarios
● More chances for database maintenance
○ Version upgrades
○ Database design changes
■ Physical
■ Logical
● Business requirements (24/7, for example)
○ Less acceptance of long maintenance windows
3.
Database Migration and Design Change
● Column data type change
○ INT -> BIGINT
○ INT -> NUMERIC
○ VARCHAR -> TEXT
Business growth:
accommodate more records in a table
ALTER TABLE?
COPY?
CREATE TABLE … AS SELECT?
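The options above can be sketched as follows. This is a minimal illustration, not from the slides; the table and column names are hypothetical.

```sql
-- In-place type change: takes an ACCESS EXCLUSIVE lock and,
-- for INT -> BIGINT, rewrites the whole table while blocking
-- all reads and writes.
ALTER TABLE orders ALTER COLUMN order_id TYPE BIGINT;

-- Copy-based alternative: build a new table with the desired
-- definition, then bulk-load it. Also blocking while the copy
-- runs if the old table must stay consistent.
CREATE TABLE orders_new (LIKE orders INCLUDING ALL);
ALTER TABLE orders_new ALTER COLUMN order_id TYPE BIGINT;
INSERT INTO orders_new SELECT * FROM orders;
```

Either way the table is unavailable for writes during the operation, which is exactly the problem the following slides address.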
6.
General Issue
Most of these operations are blocking
They typically take days
They cannot simply be performed in a production environment
Many use cases require minimal downtime
(maintenance window), for example,
< 15 min.
8.
Partitioning an existing table
Usual process:
1. CREATE TABLE … PARTITION BY …
2. COPY from old table to new table
Typical application/data characteristics:
● Log type: can use the creation timestamp as the
partitioning key
● Data are not updated (though they may be deleted)
● Has another primary key, which is used as a foreign key
from other tables, but is not declared explicitly
9.
Partitioning Steps
1. Create a new table with the same column definitions
a. Creation timestamp as the partitioning key
b. Exclude the PKEY/unique constraint
2. Create partitions as needed
3. Copy rows from the original table
(Apps can continue to run during steps 1-3.)
4. Lock the original/new tables
5. Do the last copy
6. Rename the original/new tables
7. Unlock the original/new tables
(Steps 4-7 require a maintenance window; apps must stop.)
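The steps above can be sketched in SQL roughly as follows. This is an illustrative outline, not taken from the slides; the table names (`logs`, `logs_new`) and the monthly range are hypothetical.

```sql
-- Steps 1-3: can run while the application stays online.
CREATE TABLE logs_new (
    id         bigint,
    created_at timestamptz NOT NULL,
    payload    text
) PARTITION BY RANGE (created_at);   -- note: no PKEY/unique constraint

CREATE TABLE logs_new_2019_09 PARTITION OF logs_new
    FOR VALUES FROM ('2019-09-01') TO ('2019-10-01');

INSERT INTO logs_new SELECT * FROM logs;   -- bulk copy of existing rows

-- Steps 4-7: the short maintenance window.
BEGIN;
LOCK TABLE logs, logs_new IN ACCESS EXCLUSIVE MODE;
INSERT INTO logs_new                       -- last incremental copy
    SELECT * FROM logs
    WHERE created_at > (SELECT max(created_at) FROM logs_new);
ALTER TABLE logs     RENAME TO logs_old;
ALTER TABLE logs_new RENAME TO logs;
COMMIT;   -- locks released at commit
```

The incremental copy relies on the log-type characteristic from the previous slide: rows are only appended, never updated.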
15.
Maintain Unique Constraint
The partitioning key must be a part of the unique constraint
Maintaining uniqueness is the application’s responsibility
1. INSERT
○ Use a sequence/other default value
2. UPDATE
○ Use a RULE/TRIGGER to prohibit updates to the unique value
All of these depend upon application/data characteristics and/or
requirements.
A unique constraint in each partition may reduce the risk.
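A trigger prohibiting updates to the unique value could look like this. This is one possible sketch, not from the slides; the table (`logs`) and column (`id`) names are hypothetical.

```sql
-- Reject any UPDATE that tries to change the application-level
-- unique column.
CREATE FUNCTION forbid_id_update() RETURNS trigger AS $$
BEGIN
    IF NEW.id <> OLD.id THEN
        RAISE EXCEPTION 'updates to id are not allowed';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER logs_forbid_id_update
    BEFORE UPDATE ON logs
    FOR EACH ROW EXECUTE FUNCTION forbid_id_update();
```

On versions before PostgreSQL 11, the last line would use `EXECUTE PROCEDURE` instead of `EXECUTE FUNCTION`.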
21.
Migrating too many tables
● Some databases have a huge number of tables
> 500,000
● Migration by pg_dump is not practical
○ Takes too long
● Upgrade from a very old version
< even 9.0
22.
Why FDW is good
● No specific feature/extension is needed on the source
● Only libpq is needed
● Only the postgres_fdw extension is needed at the target
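The target-side setup is minimal. A sketch under assumed names (the server name, host, database, and credentials below are hypothetical):

```sql
-- On the target server only; nothing is installed on the source.
CREATE EXTENSION postgres_fdw;

-- Point at the source database via an ordinary libpq connection.
CREATE SERVER src_server
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'old-db.example.com', port '5432', dbname 'appdb');

CREATE USER MAPPING FOR CURRENT_USER
    SERVER src_server
    OPTIONS (user 'migrator', password 'secret');
```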
25.
Table migration (1)
● We can import the whole remote schema
● No need to create each local table by hand
This creates foreign tables only: the actual
data is still at the remote server.
To import more tables in a single statement,
the max_locks_per_transaction GUC setting
needs to be raised.
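With the foreign server in place, the schema import is a single statement. This assumes a postgres_fdw server named `src_server` and a remote schema named `public`; the local schema name `remote` is hypothetical.

```sql
-- Raise max_locks_per_transaction in postgresql.conf first (it
-- cannot be changed at run time) when importing many tables.
CREATE SCHEMA remote;
IMPORT FOREIGN SCHEMA public
    FROM SERVER src_server INTO remote;
-- Result: one foreign table per remote table, data still remote.
```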
26.
Table Migration (2)
● Need to issue CREATE TABLE … AS SELECT to import the data to
the local server
To do this in a single transaction for speed, the
max_locks_per_transaction GUC setting needs to be raised.
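The data pull itself is one statement per table. This sketch assumes foreign tables were imported into a local schema named `remote`; the table name is hypothetical.

```sql
-- Materialize the data locally from the foreign table.
CREATE TABLE public.accounts AS
    SELECT * FROM remote.accounts;
-- Repeat per table; wrapping many of these in a single
-- transaction may need a larger max_locks_per_transaction.
```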
31.
Result
● pg_dump: dump tables from PostgreSQL 9.3: 1:51
● pg_dump: load tables into PostgreSQL 11.4: 7:04
● FDW: import the table schema: 1’18”
● FDW: import all the data: 2:29
We also need to consider the time needed to copy the
pg_dump output from the remote host to the local host. In this
case, the pg_dump output was about 11 GB.
32.
Remarks/Summary
● There are no general steps to avoid long downtime in
database migration
● Need to observe the characteristics of the application and
the data itself
● Find what is dynamic and what is static
○ Beware of DELETE and UPDATE
● Leverage the static characteristics of the data
● Beware of requirements/constraints
○ Find out whether you can ease the requirements
■ May need to work with the application team
■ Plan carefully: schedule and resources
○ May need to compromise
35.
Change enum data type
Enum type: customer_class
Business rule change:
Old values: class1, class2, class3
->
New values: priority0, priority1, normal0, normal1,
internal0, internal1
Sometimes both columns are needed for a while.
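Keeping both columns for a while can be done by adding a second column of the new enum type alongside the old one. A sketch; the table name and the old-to-new value mapping below are hypothetical, since the slides do not give them.

```sql
-- New enum type with the new business-rule values.
CREATE TYPE customer_class_new AS ENUM
    ('priority0', 'priority1', 'normal0', 'normal1',
     'internal0', 'internal1');

-- Add the new column next to the old one.
ALTER TABLE customers ADD COLUMN class_new customer_class_new;

-- Backfill using an assumed mapping from old to new values.
UPDATE customers SET class_new =
    CASE class::text
        WHEN 'class1' THEN 'priority0'
        WHEN 'class2' THEN 'normal0'
        ELSE 'internal0'
    END::customer_class_new;

-- Later, once the application has fully switched over:
--   ALTER TABLE customers DROP COLUMN class;
```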