The document discusses security best practices for IBM Informix including:
1) Enabling role separation to restrict access and privileges for database administrators, application administrators, and backup administrators.
2) Configuring file permissions and ownership for key Informix directories and files to restrict access.
3) Enabling encrypted connections using SSL or other encryption mechanisms to protect data in transit.
4) Configuring firewalls, virtual private networks, and the sqlhosts file to control which clients and users can connect to the database server.
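As an illustrative sketch of point 4: each entry in the Informix sqlhosts file names a server alias, a connection protocol, a host, and a service/port, so limiting which entries exist (and pairing them with firewall rules) limits which listeners clients can reach. The host and port below are placeholders, not values from the document:

```
# dbservername   nettype    hostname            servicename/port
ol_informix      onsoctcp   db1.example.com     9088
```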
PostgreSQL is designed to be easily extensible; extensions loaded into the database can function just like built-in features. In this session, we will learn more about the PostgreSQL extension framework, how extensions are built, some popular extensions, and how to manage them in your deployments.
Modern query optimisation features in MySQL 8, by Mydbops
MySQL 8 (a huge leap forward), indexing capabilities, execution plan enhancements, optimizer improvements, and many other current query tweak features are covered in the slides.
This document summarizes a presentation comparing PostgreSQL and MySQL databases. It outlines the strengths and weaknesses of each, including PostgreSQL's strong advanced features and flexible licensing but lack of integrated replication, and MySQL's replication capabilities but immature security and programming models. It also discusses common application types for each database and provides an overview of the EnterpriseDB company.
This document provides an overview of five steps to improve PostgreSQL performance: 1) hardware optimization, 2) operating system and filesystem tuning, 3) configuration of postgresql.conf parameters, 4) application design considerations, and 5) query tuning. The document discusses various techniques for each step such as selecting appropriate hardware components, spreading database files across multiple disks or arrays, adjusting memory and disk configuration parameters, designing schemas and queries efficiently, and leveraging caching strategies.
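Step 3 (tuning postgresql.conf) is often bootstrapped from simple ratios before benchmarking. As a minimal sketch, assuming the common rules of thumb of roughly 25% of RAM for shared_buffers and 75% for effective_cache_size (heuristics, not values taken from the deck):

```python
def pg_memory_settings(ram_gb):
    """Heuristic starting points for postgresql.conf memory parameters.

    The 25%/75% ratios are widely used rules of thumb, not fixed
    rules; always validate against your own workload.
    """
    return {
        "shared_buffers": f"{int(ram_gb * 0.25)}GB",
        "effective_cache_size": f"{int(ram_gb * 0.75)}GB",
    }

# A 64 GB server would start at 16GB / 48GB with these ratios.
settings = pg_memory_settings(64)
```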
Improve speed and performance of Informix 11.xx, part 2, by am_prasanna
A presentation deck that takes a deep dive into the configuration parameters of the IBM Informix database server, enhancing performance and thus boosting the speed at which one can work with Informix. The presentation is split into two parts, Part 1 and Part 2.
Understanding Oracle RAC 12c Internals as presented during Oracle Open World 2013 with Mark Scardina.
This is part two of the Oracle RAC 12c "reindeer series" used for OOW13 Oracle RAC-related presentations.
RMAN backup scripts should be improved in the following ways:
1. Log backups thoroughly and send failure alerts to ensure recoverability.
2. Avoid relying on a single backup and use redundancy to protect against data loss.
3. Back up control files last and do not delete archives until backups are complete.
4. Check backups regularly to ensure they meet recovery needs.
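Points 1 and 4 above (thorough logging, alerting, and regular checking) can be automated around the RMAN log output. A minimal sketch, assuming logs are written to files and that the alert hook is a placeholder for real mail/pager integration (RMAN and Oracle errors carry `RMAN-`/`ORA-` prefixed codes):

```python
import re

def check_rman_log(log_text):
    """Scan RMAN log output for RMAN-/ORA- error codes and report
    whether the backup run looks healthy."""
    errors = re.findall(r"^(?:RMAN|ORA)-\d+.*$", log_text, re.MULTILINE)
    return {"ok": not errors, "errors": errors}

log = "Starting backup at 01-JAN-24\nRMAN-06059: expected archived log not found\n"
result = check_rman_log(log)
if not result["ok"]:
    print("ALERT:", result["errors"])  # replace with a real alert channel
```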
PostgreSQL High-Performance Cheat Sheets contains quick methods to find performance issues.
A summary of the course, so that when problems arise you can easily uncover the performance bottlenecks.
MySQL Performance Tuning. Part 1: MySQL Configuration (includes MySQL 5.7), by Aurimas Mikalauskas
Is my MySQL server configured properly? Should I run Community MySQL, MariaDB, Percona or WebScaleSQL? How many innodb buffer pool instances should I run? Why should I NOT use the query cache? How do I size the innodb log file size and what IS that innodb log anyway? All answers are inside.
Aurimas Mikalauskas is a former Percona performance consultant and architect currently writing and teaching at speedemy.com. He's been involved with MySQL since 1999, scaling and optimizing MySQL backed systems since 2004 for companies such as BBC, EngineYard, famous social networks and small shops like EstanteVirtual, Pine Cove and hundreds of others.
Additional content mentioned in the presentation can be found here: http://speedemy.com/17
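One of the questions above, sizing the InnoDB log files, is commonly answered with the heuristic that the redo logs should absorb about an hour of writes, measured by sampling the `Innodb_os_log_written` status counter. A sketch of that arithmetic (the one-hour window and two-file group are assumptions, not values from the talk):

```python
def innodb_log_file_size(bytes_written_per_min, minutes=60, files_in_group=2):
    """Per-file redo log size under the 'hold ~1 hour of writes'
    heuristic: total capacity = writes over the window, split across
    the files in the log group."""
    total = bytes_written_per_min * minutes
    return total // files_in_group

# ~8 MB of redo per minute -> two log files of 240 MB each
size = innodb_log_file_size(8 * 1024 * 1024)
```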
We talk a lot about Galera Cluster being great for High Availability, but what about Disaster Recovery (DR)? Database outages can occur when you lose a data centre to a power outage or natural disaster, so why not plan for that in advance?
In this webinar, we will discuss the business considerations, including achieving the highest possible uptime and analysing business impact and risk, then focus on disaster recovery itself, walking through scenarios that range from having no offsite data to running synchronous replication to another data centre.
The webinar covers MySQL with Galera Cluster as well as its branches, MariaDB Galera Cluster and Percona XtraDB Cluster (PXC). We will focus on architecture solutions and DR scenarios, and have you on your way to success by the end.
This document discusses Patroni, an open-source tool for managing high availability PostgreSQL clusters. It describes how Patroni uses a distributed configuration system like Etcd or Zookeeper to provide automated failover for PostgreSQL databases. Key features of Patroni include manual and scheduled failover, synchronous replication, dynamic configuration updates, and integration with backup tools like WAL-E. The document also covers some of the challenges of building automatic failover systems and how Patroni addresses issues like choosing a new master node and reattaching failed nodes.
Fail-Safe Cluster for FirebirdSQL and something more, by Alexey Kovyazin
With Firebird HQbird it is possible to create a high-availability cluster or a warm standby solution. This presentation defines the problem and describes ways to create such solutions.
10 things an Oracle DBA should care about when moving to PostgreSQL, by PostgreSQL-Consulting
PostgreSQL can handle many of the same workloads as Oracle and provides alternatives to common Oracle features and practices. Some key differences for DBAs moving from Oracle to PostgreSQL include: using shared_buffers instead of the SGA, with a recommended 25-75% of RAM; using pgbouncer instead of a listener; performing backups with pg_basebackup and WAL archiving instead of RMAN; managing undo data in data files instead of undo segments; using streaming replication for high availability instead of RAC; and needing to tune autovacuum instead of manually managing redo and undo logs. PostgreSQL is very capable but may not be suited for some extremely high update workloads of 200K+ transactions per second on a single server.
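The feature correspondences listed above can be distilled into a quick-reference table. A sketch, where the right-hand side names the rough functional counterpart rather than a drop-in replacement:

```python
# Oracle concept -> approximate PostgreSQL counterpart,
# taken from the summary above.
ORACLE_TO_POSTGRES = {
    "SGA": "shared_buffers (commonly 25-75% of RAM)",
    "listener": "pgbouncer (external connection pooler)",
    "RMAN": "pg_basebackup + WAL archiving",
    "undo segments": "old row versions kept in data files (MVCC)",
    "RAC": "streaming replication (HA, not scale-out writes)",
}

for oracle_term, pg_term in ORACLE_TO_POSTGRES.items():
    print(f"{oracle_term:>15} -> {pg_term}")
```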
This document discusses PostgreSQL's VACUUM utility. It explains that VACUUM is needed to reclaim space from deleted and updated tuples, prevent transaction ID wraparound issues, and update statistics. The document covers various aspects that interact with VACUUM like commit logs, visibility maps, and free space maps. It also describes the tasks performed by VACUUM, options available, and tuning autovacuum. Finally, it provides a high-level overview of the internal workings of VACUUM.
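Tuning autovacuum, mentioned above, hinges on one formula from the PostgreSQL documentation: a table is autovacuumed once its dead tuples exceed `autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples`. A small sketch with the stock defaults:

```python
def autovacuum_trigger(reltuples, threshold=50, scale_factor=0.2):
    """Dead-tuple count that triggers autovacuum on a table, per the
    PostgreSQL docs formula (defaults shown; both knobs are tunable
    globally or per table)."""
    return threshold + scale_factor * reltuples

# With defaults, a 1M-row table accumulates ~200,050 dead tuples
# before autovacuum fires -- a common reason to lower the scale
# factor on large tables.
dead_tuples_needed = autovacuum_trigger(1_000_000)
```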
This document provides an overview of troubleshooting streaming replication in PostgreSQL. It begins with introductions to write-ahead logging and replication internals. Common troubleshooting tools are then described, including built-in views and functions as well as third-party tools. Finally, specific troubleshooting cases are discussed such as replication lag, WAL bloat, recovery conflicts, and high CPU recovery usage. Throughout, examples are provided of how to detect and diagnose issues using the various tools.
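Detecting the replication lag discussed above usually comes down to subtracting WAL positions (LSNs), which PostgreSQL prints as two hex halves like `16/B374D848`. A sketch of the arithmetic behind `pg_wal_lsn_diff` (the sample LSN values are illustrative):

```python
def lsn_to_bytes(lsn):
    """Convert a PostgreSQL WAL location like '16/B374D848' into an
    absolute byte position: high 32 bits / low 32 bits, both hex."""
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) + int(lo, 16)

def replication_lag_bytes(primary_lsn, standby_replay_lsn):
    """Byte lag between the primary's current WAL position and the
    standby's replay position (what pg_wal_lsn_diff computes)."""
    return lsn_to_bytes(primary_lsn) - lsn_to_bytes(standby_replay_lsn)

lag = replication_lag_bytes("0/3000060", "0/3000000")
```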
This document provides an overview and summary of various high availability (HA) solutions for MySQL databases. It begins with an introduction to HA and definitions of key terms. It then discusses MySQL replication, including asynchronous, semi-synchronous, and features in MySQL 5.6 and MariaDB 10.0. Other HA solutions covered include MHA for automated failover, Galera/MariaDB Galera Cluster for synchronous replication, shared disk solutions like DRBD, and MySQL Cluster for in-memory synchronous replication across nodes. The document provides brief descriptions of how each solution works and when it may be applicable.
There are many ways to run high availability with PostgreSQL. Here, we present a template for you to create your own customized, high-availability solution using Python and for maximum accessibility, a distributed configuration store like ZooKeeper or etcd.
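The core of such a template is a loop in which each node tries to take, or renew, a leader key in the configuration store, relying on its atomic writes and key TTLs. A minimal sketch, with an in-memory stand-in for etcd/ZooKeeper so it stays self-contained (the class and function names are illustrative, not from any real tool's API):

```python
import time

class FakeDCS:
    """Stand-in for etcd/ZooKeeper: one key with a TTL and an atomic
    take-if-free operation, which is all leader election needs."""
    def __init__(self):
        self.leader, self.expires = None, 0.0

    def acquire(self, name, ttl):
        now = time.monotonic()
        # Grant the key if it is free, expired, or already ours (renewal).
        if self.leader is None or now >= self.expires or self.leader == name:
            self.leader, self.expires = name, now + ttl
            return True
        return False

def ha_loop_once(dcs, me, ttl=30):
    """One iteration of the HA loop: hold the leader key -> run as
    primary; fail to get it -> run as a replica of the holder."""
    return "primary" if dcs.acquire(me, ttl) else "replica"

dcs = FakeDCS()
roles = ha_loop_once(dcs, "node1"), ha_loop_once(dcs, "node2")
```

If the primary stops renewing (crash, partition), the key expires and another node's next loop iteration wins the election, which is the essence of the failover mechanism.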
On July 6, 2021, MariaDB 10.6 became generally available (production ready). This presentation focuses on its most important aspects and their impact, covering improvements to InnoDB, adoption of the SYS schema, and deprecated variables and engines.
Understanding of the Linux kernel memory model, by SeongJae Park
SeongJae Park introduces himself and his work contributing to the Linux kernel memory model documentation. He developed a guaranteed contiguous memory allocator and maintains the Korean translation of the kernel's memory barrier documentation. The document discusses how the increasing prevalence of multi-core processors requires careful programming to ensure correct parallel execution given relaxed memory ordering. It notes that compilers and CPUs optimize for instruction throughput over programmer goals, and memory accesses can be reordered in ways that affect correctness on multi-processors. Understanding the memory model is important for writing high-performance parallel code.
Lightweight locks (LWLocks) in PostgreSQL provide mutually exclusive access to shared memory structures. They support both shared and exclusive locking modes. The LWLocks framework uses wait queues, semaphores, and spinlocks to efficiently manage acquiring and releasing locks. Dynamic monitoring of LWLock events is possible through special builds that incorporate statistics collection.
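The shared/exclusive semantics described above are those of a reader-writer lock: any number of shared holders may coexist, but an exclusive holder excludes everyone. A toy sketch in Python (real LWLocks additionally use spinlocks, per-process semaphores, and wait queues in shared memory):

```python
import threading

class LWLockSketch:
    """Toy shared/exclusive lock mirroring LWLock semantics: many
    shared holders, or exactly one exclusive holder."""
    def __init__(self):
        self._cond = threading.Condition()
        self._shared = 0          # number of shared holders
        self._exclusive = False   # is an exclusive holder present?

    def acquire(self, mode):
        with self._cond:
            if mode == "shared":
                while self._exclusive:          # readers wait only for a writer
                    self._cond.wait()
                self._shared += 1
            else:
                while self._exclusive or self._shared:  # writer waits for everyone
                    self._cond.wait()
                self._exclusive = True

    def release(self, mode):
        with self._cond:
            if mode == "shared":
                self._shared -= 1
            else:
                self._exclusive = False
            self._cond.notify_all()

lock = LWLockSketch()
lock.acquire("shared")
lock.acquire("shared")  # a second shared holder is admitted immediately
```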
This document discusses PostgreSQL replication. It provides an overview of replication, including its history and features. Replication allows data to be copied from a primary database to one or more standby databases. This allows for high availability, load balancing, and read scaling. The document describes asynchronous and synchronous replication modes.
Tanel Poder - Troubleshooting Complex Oracle Performance Issues - Part 2
This document summarizes a series of performance issues seen by the author in their work with Oracle Exadata systems. It describes random session hangs occurring across several minutes, with long transaction locks and I/O waits seen. Analysis of AWR reports and blocking trees revealed that many sessions were blocked waiting on I/O, though initial I/O metrics from the OS did not show issues. Further analysis using ASH activity breakdowns and OS tools like sar and vmstat found high apparent CPU usage in ASH that was not reflected in actual low CPU load on the system. This discrepancy was due to the way ASH attributes non-waiting time to CPU. The root cause remained unclear.
UKC - Feb 2013 - Analyzing the security of Windows 7 and Linux for cloud comp..., by Vincent Giersch
University of Kent 2013 - CO899 System security
Presentation of the article:
Salah K, et al, Computers & Security (2012), http://dx.doi.org/10.1016/j.cose.2012.12.001
Best Practices for Deploying Enterprise Applications on UNIX, by Noel McKeown
The document provides best practices for preparing a UNIX server for deploying enterprise applications. It discusses tasks such as OS installation, hardening the server, configuring shared storage, setting up system accounts, enabling sudo privileges, and disabling security features like iptables and SELinux that could interfere with applications. The goal is to baseline the server, lock down access, and set it up securely according to industry standards before deploying enterprise software.
This document discusses best practices for MySQL system administration. It covers things to consider before and after installing MySQL such as hardware requirements, filesystem choices, disk partitioning and MySQL configuration tuning. It also discusses online backup and maintenance techniques using tools like Percona Toolkit to minimize downtime during operations like schema changes. Regular monitoring, testing backups and optimizing SQL code are emphasized.
The document provides an overview of IBM Informix database security from both an operating system and database perspective. It discusses how Informix uses OS authentication, permissions, and network security capabilities. On the database side, it describes how Informix implements discretionary access control using SQL GRANT/REVOKE statements and label-based access control using security policies and labels. The document also outlines the seven distinct security roles in Informix and how to separate them, and provides details on configuring and using the Informix auditing functionality.
This document summarizes Docker security features as of release 1.12. It discusses key security modules like namespaces, cgroups, capabilities, seccomp, AppArmor/SELinux that provide access control and isolation in Docker containers. It also covers multi-tenant security, image signing, TLS for daemon access, and best practices like using official images and regular updates.
Securing MongoDB to Serve an AWS-Based, Multi-Tenant, Security-Fanatic SaaS A..., by MongoDB
MongoDB introduces new capabilities that change the way micro-services interact with the database, capabilities that are either absent or exist only partially in high-end commercial databases such as Oracle. In this session I will share from my experiences building a cloud-based, multi-tenant SaaS application with extreme security requirements. We will cover topics including considerations for storing multi-tenant data in the database, best practices for authentication and authorization, and performance considerations specific to security in MongoDB.
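One recurring pattern for the multi-tenant data consideration mentioned above is to force every query through a server-side filter that injects the caller's tenant key, never trusting a tenant field supplied by the client. A sketch of the idea (the `tenant_id` field name is illustrative, not taken from the talk):

```python
def tenant_scoped_filter(tenant_id, user_filter):
    """Merge the authenticated tenant's key into a query filter,
    overwriting any tenant field the client tried to smuggle in."""
    query = dict(user_filter)          # never mutate the caller's dict
    query["tenant_id"] = tenant_id     # server-side value always wins
    return query

# A client attempting cross-tenant access is silently re-scoped.
q = tenant_scoped_filter("acme", {"status": "open", "tenant_id": "evil"})
```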
Presented by Tim Mackey, Senior Technology Evangelist, Black Duck Software on August 17.
To use containers safely, you need to be aware of potential security issues and the tools you need for securing container-based systems. Secure production use of containers requires an understanding of how attackers might seek to compromise the container, and what you should be aware of to minimize that potential risk.
Tim Mackey, Senior Technical Evangelist at Black Duck Software, provides guidance for developing container security policies and procedures around threats such as:
1. Network security
2. Access control
3. Tamper management and trust
4. Denial of service and SLAs
5. Vulnerabilities
Register today to learn about the biggest security challenges you face when deploying containers, and how you can effectively deal with those threats.
Watch the webinar on BrightTalk: http://bit.ly/2bpdswg
The document discusses the Windows operating system architecture and boot process. It explains that Windows uses a kernel to manage hardware resources and runs most programs in user mode for security. The boot process begins with the BIOS or UEFI initializing hardware and loading the Windows bootloader which then loads Windows kernel files and starts Windows services based on registry entries.
This document provides an overview of setting up an iOS penetration testing environment and common techniques for analyzing iOS applications. It discusses jailbreaking a device and installing useful tools. It also covers understanding the iOS file system and Objective-C runtime, using tools like Cycript and class-dump-z to enable runtime analysis and manipulation. The document describes insecure data storage techniques like plist files, NSUserDefaults, and CoreData that store unencrypted data. It also discusses analyzing network traffic and automated testing.
Session at ContainerDay Security 2023 on the 8th of March in Hamburg.
Containers are awesome. The technology finds more and more adoption in our daily IT lives. Containers are fast, agile, and shareable. But all those positives bring a downside: visibility. Can I trust every container's content? Is my container behaving as it should? It's so fast, how can I catch anomalies? We want to tackle those questions in our session and show you what Falco and Sysdig can do to win back container visibility without any loss of container benefits.
This document provides an overview of container security best practices. It discusses challenges in securing components of the container infrastructure like images, registries, runtimes and orchestrators. It outlines common container threats like privilege escalation attacks and misconfigured containers. The document recommends mitigations like using vetted base images, access controls, network segmentation and updating components. It also references resources like the OWASP Docker Top 10, NIST container security guide and CIS Docker benchmark that provide guidelines for container hardening. In summary, the key is to monitor components, limit access, use segmentation and follow security standards to protect the container environment.
Information Security Lesson 4 - Baselines, by Eric Vanderburg
The document discusses security baselines and hardening systems and networks. It covers topics like disabling unused services, using security templates to configure Windows settings, implementing group policy for domain configurations, and applying patches and filters to harden applications, operating systems, databases, and network devices. The document also defines several common acronyms related to information security.
This document provides an overview of a training course on system and network security for Windows 2003/XP/2000. It discusses what the course will cover, including the native security features of these Windows operating systems, how to lock down and secure Windows systems, and vulnerabilities and countermeasures. It also summarizes new and modified security features in Windows Server 2003 such as the Common Language Runtime, Internet Connection Firewall, account behavior changes, and enhancements to Encrypted File System, IPSec, authorization manager, and IIS 6.0.
In this PowerPoint, learn how a security policy can be your first line of defense. Servers running AIX and other operating systems are frequent targets of cyberattacks, according to the Data Breach Investigations Report. From DoS attacks to malware, attackers have a variety of strategies at their disposal. Having a security policy in place makes it easier to ensure you have appropriate controls in place to protect mission-critical data.
The document discusses deploying FuseMQ, an enterprise messaging system, in large enterprise environments using Fuse Fabric. Fuse Fabric provides centralized configuration and management of FuseMQ brokers across multiple hosts. It allows easy creation and configuration of brokers as well as updating the broker configuration across all hosts. It also provides broker discovery and failover capabilities for messaging clients.
Similar to Security best practices for Informix:
Choosing the right platform for your Internet-of-Things solution, by IBM_Info_Management
Deploying a solution within the context of the Internet of Things (IoT) typically involves many considerations, ranging from the hardware involved to the architecture of the whole environment, and from decisions about where processing and analytics take place to the software choices that allow you to exploit the Internet of Things. This presentation focuses on the need to support a homogeneous processing environment: it is preferable for processing in all tiers of the IoT to be consistent and compatible. This joint presentation goes on to discuss the implications of this consistency for database selection.
Leveraging compute power at the edge - M2M solutions with Informix in the IoT..., by IBM_Info_Management
This document discusses leveraging computational power at the edge of IoT/M2M solutions using Informix in an IoT gateway architecture. It begins with an overview of IoT solutions focusing on utilities and smart energy. It then describes a Java/OSGi-based OT architecture and building blocks for processing data and integrating with enterprise IT systems using Informix. Example use cases are provided and it concludes that such an architecture can reduce costs and complexity while preserving customer value propositions.
Informix on ARM and informix Timeseries - producing an Internet-of-Things sol...IBM_Info_Management
This document discusses using Informix databases and TimeSeries capabilities to create an Internet of Things solution for collecting and storing sensor data from devices. It describes how a Raspberry Pi or other ARM-based board running Informix could be used as a smart gateway to collect data from various sensors using a microcontroller board like Arduino, store the data locally in an Informix database using TimeSeries for efficiency, and also connect to the cloud for additional storage and analytics. Informix is presented as a good fit for this application due to its ability to run on low-cost ARM hardware, support for efficient TimeSeries data storage, and embeddability with no external database administration required.
This document provides an overview of IBM's Internet of Things (IoT) architecture and capabilities. It discusses the key components of an IoT architecture including intelligent gateways, sensor analytics zones, and the deep analytics zone in the cloud. It describes how gateways can help IoT solutions by reducing cloud costs and latency through local analytics and filtering of sensor data. The document then outlines the requirements for databases in gateways, and explains how IBM's Informix database is well-suited to meet these requirements through its small footprint, low memory usage, support for time series and spatial data, and ability to ingest and analyze sensor data in real-time. Finally, it discusses how Informix can be used both in gateways and
Highly successful performance tuning of an informix databaseIBM_Info_Management
This document contains several notices and disclaimers related to IBM products, services, and information. It states that IBM owns the copyright to the document and its contents. It also notes that performance results may vary depending on the environment. The document is provided without warranty and IBM is not liable for damages from its use. Statements regarding IBM's future plans are subject to change.
The document summarizes a presentation on developing hybrid applications with Informix. It discusses how the Informix wire listener allows unified access to JSON, relational, and time series data through MongoDB and REST APIs. It enables applications to execute SQL statements and perform joins across different data sources. Sample applications demonstrate basic CRUD operations using MongoDB and REST interfaces with Informix.
This document provides best practices for implementing high availability and disaster recovery solutions for Informix databases using HDR, RSS, SDS, and connection manager technologies. It discusses configuration parameters and strategies for minimizing data loss and downtime in the event of failures. Key recommendations include using unbuffered logging, tuning bufferpool and I/O settings, and coordinating transactions across nodes for applications.
End-to-end solution demonstration: From concept to delivery-Intel/IBMIBM_Info_Management
The document discusses a pilot program in Camden Council, London to install individual heat meters in 1,500 properties in block housing developments. This allows residents to be charged based on their actual heat usage rather than a fixed fee. Some residents have reduced usage by over 30% and overall savings of £195,000 and 16,000 tons of CO2 emissions are estimated annually. Hildebrand Technology provided the software and hosting to analyze the thousands of meter readings collected every 6 seconds using IBM Informix TimeSeries database software.
This document provides an overview of IBM's Internet of Things architecture and capabilities. It discusses how IBM's Informix database can be used in intelligent gateways and the cloud for IoT solutions. Specifically, it outlines how Informix is well-suited for gateway and cloud environments due to its small footprint, support for time series and spatial data, and ability to handle both structured and unstructured data. The document also provides examples of how Informix can be used with Node-RED and Docker to develop IoT applications and deploy databases in the cloud.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
Introduction of Cybersecurity with OSS at Code Europe 2024Hiroshi SHIBATA
I develop the Ruby programming language, RubyGems, and Bundler, which are package managers for Ruby. Today, I will introduce how to enhance the security of your application using open-source software (OSS) examples from Ruby and RubyGems.
The first topic is CVE (Common Vulnerabilities and Exposures). I have published CVEs many times. But what exactly is a CVE? I'll provide a basic understanding of CVEs and explain how to detect and handle vulnerabilities in OSS.
Next, let's discuss package managers. Package managers play a critical role in the OSS ecosystem. I'll explain how to manage library dependencies in your application.
I'll share insights into how the Ruby and RubyGems core team works to keep our ecosystem safe. By the end of this talk, you'll have a better understanding of how to safeguard your code.
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Freshworks Rethinks NoSQL for Rapid Scaling & Cost-EfficiencyScyllaDB
Freshworks creates AI-boosted business software that helps employees work more efficiently and effectively. Managing data across multiple RDBMS and NoSQL databases was already a challenge at their current scale. To prepare for 10X growth, they knew it was time to rethink their database strategy. Learn how they architected a solution that would simplify scaling while keeping costs under control.
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready and for which client coverage is growing and scaling and performance aspects are life and death questions. The system has Redis, MongoDB, and stream processing based on ksqldb. In this talk, firstly, we will analyze scaling approaches and then select the proper ones for our system.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
Digital Banking in the Cloud: How Citizens Bank Unlocked Their MainframePrecisely
Inconsistent user experience and siloed data, high costs, and changing customer expectations – Citizens Bank was experiencing these challenges while it was attempting to deliver a superior digital banking experience for its clients. Its core banking applications run on the mainframe and Citizens was using legacy utilities to get the critical mainframe data to feed customer-facing channels, like call centers, web, and mobile. Ultimately, this led to higher operating costs (MIPS), delayed response times, and longer time to market.
Ever-changing customer expectations demand more modern digital experiences, and the bank needed to find a solution that could provide real-time data to its customer channels with low latency and operating costs. Join this session to learn how Citizens is leveraging Precisely to replicate mainframe data to its customer channels and deliver on their “modern digital bank” experiences.
5th LF Energy Power Grid Model Meet-up SlidesDanBrown980551
5th Power Grid Model Meet-up
It is with great pleasure that we extend to you an invitation to the 5th Power Grid Model Meet-up, scheduled for 6th June 2024. This event will adopt a hybrid format, allowing participants to join us either through an online Mircosoft Teams session or in person at TU/e located at Den Dolech 2, Eindhoven, Netherlands. The meet-up will be hosted by Eindhoven University of Technology (TU/e), a research university specializing in engineering science & technology.
Power Grid Model
The global energy transition is placing new and unprecedented demands on Distribution System Operators (DSOs). Alongside upgrades to grid capacity, processes such as digitization, capacity optimization, and congestion management are becoming vital for delivering reliable services.
Power Grid Model is an open source project from Linux Foundation Energy and provides a calculation engine that is increasingly essential for DSOs. It offers a standards-based foundation enabling real-time power systems analysis, simulations of electrical power grids, and sophisticated what-if analysis. In addition, it enables in-depth studies and analysis of the electrical power grid’s behavior and performance. This comprehensive model incorporates essential factors such as power generation capacity, electrical losses, voltage levels, power flows, and system stability.
Power Grid Model is currently being applied in a wide variety of use cases, including grid planning, expansion, reliability, and congestion studies. It can also help in analyzing the impact of renewable energy integration, assessing the effects of disturbances or faults, and developing strategies for grid control and optimization.
What to expect
For the upcoming meetup we are organizing, we have an exciting lineup of activities planned:
-Insightful presentations covering two practical applications of the Power Grid Model.
-An update on the latest advancements in Power Grid -Model technology during the first and second quarters of 2024.
-An interactive brainstorming session to discuss and propose new feature requests.
-An opportunity to connect with fellow Power Grid Model enthusiasts and users.
Main news related to the CCS TSI 2023 (2023/1695)Jakub Marek
An English 🇬🇧 translation of a presentation to the speech I gave about the main changes brought by CCS TSI 2023 at the biggest Czech conference on Communications and signalling systems on Railways, which was held in Clarion Hotel Olomouc from 7th to 9th November 2023 (konferenceszt.cz). Attended by around 500 participants and 200 on-line followers.
The original Czech 🇨🇿 version of the presentation can be found here: https://www.slideshare.net/slideshow/hlavni-novinky-souvisejici-s-ccs-tsi-2023-2023-1695/269688092 .
The videorecording (in Czech) from the presentation is available here: https://youtu.be/WzjJWm4IyPk?si=SImb06tuXGb30BEH .
zkStudyClub - LatticeFold: A Lattice-based Folding Scheme and its Application...Alex Pruden
Folding is a recent technique for building efficient recursive SNARKs. Several elegant folding protocols have been proposed, such as Nova, Supernova, Hypernova, Protostar, and others. However, all of them rely on an additively homomorphic commitment scheme based on discrete log, and are therefore not post-quantum secure. In this work we present LatticeFold, the first lattice-based folding protocol based on the Module SIS problem. This folding protocol naturally leads to an efficient recursive lattice-based SNARK and an efficient PCD scheme. LatticeFold supports folding low-degree relations, such as R1CS, as well as high-degree relations, such as CCS. The key challenge is to construct a secure folding protocol that works with the Ajtai commitment scheme. The difficulty, is ensuring that extracted witnesses are low norm through many rounds of folding. We present a novel technique using the sumcheck protocol to ensure that extracted witnesses are always low norm no matter how many rounds of folding are used. Our evaluation of the final proof system suggests that it is as performant as Hypernova, while providing post-quantum security.
Paper Link: https://eprint.iacr.org/2024/257
Have you ever been confused by the myriad of choices offered by AWS for hosting a website or an API?
Lambda, Elastic Beanstalk, Lightsail, Amplify, S3 (and more!) can each host websites + APIs. But which one should we choose?
Which one is cheapest? Which one is fastest? Which one will scale to meet our needs?
Join me in this session as we dive into each AWS hosting service to determine which one is best for your scenario and explain why!
Programming Foundation Models with DSPy - Meetup SlidesZilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
[OReilly Superstream] Occupy the Space: A grassroots guide to engineering (an...Jason Yip
The typical problem in product engineering is not bad strategy, so much as “no strategy”. This leads to confusion, lack of motivation, and incoherent action. The next time you look for a strategy and find an empty space, instead of waiting for it to be filled, I will show you how to fill it in yourself. If you’re wrong, it forces a correction. If you’re right, it helps create focus. I’ll share how I’ve approached this in the past, both what works and lessons for what didn’t work so well.
Building Production Ready Search Pipelines with Spark and MilvusZilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
2. • IBM's statements regarding its plans, directions, and intent are subject to change or withdrawal
without notice at IBM's sole discretion.
• Information regarding potential future products is intended to outline our general product direction
and it should not be relied on in making a purchasing decision.
• The information mentioned regarding potential future products is not a commitment, promise, or
legal obligation to deliver any material, code or functionality. Information about potential future
products may not be incorporated into any contract.
• The development, release, and timing of any future features or functionality described for our
products remains at our sole discretion.
Performance is based on measurements and projections using standard IBM benchmarks in a
controlled environment. The actual throughput or performance that any user will experience will vary
depending upon many factors, including considerations such as the amount of multiprogramming in the
user's job stream, the I/O configuration, the storage configuration, and the workload processed.
Therefore, no assurance can be given that an individual user will achieve results similar to those stated
here.
Please Note:
2
4. 3
Users root and informix
• The root user can ultimately do anything
Who knows the root password?
How do users become root?
• The informix user is omnipotent on the IDS server
Who knows the informix password?
How do administrators become informix?
• sudo
• Use Role Separation as an alternative
5. 4
Role Separation
• Alternative to all administrators using user informix
• Do not add users to group informix
• DBSA depends on group of INFORMIXDIR/etc
• DBSSO group depends on group of INFORMIXDIR/dbssodir
• AAO group depends on group of INFORMIXDIR/aaodir
• Backup and Recovery group — bargroup
6. 5
How to Enable Role Separation
• On Windows, role separation is enabled during install
Re-install IDS if necessary
No other supported way of doing it
• On Unix, role separation can be set during install
Choose the option (AAO and DBSSO only)
7. 6
How to Enable Role Separation
• On Unix, role separation can be changed after install
DBSA etc
AAO aaodir
DBSSO dbssodir
Change group that owns relevant directory
• Set SGID bit on directory
• Restart IDS
Fix permissions on oninit for the DBSA group
• chmod o+x $INFORMIXDIR/bin/oninit
Fix group permissions on $ONCONFIG (dbsa group)
Fix group permissions on aaodir/adtcfg (aao group)
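On Unix, the steps above can be sketched as shell commands. This is a minimal sketch run against a scratch directory standing in for $INFORMIXDIR; the current user's group stands in for a real DBSA group, and on a real server you would chgrp to your site's admin groups as root and then restart IDS:

```shell
#!/bin/sh
# Sketch of enabling role separation after install. A scratch directory
# stands in for $INFORMIXDIR and the current user's group stands in for
# a DBSA group (both are placeholders, not real Informix defaults).
INFORMIXDIR=$(mktemp -d)
mkdir -p "$INFORMIXDIR/etc" "$INFORMIXDIR/aaodir" "$INFORMIXDIR/dbssodir"

# 1. Change the group that owns the relevant directory
#    (on a real server: chgrp dbsa_group $INFORMIXDIR/etc, as root)
chgrp "$(id -gn)" "$INFORMIXDIR/etc"

# 2. Make the directory group-writable and set the SGID bit so files
#    created inside inherit the directory's group
chmod 775 "$INFORMIXDIR/etc"
chmod g+s "$INFORMIXDIR/etc"

# 's' in the group-execute slot confirms the SGID bit is set
stat -c '%A' "$INFORMIXDIR/etc"   # drwxrwsr-x
```

After fixing the oninit and $ONCONFIG permissions as listed above, restart IDS for the change to take effect.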
8. 7
Server File Access
• IDS depends on several files
Server installation
Configuration files
Data files — chunks
• Required owner, group, mode
World access – NO
• onsecurity utility
9. 8
Server Installation and Setup
• Isolate the Data Server
Place the data server on its own machine
• Use appropriate controls on who can access the server machine
• Use firewalls as appropriate
• Don't let arbitrary users on arbitrary machines access the server
ports
Separate the data server from application servers
• Especially web servers
When not possible to use separate hardware
• Split client INFORMIXDIR from server
10. 9
Insulate Servers from Change
• Always install new versions in a new directory
This limits downtime
And provides safe backout strategy
• Make sure INFORMIXDIR is a symbolic link
• Standardize the ONCONFIG file
• If you have multiple instances on a single machine
Keep each one in a separate INFORMIXDIR
• Always deny public write access
• Usually deny public read access
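The symlink strategy can be sketched as follows; the base path and version directories are illustrative, and a scratch directory stands in for the real install location:

```shell
#!/bin/sh
# Sketch of installing each release in its own directory and pointing
# INFORMIXDIR at it via a symbolic link. Paths and version numbers are
# illustrative; a scratch directory stands in for /opt.
base=$(mktemp -d)
mkdir -p "$base/ids-12.10" "$base/ids-14.10"

# Initial install: INFORMIXDIR is a link, not a real directory
ln -s "$base/ids-12.10" "$base/informix"

# Upgrade: re-point the link; backing out is just pointing it
# at the previous release directory again
ln -sfn "$base/ids-14.10" "$base/informix"
readlink "$base/informix"
```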
11. 10
Insulate Servers from Change
• Keep things that stay constant out of INFORMIXDIR
Device files
Log files
• Think of INFORMIXDIR as 'long-term temporary'
It will be removed after next upgrade
12. 11
Insulate Servers from Change
• DUMPDIR should not point to /tmp
• DUMPDIR big enough for 2 shared memory dumps
• Use standard names and locations for chunks
Always use symbolic links to the actual chunks
• Ensure security of sub-directories of $INFORMIXDIR
Also security of directories to device (chunk) directory
• Use a separate directory for user informix's home
Do not use $INFORMIXDIR
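The chunk-naming advice can be sketched like this; the device and link names are made up, and a scratch directory stands in for the real filesystem:

```shell
#!/bin/sh
# Sketch of referencing chunks through stable symbolic links rather
# than raw device paths, so storage can be moved later by re-pointing
# the link. All paths here are made-up stand-ins.
root=$(mktemp -d)
mkdir -p "$root/chunks" "$root/dev"
: > "$root/dev/rdsk_c0t1d0"          # stand-in for a raw device

# dbspaces are created against the link, never the device itself
ln -s "$root/dev/rdsk_c0t1d0" "$root/chunks/rootdbs_chunk1"
chmod 660 "$root/dev/rdsk_c0t1d0"    # chunks must not be world-accessible
readlink "$root/chunks/rootdbs_chunk1"
```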
13. 12
The onsecurity Utility (UNIX and Linux)
• onsecurity utility checks the security of a file, directory, or
path
• Troubleshoots security problems if any are detected
• Use the onsecurity command to:
Check the security of the path leading to a directory or a file
Generate diagnostic output to explain the security problems
Generate a script that can be run by root to fix the problems
• You can use the script as generated
• Or modify it to your environment's security needs
14. 13
The onsecurity Utility (UNIX and Linux)
• For special circumstances only:
• Specify that particular users, groups, or directories can be
trusted:
Add the information to files in the /etc/informix directory
• trusted.users
• trusted.groups
• trusted.insecure.directories
• Normally, you will be told that the path is secure
• If the path is secure, you do not need to do anything more
15. 14
An example of onsecurity at work
$ onsecurity /work/informix/ids-11
# !!! SECURITY PROBLEM !!!
# /work/informix/ids-11 (path is not trusted)
# Analysis:
# User Group Mode Type Secure Name
# 0 root 0 root 0755 DIR YES /
# 0 root 0 root 0755 DIR YES /work
# 203 unknown 8714 ccusers 0777 DIR NO /work/informix
# 200 informix 102 informix 0755 DIR NO /work/informix/ids-11
# Name: /work/informix
# Problem: owner <unknown> (uid 203) is not trusted
# Problem: group ccusers (gid 8714) is not trusted but can modify the directory
# Problem: the permissions 0777 include public write access
16. • The informix directory of the path /work/informix has problems:
the owner of this directory is not a trusted user
the group that controls the directory is not trusted
the directory has public write access
• Possible fixes:
Change the owner to root or informix
Change the group to a system group or informix
Remove public write access
• Or grant exemptions
Dangerous, in general!
The onsecurity Utility example
15
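The three fixes can be sketched as commands. A scratch directory stands in for /work/informix and the current user/group stand in for informix:informix; on the real path you would run these as root:

```shell
#!/bin/sh
# Sketch of repairing the insecure directory from the onsecurity
# example above. A scratch directory stands in for /work/informix and
# the current user/group stand in for informix:informix.
dir=$(mktemp -d)
chmod 0777 "$dir"                # reproduce the reported insecure mode

chown "$(id -un)" "$dir"         # really: chown informix /work/informix
chgrp "$(id -gn)" "$dir"         # really: chgrp informix /work/informix
chmod o-w "$dir"                 # remove public write access
stat -c '%a' "$dir"              # 775: public write access is gone
```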
17. 16
• At server startup, oninit checks the security of key directories:
Subdirectory Owner Group Permissions
INFORMIXDIR informix informix 755
bin informix informix 755
lib informix informix 755
gls informix informix 755
msg informix informix 755
etc informix DBSA 775
aaodir informix AAO 775
dbssodir informix DBSSO 775
tmp informix informix 770
Security checking at server startup
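The table above can be expressed as a small setup script. A scratch tree stands in for $INFORMIXDIR, and the ownership changes to the informix, DBSA, AAO, and DBSSO accounts are left as comments because those accounts exist only on a real server:

```shell
#!/bin/sh
# Apply the documented modes to a scratch tree standing in for
# $INFORMIXDIR (chown/chgrp to informix/DBSA/AAO/DBSSO is commented
# because those accounts are server-specific).
INFORMIXDIR=$(mktemp -d)
chmod 755 "$INFORMIXDIR"                       # informix:informix 755
for d in bin lib gls msg; do                   # informix:informix 755
    mkdir -p "$INFORMIXDIR/$d" && chmod 755 "$INFORMIXDIR/$d"
done
for d in etc aaodir dbssodir; do               # group DBSA/AAO/DBSSO 775
    mkdir -p "$INFORMIXDIR/$d" && chmod 775 "$INFORMIXDIR/$d"
done
mkdir -p "$INFORMIXDIR/tmp" && chmod 770 "$INFORMIXDIR/tmp"
stat -c '%a %n' "$INFORMIXDIR"/*
```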
18. 17
INFORMIXDIR permissions
• Many Informix utilities check file permissions at startup
• Errors detected at this point will be reported
And the program will exit
• Run onsecurity with appropriate options
• Refer to Chapter 1, IBM Informix Security Guide
19. 18
Backup and Restore (BAR)
• Members of bargroup are allowed to do backup and restore
bargroup is a Unix group with a fixed name
• Backup is just as sensitive as live data
Data has been compromised by loss of backup media
Protect the backup copy
20. 19
Connection Security
• Control who can connect to the server
by default anyone with login access to machine
or a "trusted" machine (hosts.equiv, .rhosts)
• Think about using PAM
even for UNIX type access
can be used to deny access to certain accounts
• e.g. Linux pam_access.so
• Encrypted connections to server
Without encryption, passwords are sent in plain text.
ENCCSM
SPWDCSM
SSL
21. • Avoid using the old r-command configuration files
• Use new configuration parameters
REMOTE_SERVER_CONFIG
• Which remote machines should be trusted
REMOTE_USERS_CONFIG
• Which remote users should be trusted
• Instead of /etc/hosts.equiv and ~/.rhosts
Connection Security
20
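A hedged onconfig sketch of the two parameters; the file names and locations are illustrative, so check the Security Guide for the exact file formats:

```
# onconfig entries pointing at site-managed trust files
# (paths are illustrative)
REMOTE_SERVER_CONFIG  /opt/informix/etc/trusted_hosts   # instead of /etc/hosts.equiv
REMOTE_USERS_CONFIG   /opt/informix/etc/trusted_users   # instead of ~/.rhosts
```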
22. 21
Enabling Encrypted Communications
• Create or modify server entry in sqlhosts file
server_1_enc olsoctcp host 9089 csm=(s1_enc)
• Create or modify concsm.cfg file
s1_enc("/usr/informix/lib/csm/libixenc.so",
"cipher[aes:cbc],timeout[cipher:1440,key:60],
mac[levels:<high,medium>,files:<builtin>]")
• Add new server alias to ONCONFIG
• Restart IDS
23. 22
Enabling Encrypted Communications
• ODBC can use ENCCSM
• JDBC can use an equivalent of ENCCSM
String url = "jdbc:informix-sqli://host:9089/sysmaster"
    + ":informixserver=server_1_enc;user=bob;password=bobpass"
    + ";csm=(classname=com.informix.jdbc.Crypto,config=concsm.cfg)";
• For more details, see Informix Security Guide
24. 23
JCC and JDBC
• Java Common Client (JCC) provides encryption
Using GSKit and SSL
• http://tinyurl.com/467gpr
• http://tinyurl.com/4jr4yu
• Legacy JDBC type IV driver provides encryption
Password encryption
• SPWDCSM
Full encryption
• ENCCSM
25. • New communication protocol
drsocssl — SSL for DRDA clients
olsocssl — SSL for SQLI client
• Also supported for server to server communications
• I-Star, HDR, ER, RSS, SDS
• Example sqlhosts file entries
horus_31_ol_ssl olsocssl horus horus_ol_ssl
horus_31_dr_ssl drsocssl horus horus_dr_ssl
Setting up SSL — sqlhosts
24
26. • SSL_KEYSTORE_LABEL
Specifies label of server digital certificate in keystore
• If not specified in ONCONFIG, uses default label in keystore
• But default label is officially deprecated — be explicit
• SSL_KEYSTORE_LABEL ids_ssl_label
• Extra options for NETTYPE
NETTYPE protocol, poll threads, connections, VP class
• Specify the protocol as iiippp
• Where:
– iii = [ipc|soc|tli]
– ppp = [shm|str|tcp|spx|ssl]
• NETTYPE socssl, 3, 50, NET
Setting up SSL — onconfig
25
27. • All encryption/decryption options performed on encrypt VPs
• Encrypt VPs configured via VPCLASS
VPCLASS encrypt,num=5
• Support encrypted and non-encrypted connections
DBSERVERNAME horus_31
DBSERVERALIASES horus_31_ol_ssl,horus_31_dr_ssl
Setting up SSL — onconfig
26
28. • IBM's Global Security Kit, GSKit, is installed with Informix
Server
ClientSDK and Connect
• GSKit contains gsk8capicmd_64 utility
Used to create keystores and manage digital certificates
Needed for SSL communication
• More information on gsk8capicmd_64 at
http://www-01.ibm.com/support/knowledgecenter/SSVJJU_6.2.0/com.ibm.IBMDS.doc/admin_gd174.htm
Keystores and Digital Certificates
27
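A hedged sketch of creating the server keystore and a self-signed certificate with gsk8capicmd_64; the label, password, and distinguished name are placeholders, and the flags follow the GSKit 8 command reference:

```
$ gsk8capicmd_64 -keydb -create -db $INFORMIXDIR/ssl/server.kdb \
      -pw myStashPw -type cms -stash
$ gsk8capicmd_64 -cert -create -db $INFORMIXDIR/ssl/server.kdb \
      -pw myStashPw -label ids_ssl_label \
      -dn "CN=ids.example.com" -size 2048 -default_cert yes
```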
29. • The keystore for server is password protected
• Password is stored encrypted in stash file
Also created by gsk8capicmd_64 utility
• One keystore per server instance.
It stores the server's digital certificate
And root CA certificates of other servers it connects to
• As in I-STAR, HDR, ER, SDS, RSS
Keystores and Digital Certificates
28
30. • The location and name of the files are fixed
Server keystore
• $INFORMIXDIR/ssl/server.kdb
Server password stash
• $INFORMIXDIR/ssl/server.sth
Based on value of DBSERVERNAME
• Ownership and permissions must be correct
User informix, group informix, 660
Keystores and Digital Certificates
29
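The required ownership and mode can be applied as below. Mock files stand in for the real keystore and stash files, so the chown to informix:informix is left as a comment:

```shell
#!/bin/sh
# Set the documented mode on stand-in keystore files. Real paths are
# $INFORMIXDIR/ssl/server.kdb and server.sth, owned informix:informix.
ssl=$(mktemp -d)
: > "$ssl/server.kdb"
: > "$ssl/server.sth"
# really also: chown informix:informix $INFORMIXDIR/ssl/server.kdb ...
chmod 660 "$ssl/server.kdb" "$ssl/server.sth"
stat -c '%a %n' "$ssl"/*
```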
31. • Client keystore stores root CA certificates
For all servers the client connects to
• SQLI and DRDA clients can share same keystore
• Password is optional for client keystore
• Location and name of client keystore and its password stash
file can be configured via new configuration file:
$INFORMIXDIR/etc/conssl.cfg
• Note you need to set the permissions on client files correctly
Setting up SSL — Client
30
32. • Configuration parameters in conssl.cfg
SSL_KEYSTORE_FILE
• Absolute path name for client keystore file
SSL_KEYSTORE_STH
• Absolute path name for client stash file
• If conssl.cfg does not exist, defaults to
$INFORMIXDIR/etc/client.kdb
$INFORMIXDIR/etc/client.sth
• Permissions on these files should be:
User informix, group informix, permissions 664
Setting up SSL — Client
31
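A sketch of a conssl.cfg; the paths are illustrative, and if the file is absent the client.kdb/client.sth defaults listed above apply:

```
# $INFORMIXDIR/etc/conssl.cfg (paths are illustrative)
SSL_KEYSTORE_FILE  /opt/informix/etc/client.kdb
SSL_KEYSTORE_STH   /opt/informix/etc/client.sth
```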
33. 32
Access to Data
• Who creates databases?
DBCREATE_PERMISSION
Add a DBCREATE_PERMISSION entry
• For each user who needs to create databases
• Discretionary Access Control
Users should be granted appropriate level of access to
databases and database objects.
Use roles for ease of administration
• GRANT privilege to role
• GRANT role to user
• GRANT default role
Privileges can be granted at DATABASE and TABLE level
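The role pattern above can be sketched in SQL; the role, table, and user names are all made up for illustration:

```sql
-- A sketch of role-based grants (names are illustrative)
CREATE ROLE report_role;
GRANT SELECT ON customer TO report_role;   -- privilege to role
GRANT report_role TO bob;                  -- role to user
GRANT DEFAULT ROLE report_role TO bob;     -- active automatically at connect
```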
34. 33
Other ONCONFIG parameters
• IFX_EXTEND_ROLE
Controls whether administrators can use the EXTEND role to
specify which users can register external routines.
• 0 Any user can register external routines
• 1 Only users granted the EXTEND role can register external
routines (Default)
• DB_LIBRARY_PATH
Specifies the locations from which Informix can use UDR or UDT
shared libraries.
35. 34
Other ONCONFIG parameters
• SECURITY_LOCALCONNECTION
Specifies whether IDS performs security checking for local
connections.
• 0 Off
• 1 Validate userid
• 2 Validate userid and port
• UNSECURE_ONSTAT
Controls whether non-DBSA users are allowed to run all onstat
commands.
• 0 Disabled (Default)
• 1 Enabled
36. 35
Other ONCONFIG parameters
• ADMIN_USER_MODE_WITH_DBSA
Controls who can connect to IDS in administrative mode
• 0 Only informix user (Default)
• 1 DBSAs, users specified by ADMIN_MODE_USERS, and user
informix
• ADMIN_MODE_USERS
Specifies the user names who can connect to IDS in
administrative mode.
• SSL_KEYSTORE_LABEL
The label, up to 512 characters, of the IDS certificate used in
Secure Sockets Layer (SSL) protocol communications.
37. 36
Column Level Encryption (CLE)
• Column-level encryption stores sensitive data as encrypted
strings
• Use it to selectively encrypt sensitive data
Such as credit card numbers
• Only users who can provide the password can decrypt the data
• Use the ENCRYPT_AES() and ENCRYPT_TDES() functions
to encrypt data in columns
• You can optionally use SET ENCRYPTION PASSWORD
To set an encryption password for a session
• INSERT INTO tab1(ssn) VALUES (ENCRYPT_AES("111-22-3333", "password"));
• SELECT DECRYPT(ssn, "password") FROM tab1;
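A sketch of the session-password variant: with SET ENCRYPTION PASSWORD in effect the password argument can be omitted. DECRYPT_CHAR is the character-column decryption function documented in the Security Guide; verify the function names against your server version:

```sql
-- Session-level password for column encryption (sketch)
SET ENCRYPTION PASSWORD "password";
INSERT INTO tab1(ssn) VALUES (ENCRYPT_AES("111-22-3333"));
SELECT DECRYPT_CHAR(ssn) FROM tab1;
```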
38. 37
Label Based Access Control – LBAC
• Label-based access control (LBAC)
Enterprise Edition only
An implementation of multi-level security (MLS)
You control who has read access and who has write access
• To individual rows and columns of data
• MLS systems process information with different security levels
Permit simultaneous access by users with different security
clearances
Allow users access only to information for which they have
authorization
39. 38
Label Based Access Control – LBAC
• Create Security Policy and attach it to a table
• Create Security Labels and attach labels to data
• Grant labels to users
• Users can only access data with labels that "match" theirs
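The four steps can be sketched roughly as follows; the syntax is paraphrased from the LBAC chapter of the Security Guide and every name here is made up, so verify the exact statements against your server version:

```sql
-- Rough LBAC sketch (verify syntax for your Informix version)
CREATE SECURITY LABEL COMPONENT level ARRAY ['SECRET', 'PUBLIC'];
CREATE SECURITY POLICY company COMPONENTS level;
-- attach the policy when creating the table, e.g.
-- CREATE TABLE docs (...) SECURITY POLICY company;
CREATE SECURITY LABEL company.secret COMPONENT level 'SECRET';
GRANT SECURITY LABEL company.secret TO bob;   -- bob can now access SECRET rows
```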
40. 39
Audit
• Audit allows you to keep a log of important server events
• You should enable IDS auditing
Decide which events need to be audited
Decide which users need to be audited
• Audit the DBSA
Setup Appropriate Audit Masks
• Examine the audit logs for unexpected events
onshowaudit
• Save the audit logs
Easily compressible
Event generated when change to next audit log file
• Protect the audit logs carefully
41. 40
IDS Server Log
• Lots of valuable information is written to the server log
Failed login attempts
Audit Mode changes
Audit log file changes
• But you have to look!
Be sure to monitor its contents
45. We Value Your Feedback!
Don't forget to submit your Insight session and speaker
feedback! Your feedback is very important to us – we use it
to continually improve the conference.
Access the Insight Conference Connect tool at
insight2015survey.com to quickly submit your surveys from
your smartphone, laptop or conference kiosk.
44
47. 46
Notices and Disclaimers (cont'd)
Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly
available sources. IBM has not tested those products in connection with this publication and cannot confirm the accuracy of performance,
compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the
suppliers of those products. IBM does not warrant the quality of any third-party products, or the ability of any such third-party products to
interoperate with IBM's products. IBM EXPRESSLY DISCLAIMS ALL WARRANTIES, EXPRESSED OR IMPLIED, INCLUDING BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
The provision of the information contained herein is not intended to, and does not, grant any right or license under any IBM patents, copyrights,
trademarks or other intellectual property right.
• IBM, the IBM logo, ibm.com, Aspera®, Bluemix, Blueworks Live, CICS, Clearcase, Cognos®, DOORS®, Emptoris®, Enterprise Document
Management System™, FASP®, FileNet®, Global Business Services ®, Global Technology Services ®, IBM ExperienceOne™, IBM
SmartCloud®, IBM Social Business®, Information on Demand, ILOG, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON,
OpenPower, PureAnalytics™, PureApplication®, pureCluster™, PureCoverage®, PureData®, PureExperience®, PureFlex®, pureQuery®,
pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, Smarter Commerce®, SoDA, SPSS, Sterling Commerce®, StoredIQ,
Tealeaf®, Tivoli®, Trusteer®, Unica®, urban{code}®, Watson, WebSphere®, Worklight®, X-Force® and System z® Z/OS, are trademarks of
International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be
trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at:
www.ibm.com/legal/copytrade.shtml.