This document discusses DNSSEC implementation plans for the .UA domain. It describes:
1) Testing signing a subdomain of .UA with keys generated in November 2011 and publishing the public key.
2) Plans for pre-production, including migrating infrastructure to newer BIND versions, separating signing and publication servers, and possibly using DLV with pre-production keys.
3) Potential issues like algorithm support and key storage, as well as production plans for key generation and deployment in December 2011 and post-production key rotation schedules.
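The key-generation and zone-signing steps described map onto the standard BIND 9 tools. A sketch of that workflow (the zone name, key sizes, and key-file tags below are illustrative, not the registry's actual choices):

```shell
# Generate a key-signing key (KSK) and a zone-signing key (ZSK)
# for a test subdomain; RSASHA256 is DNSSEC algorithm 8.
dnssec-keygen -a RSASHA256 -b 2048 -f KSK test.ua   # KSK
dnssec-keygen -a RSASHA256 -b 1024 test.ua          # ZSK

# Sign the zone file with both keys; the numeric key tags are placeholders.
dnssec-signzone -o test.ua -k Ktest.ua.+008+11111.key db.test.ua Ktest.ua.+008+22222.key
```

The public KSK is what gets published (or, as the pre-production plan suggests, registered with a DLV registry when the parent zone is not yet signed).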
OpenNebulaConf2019 - Crytek: A Video gaming Edge Implementation "on the shoul... - OpenNebula Project
The document discusses disaggregated data centers using OpenNebula. It describes how OpenNebula allows for scalability through elasticity and avoids issues from human/configuration errors. It discusses types of scalability like predictable, mixed/emergency, and unpredictable scalability. It also briefly discusses provisioning tools like Oneprovision and using provision templates in YAML format.
Blackboard DevCon 2012 - How to Turn on the Lights to Your Blackboard Learn E... - Noriaki Tatsumi
Zabbix is a distributed monitoring solution that can monitor Blackboard Learn. It collects various service quality metrics like uptime, response time, and failure rate to notify administrators of issues and help with troubleshooting. Blackboard uses Zabbix to monitor operations data, perform scalability analysis, and ensure server infrastructure stability. A Zabbix template suite is available to easily monitor Learn applications, Java, Linux, Windows, Tomcat, cache, and ActiveMQ components.
In this talk, we will give an overview of the usage of Nix within LogicBlox. For 4 years, we have used Nix to improve our build, test and deployment infrastructure, and we are using NixOS heavily in production. We would like to highlight why we feel Nix is awesome and give some insight in how we are trying to give back to the Nix community.
Luke Jennings, Countercept
Attackers have been avoiding disk and staying memory resident for over a decade, and this has traditionally proven an Achilles' heel for security products and the teams that operate them. The boom in both EDR products and memory forensics toolkits in recent years has helped defenders fight back, but attackers are already adapting their approaches.
This talk will cover both classic and modern techniques for injecting code into legitimate processes on Microsoft Windows systems, as well as several techniques for detecting them. This will include system tracing methods, good for proactive detection, as well as memory analysis techniques that have the added benefit of allowing detection of pre-existing compromises in real-world incident response scenarios, with a brief case study example. As part of this, practical examples will be given showing how Microsoft's ATP and Sysmon help in this area, alongside other techniques. Finally, the future of this area will be considered, including how the .NET runtime already complicates detection and how this will likely become increasingly challenging as more attackers discover and exploit it.
By the end of the talk, the audience should understand the importance of code injection in the context of memory-resident implants, the key techniques for performing it and detecting it and the challenges of achieving this in the real-world at enterprise scale.
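One of the tracing signals the abstract alludes to is Sysmon, which logs CreateRemoteThread activity as Event ID 8. As a minimal sketch of consuming such telemetry (the XML record below is a hand-made, namespace-free simplification, not verbatim Sysmon output), a detector might filter exported events like this:

```python
import xml.etree.ElementTree as ET

# A simplified Sysmon Event ID 8 (CreateRemoteThread) record. Real Sysmon
# output carries an XML namespace and more fields; this sample is
# constructed for illustration only.
SAMPLE_EVENT = """
<Event>
  <System><EventID>8</EventID></System>
  <EventData>
    <Data Name="SourceImage">C:\\Users\\bob\\payload.exe</Data>
    <Data Name="TargetImage">C:\\Windows\\explorer.exe</Data>
    <Data Name="StartAddress">0x00007FFAB1230000</Data>
  </EventData>
</Event>
"""

def remote_thread_alerts(xml_text):
    """Extract (source, target, start address) from an Event ID 8 record."""
    root = ET.fromstring(xml_text)
    if root.findtext("./System/EventID") != "8":
        return []  # not a CreateRemoteThread event
    data = {d.get("Name"): d.text for d in root.iter("Data")}
    return [(data["SourceImage"], data["TargetImage"], data["StartAddress"])]

for src, tgt, addr in remote_thread_alerts(SAMPLE_EVENT):
    print(f"CreateRemoteThread: {src} -> {tgt} @ {addr}")
```

In practice the source/target image pair would be scored against a baseline (e.g., an unknown binary opening a thread in explorer.exe is far more suspicious than one system process touching another).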
DevOpsDays Taipei 2021 - How FinTech Embrace Change Management - smalltown
This document discusses how FinTech companies can embrace change management when making changes to systems in production. It introduces change management and different types of changes according to ITIL. Traditional change management approaches are outlined along with challenges startups face with limited resources. The document then proposes implementing a chatbot to streamline the request for change process and integrating it with systems to automate permissions and code releases. It concludes by emphasizing the importance of external auditing standards like SOC2 and ISO 27001 for change management processes.
Nagios Conference 2014 - Dave Williams - Multi-Tenant Nagios Monitoring - Nagios
Dave Williams presentation on Multi-Tenant Nagios Monitoring.
The presentation was given during the Nagios World Conference North America held Oct 13th - Oct 16th, 2014 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/conference
DB2 pureScale provides a highly scalable and available database solution. It allows customers to start small and grow capacity easily by adding cluster members without disrupting applications or incurring extra costs. DB2 pureScale uses a shared-disk architecture, with each member running on its own server. It presents a single system view to clients and automatically balances workload across members. Critical features include unlimited scalability, continuous availability even during member failures, and the ability to perform maintenance without outages.
Episode 4 DB2 pureScale Performance Webinar Oct 2010 - Laura Hood
DB2 pureScale provides scalability and high performance through its clustered database architecture. It uses a cluster caching facility to manage data consistency across member nodes and leverage low-latency interconnects like InfiniBand. The architecture features two-level buffer pool caching between local and global pools for improved read performance. Monitoring and tuning focuses on optimizing buffer pool hit ratios at both levels. Initial proof points showed near-linear scalability up to 12 nodes and over 80% scalability even at 128 nodes, demonstrating the architecture's ability to transparently scale database workloads across many servers.
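The tuning guidance here centers on buffer pool hit ratios. A quick sketch of the standard formula, with the two-level (local vs. global) caching in mind; the read counts are invented:

```python
def hit_ratio(logical_reads, physical_reads):
    """Fraction of page requests satisfied without a disk read."""
    if logical_reads == 0:
        return 0.0
    return (logical_reads - physical_reads) / logical_reads

# Two-level caching: pages that miss the member-local pool may still be
# found in the global (CF) pool, so only true disk reads count against
# the overall ratio.
local_only = hit_ratio(1_000_000, 120_000)  # misses leaving the local pool
overall    = hit_ratio(1_000_000, 30_000)   # misses that reached disk
print(f"local: {local_only:.2f}  overall: {overall:.2f}")
```

A large gap between the two ratios indicates the global pool is absorbing most local misses, which is the behavior the webinar's monitoring advice is meant to verify.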
The document discusses IBM's pureScale technology which allows DB2 databases to scale up to 128 nodes for high availability and scalability. PureScale forms a shared-disk cluster and uses proven "data sharing" technology from DB2 for z/OS. It provides agility to rapidly scale up or down capacity as needed with little application change. The company Triton built a basic 2-node pureScale cluster within a budget of under £1K to validate IBM's claims and gain hands-on experience. Their testing showed the cluster delivered 1000 transactions per second under load. The summary concludes that pureScale provides robust clustering with excellent price/performance.
DB2 pureScale provides high availability and continuous operations by automatically recovering from component failures through workload redistribution and fast in-flight transaction recovery. It protects databases by balancing workloads across nodes and uses duplexed secondary components to tolerate multiple simultaneous node failures while keeping other nodes online and services available.
DB2 10.5 contains several new features that provide significant improvements in database compression, performance for analytic applications, and flexibility in indexing. These include BLU Acceleration, expression-based indexes that allow indexing on expressions, and the ability to exclude NULL keys from indexes. DB2 10.5 also allows online table reorganization without taking the table offline, and stores XML data efficiently without losing its semistructured format.
Pure Genius: How To Get Mainframe-Like Scalability & Availability For Midrange DB2 discusses pureScale, an optional feature for DB2 that implements shared-disk clustering to provide high scalability and availability. It can support up to 128 members. The architecture uses a shared database, coordination facilities, and InfiniBand networking. Customers experience scalability gains, easy installation, and resilience like continued operation despite coordination facility failure. The presentation evaluates pureScale's benefits and customer experiences.
DB2 pureScale provides unlimited scalability, application transparency, and continuous availability for transaction processing and ERP workloads. It uses a shared-data architecture in which multiple database members connect to a single shared database and cooperate to provide a single system image to clients. PowerHA pureScale technology handles global bufferpool and lock management to maintain data consistency as members scale out.
DB2 pureScale is a new DB2 feature that allows a DB2 database to span multiple database servers for increased availability, scalability and flexible capacity. It uses a shared disk architecture with Global Parallel File System technology to provide a single database image across nodes. Key components include Cluster Services, InfiniBand networking, global bufferpool and lock manager to coordinate data access and concurrency across nodes. The technology is still in development with initial support for AIX on Power hardware.
The document describes IBM DB2's High Availability Disaster Recovery (HADR) multiple standby configuration. It allows a primary database to have one principal standby and up to two auxiliary standbys. The principal standby supports all sync modes, while auxiliary standbys use super async mode. Takeovers can occur from any standby and DB2 will automatically reconfigure other standbys to connect to the new primary if they are in its target list. The document provides details on configuration, initialization, failover behavior and an example deployment across four servers.
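The multiple-standby setup described above is driven by a handful of database configuration parameters. A hedged sketch using real HADR parameter names but invented database, host, and port values:

```shell
# On the primary: list up to three standbys; the first entry is the
# principal standby. Database, host, and port names are illustrative.
db2 "UPDATE DB CFG FOR sales USING HADR_TARGET_LIST hostB:4001|hostC:4002|hostD:4003"
db2 "UPDATE DB CFG FOR sales USING HADR_SYNCMODE NEARSYNC"  # principal only; auxiliaries run SUPERASYNC

db2 "START HADR ON DB sales AS PRIMARY"

# On each standby host:
db2 "START HADR ON DB sales AS STANDBY"

# Failover to any standby; DB2 redirects the remaining standbys if the
# new primary appears in their hadr_target_list:
db2 "TAKEOVER HADR ON DB sales"
```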
DB2 is a family of database server products developed by IBM that support relational and object-relational models. DB2 was first introduced by IBM in 1983 for mainframe systems and has since been ported to Linux, Unix, and Windows. There are three main DB2 products: DB2 for Linux, Unix, and Windows (DB2 LUW), DB2 for z/OS (mainframe), and DB2 for iSeries. DB2 LUW provides features such as high availability, security, workload management, and federation between data sources. The document discusses DB2 architecture including the instance model, database storage model, engine dispatchable units, and memory architecture.
Herd your chickens: Ansible for DB2 configuration management - Frederik Engelen
This document provides an overview of using Ansible for configuration management and summarizes a presentation on using it to manage DB2 configurations. It describes how Ansible uses inventory files and variables to define environments and target hosts, playbooks to automate configuration tasks, and modules to implement specific changes. The key benefits of Ansible noted are that it is agentless, uses simple text files for definitions, and has a low learning curve compared to other configuration management tools.
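The inventory/playbook/module split described above can be sketched as a minimal playbook. Everything here is illustrative: the host group, instance name, database, and path are invented, and since ansible.builtin has no DB2 module, the change goes through the shell module:

```yaml
# playbook.yml - a hedged sketch, not a production playbook.
- hosts: db2_servers
  become: true
  vars:
    db2_instance: db2inst1
  tasks:
    - name: Set the archive logging method
      ansible.builtin.shell: |
        . ~/sqllib/db2profile
        db2 UPDATE DB CFG FOR sales USING LOGARCHMETH1 DISK:/db2/archive
      become_user: "{{ db2_instance }}"
```

The agentless point in the summary shows up here: nothing is installed on db2_servers beyond SSH access and Python.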
IBM DB2 10.5 for Linux, UNIX, and Windows: Upgrading to DB2 Version 10.5 - bupbechanhgmail
This document provides guidance on upgrading DB2 database environments to version 10.5. It discusses upgrading the various components of a DB2 environment, including DB2 servers, clients, applications, and routines. The document is structured to first discuss planning the upgrade, then upgrading each component, and finally post-upgrade tasks. It provides overviews, essential information, pre-upgrade tasks, upgrade tasks, and post-upgrade tasks for each component. The goal is to help users successfully upgrade their entire DB2 environment to take advantage of the new features in DB2 10.5.
This document provides an overview of the DB2 10.1 Basic Database Administration Workshop for Linux, Unix and Windows. It introduces the instructor, Iqbal Goralwalla, who has extensive experience developing and working with DB2. The document discusses DB2 editions and key features, tools replaced in DB2 10 like Control Center, the new IBM Data Studio tool, and the DB2 instance and process models.
IBM DB2 10.5 for Linux, UNIX, and Windows: What's New for DB2 Version 10.5 - bupbechanhgmail
This document provides an overview of new features and enhancements in DB2 Version 10.5, including:
- New column-organized table option and support for non-enforced primary/unique keys.
- New monitoring metrics for column tables and improved HADR monitoring.
- HADR now supported in DB2 pureScale environments and easier use of customer scripts with ACS.
- Expression-based indexes, larger rows, and exclusion of NULL keys from indexes.
- Customizable workload balancing and client/driver enhancements.
- Installation and configuration changes are described to aid in upgrading to Version 10.5.
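Several of the items above have direct DDL forms. An illustrative sketch of the DB2 10.5 syntax (table and column names are invented):

```sql
-- BLU Acceleration: a column-organized table
CREATE TABLE sales_fact (
  sale_date DATE,
  store_id  INTEGER,
  amount    DECIMAL(10,2)
) ORGANIZE BY COLUMN;

-- An expression-based index
CREATE INDEX ix_cust_name ON customer (UPPER(last_name));

-- A unique index that ignores rows whose key is NULL
CREATE UNIQUE INDEX ix_cust_tax ON customer (tax_id) EXCLUDE NULL KEYS;
```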
This document provides an overview of new features and enhancements in DB2 9.7. Key highlights include improvements to compression, which now supports multiple automatic index compression algorithms and automatic compression of temporary tables. Other improvements focus on resource optimization through storage I/O optimization and ease of storage management, as well as ongoing flexibility through support for schema evolution and online table moves. The document also discusses enhancements for partitioned tables such as local partitioned indexes and improved partitioned table maintenance.
Business Case: IBM DB2 versus Oracle Database - Conor O'Mahony - comahony
The document discusses a presentation comparing IBM DB2 to Oracle Database. It provides background on server, software, storage and staffing costs. It indicates that DB2 often requires less of these resources for the same workloads. Business case studies by ITG are cited that found DB2 saved costs compared to Oracle, with savings ranging from 34-38% for various organizations. More information is provided on reports from ITG and Solitaire with additional performance and cost comparisons between the databases.
Showdown: IBM DB2 versus Oracle Database for OLTP - comahony
The document compares the online transaction processing (OLTP) performance of DB2 and Oracle Database. It discusses how DB2 offers more efficient logging and memory usage, which enables higher transactional performance on benchmarks like TPC-C. It also describes how DB2 pureScale provides faster and more scalable data sharing between nodes through its use of remote direct memory access (RDMA), avoiding the overhead of processes in Oracle RAC. DB2 pureScale allows nearly all data to remain available during node failures through its centralized coordination and recovery capabilities.
Learn how to build your Mac image from the ground up. Create a default user template, clean up the file structure, and utilize shell scripting for optimal automated customization.
Sergey Dzyuban "To Build My Own Cloud with Blackjack…" - Fwdays
Cloud providers like Amazon or Google offer a great user experience for creating and managing PaaS. But is it possible to reproduce the same experience and flexibility locally, in the on-premise datacenter? What if your own infrastructure grows too fast and your team can't deal with it the old way? What do Jenkins, .NET microservices, and TVs for daily meetings have in common?
This talk shares our experience using DC/OS (datacenter operating system) for building flexible and stable infrastructure. I will show the evolution of private cloud from the first steps with Vagrant to the hybrid cloud with instance groups in Google Cloud, the benefits it gives us and the problems we get instead.
This document summarizes a 10-day training session on installing, configuring, and managing Dell PowerEdge blade server and VMware software. The training covers blade server components, installation and setup, storage configuration, iDRAC and CMC management, virtualization with VMware vSphere and vCenter, and installing Windows and Linux operating systems on virtual machines. The goal is to provide hands-on instruction on administering Dell blade server hardware and virtualization platforms.
This document lists over 200 IT and technical courses taken by an individual over their career working in various roles within several companies including IBM, Credicard, Proceda, and Unibanco. The courses cover a wide range of topics including various operating systems, databases, programming languages, networking, cloud computing, analytics, architecture and more.
Global Operations with Docker for the Enterprise - Nico Kabar, Docker - Docker, Inc.
Enterprises often have hundreds or even thousands of applications spread across hundreds of development teams, business units and geographies. This presents challenges to IT teams as they architect an environment to run Docker apps on globally distributed hybrid cloud infrastructure, developed by distributed dev teams and consumed by customers around the world. Docker Datacenter provides the technology and framework to implement a global software supply chain. This session will dig into the design considerations, tools and best practices to address this type of environment with Docker Datacenter. And there will be data, demos and tools! Results from various performance tests will be presented in conjunction with recommendations for high-availability configurations, content cache use cases for faster developer workflow and scheduling strategies for improving application resilience.
Global Operations with Docker Enterprise - Nicola Kabar
Microsoft's Azure is one of the top requested cloud platforms for communications solutions and a popular choice for hosting ProSBC and other cloud communications applications. During this session, we'll give a step-by-step tutorial and demonstration of the process to activate and configure ProSBC on Azure. At the conclusion of the tutorial, attendees will be prepared to provision ProSBC into their Azure communications solutions.
Topics covered in this session:
- Ordering process
- Selecting a processor
- Preparing the Azure account
- Preparing and Loading the ProSBC on Azure Image
- Activating the ProSBC Image
- Configuring and operating ProSBC
- Roadmap for ProSBC on Azure
- Your Questions
Microsoft's Production Configurable Cloud leverages FPGAs and a programmable infrastructure to provide accelerated computing capabilities. Key aspects include:
- Using FPGAs on servers and smartNICs to accelerate networking, storage, security and other functions through reconfigurable hardware.
- Developing a pod architecture that connects multiple FPGAs within a rack for low-latency sharing of resources.
- Creating a programmable "configurable cloud" infrastructure that allows workloads to be accelerated locally, through infrastructure enhancements, or remotely on other servers' FPGAs.
- Early FPGA applications provided significant query latency and throughput improvements for Bing search functions. The approach is now used broadly in
The document discusses installing Oracle Enterprise Manager 12c. It covers the architecture, concepts, installation types, requirements, and process. The process involves installing the Oracle database software, creating a database, and then installing Enterprise Manager. It also discusses alternatives for installation like using Oracle VM templates or configuration management tools like Puppet and Chef.
The document discusses the key capabilities of the Steeltoe framework for building .NET applications that can run on Cloud Foundry. It describes how Steeltoe provides integration for common operations like pushing code, service discovery, circuit breakers, security, and management endpoints. It also provides examples of how to configure applications using Steeltoe for tasks like adding configuration from a Spring config server and registering with a service registry.
Dell EMC uses Ansible for automating various tasks including network switch configuration, OpenStack configuration, out-of-band server management, and OpenShift deployment. Ansible provides agentless automation and configuration management through playbooks, templates, and roles. Dell EMC has developed networking roles and Ansible modules to manage switches, servers, and OpenStack configurations. Examples shown include configuring Dell switches, deploying OpenStack projects and users, getting server health/logs through Redfish, and automating an OpenShift reference architecture.
Citrix Synergy 2014 - Syn233 Building and operating a Dev Ops cloud: best pra... - Citrix
- InMobi moved to using Citrix CloudPlatform to power their private cloud, replacing their home-built systems which were brittle and difficult to scale.
- Their architecture with CloudPlatform includes management servers, MySQL servers, primary storage using NexentaStor on JBODs, and secondary GlusterFS storage for redundancy.
- This allows InMobi to easily provision development and test environments in a self-service manner while maintaining isolation, security, and scalability as their needs grow.
Inayatullah Fayyaz Syed is seeking a challenging position as a systems administrator. He has over 7 years of experience administering Windows environments and 2 years experience with Linux. He is proficient in technologies like Windows Server 2012, VMware vSphere, and has certifications in Cisco CCNA, VMware VCA-DCV, and Microsoft Administering Windows Server 2012. He has worked as a systems administrator in Saudi Arabia and India, managing servers, storage, backups and virtualization platforms.
The document outlines a 35-hour training course on Oracle Golden Gate that covers topics such as installation, configuration, architecture, data replication, troubleshooting, and performance tuning. The course aims to provide students with knowledge of Oracle Golden Gate's replication solutions and how to implement them between Oracle databases. It also lists other technical training courses offered related to databases, programming languages, and business intelligence tools.
The document outlines a 35-hour training course on Oracle Golden Gate that covers topics such as installation, configuration, architecture, data replication, troubleshooting, and performance tuning. It aims to provide students with knowledge of Oracle Golden Gate replication solutions for high availability, data warehousing, and live reporting. The course also lists other technology training offerings.
VMworld 2013: Virtualizing Mission Critical Oracle RAC with vSphere and vCOPS - VMworld
VMworld 2013
Steven Jones, VMware
Charles Kim, Viscosity North America
Kannan Mani, VMware
George Trujillo, Hortonworks
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
Using Jenkins nowadays, you have to learn all about using Pipelines. This presentation shows how to use Jenkins Pipelines inside Oracle projects.
The presentation was held at the DOAG Conference 2019 in Nuremberg.
DCEU 18: Docker Enterprise Platform and Architecture - Docker, Inc.
Jean Rouge - Sr. Software Engineer, Docker
David Yu - Product Manager, Docker
Docker Enterprise is an enterprise container platform for developers and IT admins building and managing container applications. The platform includes integrated orchestration (Swarm and Kubernetes), an advanced private image registry, and a centralized admin console to secure, troubleshoot, and manage containerized applications. This talk will focus on the Docker Enterprise platform's technical architecture, key features and use cases it is designed to support. Key areas covered in this session:
- Latest features and enhancements
- Security and Compliance - how to ensure oversight and validate applications for different compliance regulations
- Operational Insight - how to identify and troubleshoot issues in your container environment
- Integrated Technology - the technologies that are supported and can be run with Docker Enterprise
- Policy-based Automation - how to scale container environments through automated policies
Similar to Episode 2 Installation Triton Slides (20)
This document discusses a security issue that occurred when improperly configuring DB2 federation. Specifically:
1. A client site configured DB2-LDAP federation but also enabled the FED_NOAUTH parameter, bypassing authentication.
2. This meant any user could connect to the database as any other user without providing the correct password.
3. If the database owner username was guessed, full access to all data could be obtained, potentially exposing the database to a major security breach.
The issue was caused by incorrectly enabling the FED_NOAUTH parameter when federation was set up. Proper authentication should have occurred at the database rather than being bypassed. The moral is to not enable
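The remedy described above can be sketched with standard DB2 CLP commands; this is an illustrative sequence, not the commands from the incident itself:

```shell
# Illustrative sketch (not taken from the incident): inspect and correct
# the FED_NOAUTH setting on a DB2 instance. With FED_NOAUTH=YES,
# FEDERATED=YES and AUTHENTICATION SERVER, the instance assumes the
# federated data source has already verified the password, so local
# authentication is effectively bypassed.
db2 get dbm cfg | grep -iE "fed_noauth|federated|authentication"

# Keep federation enabled, but make the database server authenticate
# connections itself again:
db2 update dbm cfg using FED_NOAUTH NO
db2stop force
db2start
```

Reviewing these three parameters together is the quick way to spot the exposure before anyone has to guess a username.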
What do you do when disaster strikes? In part 9 of our DB2 Support Nightmare series we look at another DB2 disaster scenario and how it was resolved by the experts at Triton Consulting.
Number 8 in our Top 10 DB2 Support Nightmares series. This month we take a look at what happens when organisations are not able to keep up to date with the latest DB2 technology.
Imagine the scene – a broken database on an unsupported version of DB2, with no backups or log files to recover the database.
Yes – this one really was the stuff of nightmares!
Download if you dare! In part six of our DB2 Nightmares series we see what can happen when an experienced DBA goes on holiday leaving the Junior DBA in charge with no support.
Consultancy on Demand is a specially designed service for customers who need varying levels of DB2 support throughout the year.
You purchase a block of 20, 50 or 100 hours. You can then call off hours as and when you need them. No commitment required!
A Time Traveller's Guide to DB2: Technology Themes for 2014 and Beyond - Laura Hood
This document discusses technology themes for DB2 in 2014 and beyond, including cost reduction, high availability, in-memory computing, skills availability, database commoditization, and big data. It summarizes DB2's focus on these areas today and potential future directions, such as further optimization to reduce software licensing fees, expanded data sharing capabilities, increased memory capacities, evolving skills needs, and continued integration with big data platforms. The document aims to help DB2 professionals consider strategies for addressing these themes.
A junior DBA accidentally deleted all rows from a critical table in a pre-production environment. The DBA had connected to the wrong system and used the instance owner userid. The system administrator had enabled the FED_NOAUTH parameter, which bypasses authentication at the instance level. This meant any user could connect as any other user without the correct password and impact the database. The moral is that unintended consequences can occur from small configuration changes and it is important to get skilled DB2 support.
DB2 10 Memory Management - UK DB2 User Group June 2013 - Laura Hood
DB2 10 provides significant enhancements to memory management that allow for much greater scalability. Key changes include moving most objects above the 2GB bar, enabling larger buffer pools through 1MB page support, and enhanced real storage monitoring. Migrating to DB2 10 requires ensuring sufficient real storage is available, monitoring real storage usage, and addressing other limiting factors before taking advantage of new features to further scale vertically.
DB2 10 Webcast #3 - The Secrets Of Scalability - Laura Hood
The third in the Migration Month webcast series looking at DB2 10 migration planning. This webcast goes into the scalability benefits available in DB2 10, with Julian Stuhler of Triton Consulting & Jeff Josten of IBM.
DB2 10 Webcast #2 - Justifying The Upgrade - Laura Hood
This document discusses justifying an upgrade from DB2 9 or 8 to DB2 10 for z/OS. It outlines potential CPU, productivity, and availability savings from the upgrade. CPU savings can come from improved performance in conversion mode through features like high performance database application transition support. Productivity savings may result from features that improve plan stability and temporal tables. Availability improvements like online reorganization of LOBs can reduce downtime costs. The presentation recommends using IBM's DB2 10 Business Value Assessment Estimator Tool to quantify specific savings for an organization.
DB2 10 for z/OS introduced temporal data support which allows applications to query data as it existed at different points in time. The document discusses system temporal tables, business temporal tables, and bi-temporal tables. It provides examples of temporal DDL, SELECT extensions for querying historical data, and discusses early experiences and performance considerations with temporal data in DB2 10.
DB2DART is a tool that allows DBAs to inspect, format, and repair DB2 databases and objects. It can be used to handle storage reclamation issues by lowering high water marks, detect and repair index corruption, extract data from corrupt tables, and remove backup pending states. DB2DART provides granular analysis at the database, tablespace, and table level and its repair capabilities save DBAs from having to call support or restore from backups in many cases.
Temporal And Other DB2 10 For z/OS Highlights - Laura Hood
The document discusses DB2 10 for z/OS and its new temporal data support feature. It provides an overview of DB2 10, describing new features such as temporal data, virtual storage enhancements, and optimizer enhancements. It then discusses temporal data concepts in more detail, including temporal tables, periods, business temporal tables and system temporal tables. The document provides examples and explains how to implement temporal tables in DB2 10. It concludes by listing further reading materials on DB2 10.
DB2 10 Smarter Database - IBM Tech Forum 2011 - Laura Hood
DB2 10 for z/OS is a new version of IBM's database software that provides significant performance improvements, new security and temporal data features, and easier migration paths from prior versions. Key enhancements in DB2 10 include 5-20% CPU reductions, up to 10x more threads per subsystem due to virtual storage improvements, row and column access controls, and built-in support for tracking historical data. Customers running DB2 8 or 9 can upgrade directly to DB2 10 using new "skip migration" functionality, or upgrade sequentially from earlier versions. Migrating to DB2 10 requires meeting prerequisites and following steps to move to conversion mode and then normal mode.
This article takes a look at some of the reasons behind this data explosion, and some of the possible effects if the growth is not managed. We’ll also examine some of the ways in which these problems can be avoided.
Managing the financial services data explosion - Laura Hood
This document summarizes the causes and effects of rapidly growing data in the financial services sector and strategies for coping with large volumes of data. It discusses how mergers and acquisitions, regulations like the Data Protection Act, and industry trends are contributing to more data. The rapid growth is causing higher costs, performance issues, and restricted data availability. Coping strategies include database partitioning, compression, and purchasing more storage, but these only address symptoms. Implementing an application data archiving strategy can significantly lower storage costs by moving older data to less expensive storage and reducing database sizes to improve performance.
Programming Foundation Models with DSPy - Meetup Slides - Zilliz
Prompting language models is hard, while programming language models is easy. In this talk, I will discuss the state-of-the-art framework DSPy for programming foundation models with its powerful optimizers and runtime constraint system.
What do a Lego brick and the XZ backdoor have in common? - Speck&Tech
ABSTRACT: At first glance, what a Lego brick and the XZ backdoor have in common might be that both are building blocks, or dependencies, of creative and software projects. In reality, a Lego brick and the XZ backdoor case share far more than that.
Join the presentation to dive into a story of interoperability, standards and open formats, and then discuss the important role that contributors play in a sustainable open source community.
BIO: Advocate of free software and of standard, open formats. She has been an active member of the Fedora and openSUSE projects and co-founded the LibreItalia Association, where she was involved in several LibreOffice-related events, migrations and training courses. She previously worked on LibreOffice migrations and training for several public administrations and private organizations. Since January 2020 she has worked at SUSE as a Software Release Engineer for Uyuni and SUSE Manager, and when she is not following her passion for computers and for Geeko she cultivates her curiosity about astronomy (the origin of her nickname, deneb_alpha).
Driving Business Innovation: Latest Generative AI Advancements & Success Story - Safe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
GraphSummit Singapore | The Future of Agility: Supercharging Digital Transfor... - Neo4j
Leonard Jayamohan, Partner & Generative AI Lead, Deloitte
This keynote will reveal how Deloitte leverages Neo4j’s graph power for groundbreaking digital twin solutions, achieving a staggering 100x performance boost. Discover the essential role knowledge graphs play in successful generative AI implementations. Plus, get an exclusive look at an innovative Neo4j + Generative AI solution Deloitte is developing in-house.
AI 101: An Introduction to the Basics and Impact of Artificial Intelligence - IndexBug
Imagine a world where machines not only perform tasks but also learn, adapt, and make decisions. This is the promise of Artificial Intelligence (AI), a technology that's not just enhancing our lives but revolutionizing entire industries.
How to Get CNIC Information System with Paksim Ga.pptx - danishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
TrustArc Webinar - 2024 Global Privacy Survey - TrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
Maruthi Prithivirajan, Head of ASEAN & IN Solution Architecture, Neo4j
Get an inside look at the latest Neo4j innovations that enable relationship-driven intelligence at scale. Learn more about the newest cloud integrations and product enhancements that make Neo4j an essential choice for developers building apps with interconnected data and generative AI.
HCL Notes and Domino license cost reduction in the world of DLAU - panagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-und-domino-lizenzkostenreduzierung-in-der-welt-von-dlau/
DLAU and licensing under the CCB and CCX models have been a hot topic in the HCL community since last year. As a Notes or Domino customer, you may be struggling with unexpectedly high user counts and license fees. You may be wondering how this new kind of licensing works and what benefit it brings you. Above all, you surely want to stay within budget and save costs wherever possible. We understand that, and we want to help!
We explain how to resolve common configuration problems that can cause more users to be counted than necessary, and how to identify and remove superfluous or unused accounts to save money. There are also some practices that can lead to unnecessary spending, e.g. when a person document is used for shared mailboxes instead of a mail-in database. We show you such cases and their solutions. And of course we explain the new license model.
Join this webinar, in which HCL Ambassador Marc Thomas and guest speaker Franz Walder introduce you to this new world. It gives you the tools and the know-how to keep on top of things. You will be able to reduce your costs through an optimized Domino configuration and keep them low in the future.
Topics covered:
- Reducing license costs by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how best to use it
- Tips for common problem areas, such as team mailboxes, functional/test users, etc.
- Real-world examples and best practices to put into action immediately
Building Production Ready Search Pipelines with Spark and Milvus - Zilliz
Spark is the widely used ETL tool for processing, indexing and ingesting data to serving stack for search. Milvus is the production-ready open-source vector database. In this talk we will show how to use Spark to process unstructured data to extract vector representations, and push the vectors to Milvus vector database for search serving.
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor... - SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
Infrastructure Challenges in Scaling RAG with Custom AI models - Zilliz
Building Retrieval-Augmented Generation (RAG) systems with open-source and custom AI models is a complex task. This talk explores the challenges in productionizing RAG systems, including retrieval performance, response synthesis, and evaluation. We’ll discuss how to leverage open-source models like text embeddings, language models, and custom fine-tuned models to enhance RAG performance. Additionally, we’ll cover how BentoML can help orchestrate and scale these AI components efficiently, ensuring seamless deployment and management of RAG systems in the cloud.
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdf - Malak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
4. Triton’s Commodity Cluster
• Objectives
  - Undertake basic validation of IBM’s performance & scalability claims
  - Build technical experience in a pureScale environment and establish platform for ongoing R&D
  - Assist IBM with early beta testing
• Constraints
  - Budget < £1K
  - Easily portable for customer demos etc.
The Information Management Specialists
5. Triton’s Commodity Cluster
• 2 member nodes and one CF
• Each node:
  - Intel D510M0 (Dual core 1GHz Atom)
  - 4GB RAM
  - 40GB SSD
• Shared disk
  - iSCSI 1TB (QNAP TS110)
• DB2 9.8 pureScale FP2 development image
• Technology Explorer used for workload and monitoring
  www.sourceforge.net/projects/db2mc
6. Install Experiences 1
• Installation experiences
  SLES 10
  ► Wrong version of libstdc++.so.5
    – remove libstdc++33-3.3.3-7.8.1
    – install compat-libstdc++-5.0.7-22.2.x86_64.rpm
  ► NTP
  ► ssh keyring setup
  ► Ensure FQDN names in /etc/hosts
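The four SLES 10 preparation steps above can be sketched as shell commands; the package names come from the slide, while the hostnames and addresses are illustrative placeholders:

```shell
# Sketch of the SLES 10 prep steps (run as root on each node).
# 1. Swap the incompatible libstdc++ v33 package for the compat version:
rpm -e libstdc++33-3.3.3-7.8.1
rpm -ivh compat-libstdc++-5.0.7-22.2.x86_64.rpm

# 2. Keep cluster clocks in sync (pureScale needs consistent time):
chkconfig ntp on
service ntp start

# 3. Passwordless root ssh between all hosts (repeat for every pair):
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
ssh-copy-id root@node2.example.com      # hostname is a placeholder

# 4. Ensure fully-qualified names resolve identically on every host:
echo "192.168.1.11 node1.example.com node1" >> /etc/hosts
echo "192.168.1.12 node2.example.com node2" >> /etc/hosts
```

Getting the FQDN and ssh pieces right up front saves a lot of grief later, because the pureScale installer validates both across all hosts.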
7. Install Experiences - 2
• db2cluster resolves iSCSI mount issues
• db2cluster creates shared disk mount points
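For context, creating the shared filesystem on the iSCSI LUN goes through the db2cluster utility's cluster file system (CFS) options. The invocation below is a from-memory sketch; the filesystem name and device path are made up, so check the exact syntax for your fix pack:

```shell
# From-memory sketch: create and then verify a shared filesystem on the
# iSCSI LUN. "db2fs1" and "/dev/sdb" are illustrative, not from the deck.
db2cluster -cfs -create -filesystem db2fs1 -disk /dev/sdb
db2cluster -cfs -list -filesystem
```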
8. Triton pureScale Experiences
• Performance
  Technology Explorer
  ► WMD Java workload driver (WLB enabled)
  ► 2.5M row table
  ► Vanilla installation
  ► 32 threads, 25ms think time
  Delivered 1000tps @ 95% CPU load
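As a sanity check on those figures, the closed-loop response-time law (throughput = threads / (think time + service time)) implies an average service time of about 7 ms per transaction on this modest Atom hardware. A quick back-of-envelope sketch:

```python
# Back-of-envelope check of the slide's figures using the closed-loop
# (interactive) response-time law:
#   throughput = threads / (think_time + service_time)
# The implied service time below is derived from the slide's numbers,
# not a measured value.
threads = 32            # concurrent driver threads (from the slide)
think_time = 0.025      # 25 ms think time per transaction
observed_tps = 1000.0   # delivered throughput

# Rearranged for the unknown average service time:
service_time = threads / observed_tps - think_time
print(f"implied avg service time ~= {service_time * 1000:.1f} ms")  # -> 7.0 ms
```

The 7 ms figure being well under the 25 ms think time is consistent with the cluster sitting at 95% CPU rather than queueing heavily.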
9. Feedback / Questions
James Gill – james.gill@triton.co.uk
www.triton.co.uk
10. Don’t miss!
12th Oct - Episode 3 – DB2 pureScale. Availability & data recovery
19th Oct - Episode 4 – DB2 pureScale. Performance & tuning
9th Nov - Season Finale! – DB2 pureScale Vs Oracle RAC
Register here - http://www.triton.co.uk/DB2purescalewebcasts/
laura.hood@triton.co.uk