Oracle Solaris 11 as a BIG Data Platform Apache Hadoop Use Case - Orgad Kimchi
The document discusses using Oracle Solaris technologies for an Apache Hadoop cluster. It describes how Oracle Solaris Zones and ZFS provide benefits like fast provisioning of cluster nodes, high network throughput, large data capacity, and optimized I/O performance for Hadoop deployments. Various Oracle Solaris tools are also highlighted that can help monitor resource usage and troubleshoot performance issues for Hadoop workloads.
Oracle Solaris 11 is the first operating system engineered with cloud computing in mind. So what's new in Oracle Solaris 11, and how does that connect to the cloud? If you're involved in Application Life-cycle Management, Configuration Management, Cloud Deployment, Big Data Design and Application, or Infrastructure Scaling, you will learn how to leverage Solaris 11 technologies to build your cloud infrastructure.
For more information see: http://www.oracle.com/technetwork/systems/hands-on-labs/hol-oracle-solaris-remote-lab-1894053.html
Ansible Tower can be used for efficient IT automation. The document discusses the importance of service portals and provides examples of using Ansible for Windows updates and deploying an application on OpenStack. It describes how Ansible works using playbooks to automate tasks across public and private clouds. Use cases demonstrated include patching Windows systems and provisioning a full application stack on OpenStack with databases, application servers and web servers using Ansible playbooks to automate the process.
Linux performance tuning & stabilization tips (mysqlconf2010) - Yoshinori Matsunobu
This document provides tips for optimizing Linux performance and stability when running MySQL. It discusses managing memory and swap space, including keeping hot application data cached in RAM. Direct I/O is recommended over buffered I/O to fully utilize memory. The document warns against allocating too much memory or disabling swap completely, as this could trigger the out-of-memory killer to crash processes. Backup operations are noted as a potential cause of swapping, and adjusting swappiness is suggested.
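The swappiness advice above is easy to check in practice. The sketch below is not from the original deck; it simply reads and (optionally) lowers vm.swappiness through /proc/sys, which is what `sysctl -w vm.swappiness=N` does under the hood. The target value is a placeholder; the deck's point is to keep swap enabled while discouraging the kernel from swapping hot data.

```python
#!/usr/bin/env python3
"""Check (and optionally lower) vm.swappiness on a Linux host.

Illustrative sketch only; writing /proc/sys/vm/swappiness requires root,
and the change is not persistent across reboots (use /etc/sysctl.conf for that).
"""
import sys
from pathlib import Path

SWAPPINESS = Path("/proc/sys/vm/swappiness")

def current() -> int:
    return int(SWAPPINESS.read_text().strip())

def set_swappiness(value: int) -> None:
    # Equivalent to `sysctl -w vm.swappiness=<value>`.
    SWAPPINESS.write_text(str(value))

if __name__ == "__main__":
    cur = current()
    print(f"vm.swappiness = {cur}")
    if len(sys.argv) > 1:
        target = int(sys.argv[1])  # e.g. a low value on a dedicated DB host
        if target != cur:
            set_swappiness(target)
            print(f"vm.swappiness -> {current()}")
```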
Developing a Ceph Appliance for Secure Environments - Ceph Community
Keeper Technology develops a Ceph appliance called keeperSAFE for secure storage environments. The keeperSAFE appliance provides a preconfigured Linux distribution, automated installation using Ansible, enclosure management tools, a graphical user interface for monitoring and configuration, data collection and analytics, encryption capabilities, and extensive testing. It is designed for environments that require high availability, no single points of failure, easy management, and auditability. The keeperSAFE appliance addresses the challenges of deploying and managing Ceph at scale in restricted, mission critical environments.
Building High Availability Clusters with SUSE Linux Enterprise High Availabil... - Novell
SUSE Linux Enterprise Server High Availability Extension provides a range of modules that can be assembled in multiple ways to build high availability clusters to host your critical business services. This session will examine some of the most common solutions and discuss best practices for setting up a new cluster.
In more detail, this session will then discuss how to prepare a cluster with the SUSE Linux Enterprise High Availability Extension to make an SAP application highly available, as certified by Novell and SAP. This scenario is of interest not only to companies looking to make their SAP environment highly available, but also to those that want to migrate from Unix to the SUSE Linux Enterprise platform.
The document summarizes Walla's plan to upgrade its 600 TB Solaris ZFS storage environment from Solaris 10 to Solaris 11. The plan involves migrating the ZFS pools from old SPARC hardware to new x86 hardware using ZFS send and receive. Pools larger than 4 TB will be split into 4 TB LUNs and rebuilt on the new hardware. All servers will be upgraded to Solaris 11.1, and ZFS user properties will be added to help manage the pools. The upgrades aim to replace aging hardware, improve storage utilization, and restore high availability.
Simplifying Ceph Management with Virtual Storage Manager (VSM) - Ceph Community
VSM (Virtual Storage Manager) is an open source tool developed by Intel to simplify Ceph storage cluster management. It includes a controller that runs on a dedicated server and manages Ceph through agents on each Ceph node. The VSM makes it easier to deploy, maintain, and monitor Ceph clusters, and also integrates with OpenStack for storage orchestration.
This document discusses migrating an Oracle Database Appliance (ODA) from a bare metal to a virtualized platform. It outlines the initial situation, desired target, challenges, and solution approach. The key challenges included system downtime during the migration, backup/restore processes, using external storage, and database reorganizations. The solution involved first converting to a virtual platform and then upgrading, using backup/restore, attaching an NGENSTOR Hurricane storage appliance for direct attached storage, and moving database reorganizations to a separate maintenance window. It also discusses the odaback-API tool created to help automate and standardize the migration process.
This document provides information on building a high performance computing cluster, including definitions of supercomputers, why they are needed, types of supercomputers, and steps for building a cluster. It outlines identifying the application, selecting hardware and software components, installation, configuration, testing, and maintenance. Homemade and commercial clusters are compared, and opportunities for generating revenue from clusters are discussed. Additional online resources for learning more are provided at the end.
This document discusses using iSCSI to provide access to Ceph RADOS Block Device (RBD) images from heterogeneous operating systems and applications. It describes how the Linux IO Target (LIO) can be configured as an iSCSI target with the RBD storage backend to export Ceph RBD images. This allows standard iSCSI initiators to access RBD images without requiring Ceph-aware clients. It also explains how LIO and Lrbd can be used to configure multiple iSCSI gateways for high availability and redundancy.
The document summarizes a benchmarking study conducted by Altoros Systems to compare the performance of Couchbase Server, MongoDB, and Cassandra. It outlines the benchmark goals of having a reproducible workload, using a realistic scenario, and comparing latency and throughput. It describes the benchmarking tools, scenario details involving data size, operations, and hardware configuration. Configuration details are provided for each database, including cluster specifications and parameter settings.
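As a rough illustration of the latency/throughput methodology described, here is a minimal, database-agnostic harness sketch. It is not Altoros's actual tooling; `do_operation` is a hypothetical stand-in for a real client call (Couchbase get/upsert, MongoDB find_one/insert_one, Cassandra execute), and the key space and read ratio are arbitrary.

```python
import random
import statistics
import time

def run_benchmark(do_operation, num_ops=10_000, read_ratio=0.8):
    """Drive a mixed read/write workload against do_operation(kind, key)
    and report throughput plus latency percentiles."""
    latencies = []
    start = time.perf_counter()
    for _ in range(num_ops):
        kind = "read" if random.random() < read_ratio else "write"
        key = f"user{random.randrange(1_000_000)}"
        t0 = time.perf_counter()
        do_operation(kind, key)          # hypothetical client call
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    latencies.sort()
    pct = lambda q: latencies[int(q * (len(latencies) - 1))] * 1000  # ms
    print(f"throughput: {num_ops / elapsed:,.0f} ops/s")
    print(f"latency p50={pct(0.50):.2f}ms p95={pct(0.95):.2f}ms p99={pct(0.99):.2f}ms")
    print(f"mean={statistics.mean(latencies) * 1000:.2f}ms")

# Example with a no-op backend, just to show the harness shape:
if __name__ == "__main__":
    run_benchmark(lambda kind, key: None)
```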
This document provides an overview of NetApp's general product direction and upcoming features for clustered Data ONTAP. However, it does not constitute a commitment by NetApp and the details may change without notice. NetApp makes no guarantees about future functionality, timelines or products. The development and release of any mentioned features is at NetApp's sole discretion.
My experience with embedding PostgreSQL - Jignesh Shah
At my current company, we embed PostgreSQL-based technologies in various applications shipped as shrink-wrapped software. In this session we talk about the experience of embedding PostgreSQL where it is not directly exposed to the end user, the issues encountered, and how they were resolved.
We will talk about the business reasons, the technical architecture of deployments, upgrades, and security processes for working with embedded PostgreSQL databases.
Ceph Community Talk on High-Performance Solid State Ceph - Ceph Community
The document summarizes a presentation given by representatives from various companies on optimizing Ceph for high-performance solid state drives. It discusses testing a real workload on a Ceph cluster with 50 SSD nodes that achieved over 280,000 read and write IOPS. Areas for further optimization were identified, such as reducing latency spikes and improving single-threaded performance. Various companies then described their contributions to Ceph performance, such as Intel providing hardware for testing and Samsung discussing SSD interface improvements.
Automated Out-of-Band management with Ansible and Redfish - Jose De La Rosa
Ansible is an open source automation engine that automates complex IT tasks such as cloud provisioning, application deployment and a wide variety of system administration tasks. It is a one-to-many agentless mechanism where complex deployment tasks can be controlled and monitored from a central control machine.
Redfish is an open industry-standard specification and schema designed for modern and secure management of platform hardware. On Dell EMC PowerEdge servers the Redfish management APIs are available via the integrated Dell Remote Access Controller (iDRAC), which can be used by IT administrators to easily monitor and manage at scale their entire infrastructure using a wide array of clients on devices such as laptops, tablets and smart phones.
Together, Ansible and Redfish can be used by system administrators to fully automate server monitoring, provisioning, and update tasks at large scale from one central location, significantly reducing complexity and helping improve the productivity and efficiency of IT administrators.
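Because Redfish is just HTTPS plus JSON, the monitoring half of this workflow can be sketched in a few lines of Python with the requests library. This is a hedged illustration, not the Ansible modules the talk describes; the BMC address and credentials are placeholders, and only standard Redfish resources (/redfish/v1/Systems and its members) are touched.

```python
import requests

# Many BMCs ship with self-signed certificates; silence the warning for the demo.
requests.packages.urllib3.disable_warnings()

# Placeholders: substitute a real iDRAC/BMC address and credentials.
BMC = "https://192.0.2.10"
AUTH = ("root", "calvin")

def get(path):
    # Redfish is plain HTTPS + JSON, so a GET with basic auth is enough here.
    r = requests.get(f"{BMC}{path}", auth=AUTH, verify=False, timeout=15)
    r.raise_for_status()
    return r.json()

# /redfish/v1/Systems is the standard Redfish computer-system collection.
systems = get("/redfish/v1/Systems")
for member in systems.get("Members", []):
    system = get(member["@odata.id"])
    print(system.get("Model"), system.get("SerialNumber"),
          system.get("PowerState"), system.get("Status", {}).get("Health"))
```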
Ceph Day Melbourne - Scale and performance: Servicing the Fabric and the Work... - Ceph Community
The document discusses scale and performance challenges in providing storage infrastructure for research computing. It describes Monash University's implementation of the Ceph distributed storage system across multiple clusters to provide a "fabric" for researchers' storage needs in a flexible, scalable way. Key points include:
- Ceph provides software-defined storage that is scalable and can integrate with other systems like OpenStack.
- Multiple Ceph clusters have been implemented at Monash of varying sizes and purposes, including dedicated clusters for research data storage.
- The infrastructure provides different "tiers" of storage with varying performance and cost characteristics to meet different research needs.
- Ongoing work involves expanding capacity and upgrading hardware to improve performance.
Best Practices with PostgreSQL on Solaris - Jignesh Shah
This document provides best practices for deploying PostgreSQL on Solaris, including:
- Using Solaris 10 or latest Solaris Express for support and features
- Separating PostgreSQL data files onto different file systems tuned for each type of IO
- Tuning Solaris parameters like maxphys, klustsize, and UFS buffer cache size
- Configuring PostgreSQL parameters like fdatasync, commit_delay, and wal_buffers (see the sketch after this list)
- Monitoring key metrics like memory, CPU, and IO usage at the Solaris and PostgreSQL level
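As a quick way to verify those parameter recommendations on a running instance, the following sketch queries pg_settings through psycopg2. The driver choice, connection string, and parameter list are assumptions for illustration; any PostgreSQL client would do, and wal_sync_method is where the fdatasync recommendation shows up.

```python
import psycopg2  # assumption: psycopg2 is installed; any PostgreSQL driver works

PARAMS = ["wal_sync_method", "commit_delay", "wal_buffers", "shared_buffers"]

# Placeholder DSN; point it at the instance you are tuning.
conn = psycopg2.connect("dbname=postgres user=postgres host=localhost")
with conn, conn.cursor() as cur:
    # pg_settings exposes the live value and unit of every server parameter.
    cur.execute(
        "SELECT name, setting, unit FROM pg_settings WHERE name = ANY(%s)",
        (PARAMS,),
    )
    for name, setting, unit in cur.fetchall():
        print(f"{name:16} = {setting} {unit or ''}")
conn.close()
```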
Real questions for the Network Appliance NS0-156 Data ONTAP Cluster-Mode Administrator Exam from pass4sure, with unlimited lifetime access to 2,500+ exams. Pass your NS0-156 Network Appliance Specialist Exam with a 100% guarantee or your money back.
This document outlines the steps for building a SQL Server cluster for high availability, including planning considerations, required hardware, installing Windows clustering features, configuring storage, installing and configuring SQL Server across nodes, and testing the cluster configuration. Key aspects that are discussed include defining recovery time and point objectives, installing SQL Server using the "Create New Failover Cluster" option, installing SQL on each node to enable failover, and performing backups and restores from cluster-owned drives. Testing the applications on the clustered environment is also emphasized.
Fujitsu M10 server features and capabilities - solarisyougood
This document provides an overview of the Fujitsu M10 server product line. It describes the hardware features and capabilities of the Fujitsu M10-1, M10-4, and M10-4S servers including their processors, memory, I/O, storage, and virtualization support. It also discusses the reliability, availability, and serviceability features, and performance advantages for running Oracle databases and SAP workloads on the Fujitsu M10 servers.
Dell EMC uses Ansible for automating various tasks including network switch configuration, OpenStack configuration, out-of-band server management, and OpenShift deployment. Ansible provides agentless automation and configuration management through playbooks, templates, and roles. Dell EMC has developed networking roles and Ansible modules to manage switches, servers, and OpenStack configurations. Examples shown include configuring Dell switches, deploying OpenStack projects and users, getting server health/logs through Redfish, and automating an OpenShift reference architecture.
This document discusses tuning DB2 in a Solaris environment. It provides background on the presenters, Tom Bauch from IBM and Jignesh Shah from Sun Microsystems. The agenda covers general considerations, memory usage and bottlenecks, disk I/O considerations and bottlenecks, and tuning DB2 V8.1 specifically in Solaris 9. It discusses supported Solaris versions, kernel settings, required patches, installation methods, and the configuration wizard. Specific topics covered in more depth include the Data Partitioning Feature, DB2 Enterprise Server Edition, and analyzing and addressing potential memory bottlenecks.
This document summarizes a presentation about FlashGrid, an alternative to Oracle Exadata that aims to achieve similar performance levels using commodity hardware. It discusses the key components of FlashGrid including the Linux kernel, networking protocols like Infiniband and NVMe, and hardware. Benchmarks show FlashGrid achieving comparable IOPS and throughput to Exadata on a single server. While Exadata has proprietary advantages, FlashGrid offers excellent raw performance at lower cost and with simpler maintenance through the use of standard technologies.
This document discusses database deployment automation. It begins with introductions and an example of a problematic Friday deployment. It then reviews the concept of automation and different visions of it within an organization. Potential tools and frameworks for automation are discussed, along with common pitfalls. Basic deployment workflows using Oracle Cloud Control are demonstrated, including setting credentials, creating a proxy user, adding target properties, and using a job template. The document concludes by emphasizing that database deployment automation is possible but requires effort from multiple teams.
VMworld 2013: Architecting Oracle Databases on vSphere 5 with NetApp Storage - VMworld
This document discusses architecting Oracle databases on VMware vSphere 5 with NetApp storage. It begins with objectives such as understanding how to provision NetApp storage for an Oracle database to take advantage of VMware and NetApp technologies. It then covers topics like using Oracle with vSphere 5, recommendations for vSphere 5, virtualizing Oracle with NetApp, reference architectures, and where to learn more. The presenters are experts on Oracle and virtualization technologies looking to provide best practices on implementing Oracle databases with VMware and NetApp.
VMware vSphere 5.5 with features like Flash Read Cache (vFRC) can improve performance of virtualized Oracle 12c databases without impacting reliability functions like VMotion. Testing showed vFRC decreased time to complete an OLAP workload by 14% and allowed seamless migration of vFRC-enabled VMs during VMotion. The combination of VMware, Cisco, and EMC technologies provided reliable virtualization and storage with increased Oracle 12c performance using vFRC.
IT-AAC Defense IT Reform Report to the Sec 809 Panel - John Weiler
Today, 1/12/17, the IT-AAC briefed the Panel on Streamlining and Codifying Acquisition Regulations (NDAA Sec 809). These recommendations are the result of an 8-year study that included the review of over 40 major studies, over 40 leadership workshops, and root cause analysis of over 40 major IT program failures.
The document discusses structural, electrical, and thermoelectric properties of CrSi2 thin films. It describes how 1 μm and 0.1 μm CrSi2 thin films were prepared by RF sputtering onto quartz substrates under various conditions. Various characterization techniques were used to analyze the structural and compositional properties of the thin films, including XRD, SEM, and EDAX. Seebeck coefficient measurements of the thin films found values ranging from 30-80 μV/K depending on annealing temperature and film thickness. Overall the document examines how processing conditions affect the properties of CrSi2 thin films and their potential for thermoelectric applications.
The briefing discusses the need for new cybersecurity legislation to address gaps unaddressed by existing policies like PPD21. It argues that legislation is necessary to give authorities like the NSA and FBI new proactive powers to prevent cyber attacks, and to apply jurisdiction over both military and civilian cyber attacks. It suggests new laws should address transparency, privacy protections from government and private sector surveillance, and encourage more collaboration between government and private sector on critical infrastructure protection.
This document discusses grid modernization and the need for operational technology vision. It outlines the current state of limited visibility and control across transmission, distribution and customer levels. External pressures from legislation, distributed generation and technology changes are driving the need for a new vision. The vision would provide improved situational awareness, adaptability, flexibility and education through a defined cybersecurity posture, robust communication networks, edge computing and data aggregation. This would be achieved through forming an operational technology group and deploying new technology solutions across generation, distribution and other operational departments.
Cross Domain Solutions for SolarWinds from Sterling Computers - DLT Solutions
This document provides an overview and demonstration of Sterling Computers' CrossWatch solution for providing cross domain situational awareness using SolarWinds products. CrossWatch allows Orion servers running in different security domains to push monitoring data to a centralized Enterprise Operations Console, giving operations staff a single dashboard view of the status of IT assets across multiple domains. The demonstration shows how CrossWatch adapts the EOC's "pull" model to a cross-domain "push" model, caching and formatting data from low domain Orion servers for display in the high domain EOC.
Carahsoft Technology interview questions and answers - KeisukeHonda66
This document provides tips, questions, and answers for job interviews at Carahsoft Technology. It includes responses to common interview questions like "What is your greatest weakness?" and "Why should we hire you?". It also lists additional resources for interview preparation, such as sample behavioral and situational questions. The document emphasizes researching the company, linking experiences to the role, and portraying enthusiasm when answering questions.
Presidio is a networking and IT infrastructure company with over 750 employees and $750M in annual revenue. They focus exclusively on select technology partners to develop expertise in areas like networking, storage, servers and communications. Presidio provides comprehensive technical services including design, implementation, support and staffing through their technical services organization of over 50 professionals with advanced certifications.
This document provides a summary of Arthit Kliangprom's background and experience. It outlines his education, including a bachelor's degree in electrical engineering, and over 15 years of experience in industrial automation projects across various sectors. His expertise includes programming languages for PLCs and DCS systems from manufacturers such as Siemens, Allen-Bradley, Honeywell, and ABB. He has extensive experience in electrical design, system configuration, testing, and commissioning of large automation projects.
The document provides instructions for setting up an ODROID board to boot its root file system from an external USB drive rather than the internal eMMC or SD card. It involves modifying the bootloader configuration file to point to the external drive, updating the initial ramdisk image to include USB storage modules, preparing a partition on the external drive for the root file system, copying over the root file system files and changing its label, and rebooting so the board boots from the external drive instead of the internal memory. The goal is to keep the boot files on the internal memory but run the full operating system from the higher-capacity external USB drive.
DLT Solutions interview questions and answers - getbrid665
This document provides tips and sample answers for common interview questions for a position at DLT Solutions. It includes responses to questions about previous employment, interest in the company, knowledge of the company, why the applicant should be hired, what they can offer, salary requirements, and questions to ask the interviewer. Suggestions include staying positive when discussing past jobs, highlighting how the applicant's values align with the company's, conducting research on the company beforehand, emphasizing relevant skills and experience, letting the interviewer provide the salary range first if asked, and asking questions focused on development opportunities rather than compensation.
The document summarizes Presidio's approach to transforming technology into innovative business solutions through professional and managed services. It provides an overview of Presidio's value drivers, networked solutions, managed networks, and technology capital offerings. Key points include Presidio's expertise in unified communications, data center transformation, security, and lifecycle management to design customized solutions that deliver long-term benefits.
This document provides information on various pumps and pumping systems for transferring liquids from containers. It describes lever action, rotary, electric, and air powered pumps that are compatible with drums and barrels in sizes from 5 to 55 gallons. The pumps discussed transfer materials like oils, chemicals, fuels, and water and are made from materials like polyethylene, PVC, stainless steel, and carbon steel to suit the liquid being pumped. Safety features are highlighted for some pump models.
Master Source-to-Pay with Cloud and Business Networks [Stockholm] - SAP Ariba
In their initial phase, business networks were all about connecting companies more efficiently to perform a discrete process - buying, selling, invoicing, etc. Today, Ariba is so much more - a platform for innovation for companies of all sizes, harnessing insights and intelligence to break down the barriers to collaboration and enable competitive advantage. But this is a new Ariba - smarter, faster, more accessible, and more global than ever. And we can help you transform your Procurement and Finance processes in ways never thought possible.
Bradley McKinney is a U.S. Navy Captain with over 10 years of experience in command and senior leadership positions, including roles in EOD operations, naval expeditionary warfare, weapons of mass destruction, and special operations. He is seeking a new role as a program manager that leverages this experience. His background includes serving as the Director of the U.S. Special Operations Command CWMD Support Program and as the Commanding Officer of the Center for Explosive Ordnance and Diving Training. He has a Master's degree in National Security Strategy.
Oracle and Cast Iron Systems: Delivering an Integrated CRM Experience - Sean O'Connell
The document discusses how Oracle, Cast Iron Systems, and Xchange Technology Group helped integrate Oracle CRM On Demand with Epicor ERP to provide a 360 degree view of customers for Xchange. Xchange was facing data silos and manual updates between its CRM and ERP systems, but using Cast Iron's integration platform automated real-time synchronization and improved sales productivity at 50% lower cost than custom code. The presentation outlines Xchange's experience and future plans to integrate additional systems like Cisco, Eloqua, and Oracle EBS using Cast Iron's configuration-based approach.
AMA commercial presentation-PASU-R4 2015 - Ross McLendon
The document introduces the Aero Metals Alliance (AMA), a partnership between several aerospace metal suppliers including Amari Aerospace, Gould Alloys, PASU, SCA, Sunshine Metals, and Wilsons. The AMA aims to enhance customer service by providing a single point of contact, system integration, and inventory and processing capabilities globally. It will reduce waste in the supply chain and allow partners to offer services to global customers. Profiles of each partner company are provided, outlining their products and services. The AMA's purchasing strategy seeks to create value for suppliers through strategic relationships and coordination to improve forecasting, lead times, and efficiencies.
This is a summary about smart buildings, drawn from many sources in the literature. It covers what a smart building is, its definition and main characteristics, why smart buildings matter, and several related topics.
The document summarizes the harmonized microbial limit tests established in 2006 by the USP, EP, and JP pharmacopeias. The tests include microbial enumeration tests to determine total aerobic microbial count and total yeast and mold count, as well as tests for specified microorganisms like E. coli, Salmonella species, and Candida albicans. The tests involve preparing samples, incubating them in various growth media, and observing colonies to quantify microbes and identify pathogens based on standardized methods, limits, and interpretations. The harmonization aligned the structure, methods, and acceptance criteria used across different pharmacopeias to ensure microbial safety of non-sterile pharmaceutical products.
Factored Operating Systems (fos) - The Case for a Scalable Operating System for Multicores - Designing a new operating system targeting manycore systems with scalability as the primary design constraint, where space sharing replaces time sharing to increase scalability.
Leveraging OpenStack Cinder for Peak Application Performance - NetApp
Deploying performance-sensitive, database-driven applications in OpenStack can be challenging if you are unsure how to use the Cinder API to get the most out of your OpenStack block storage.
This presentation:
- Introduces Cinder, the OpenStack block storage service
- Talks about the unique attributes of performance-sensitive applications and what this means in OpenStack
- Walks you through how to use Cinder volume types and extra specs to guarantee performance for your various cloud workloads (see the sketch after this list)
- Discusses OpenStack Trove and what it means for running database as a service in your OpenStack cloud
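A minimal sketch of that volume-type workflow, using python-cinderclient, might look as follows. Everything here is illustrative: the auth call is shown in its simplest legacy form (production code would build a keystoneauth session), and the extra-spec keys and the "high-iops" type name are examples, since valid QoS specs depend on the backend driver.

```python
from cinderclient import client as cinder_client

# Placeholders: real deployments pass a keystoneauth session and proper
# domain/project scoping instead of bare positional credentials.
cinder = cinder_client.Client(
    "3", "admin", "secret", "admin", "http://controller:5000/v3")

# A volume type groups extra specs that the scheduler and backend act on.
# The exact extra-spec keys are backend-specific (these are examples only).
vtype = cinder.volume_types.create("high-iops")
vtype.set_keys({"qos:minIOPS": "1000", "qos:maxIOPS": "5000"})

# Volumes created with this type land on a backend that honors those specs.
vol = cinder.volumes.create(size=100, name="db-data", volume_type="high-iops")
print(vol.id, vol.status)
```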
This document provides a high-level overview of key considerations for building a computer cluster, including:
- Gathering requirements for operations, dataflow, and compute needs.
- Designing for reliability, scalability, and failure tolerance.
- Choosing appropriate rack servers and network switches.
- Using configuration management tools to automate server provisioning and updates.
- Implementing monitoring and metrics collection to detect and diagnose issues.
- Deploying software in a controlled, repeatable manner across integration, test, and production environments.
MySQL is commonly used as the default database in OpenStack. It provides high availability through options like Galera and MySQL Group Replication. Galera is a third party active/active cluster that provides synchronous replication, while Group Replication is a native MySQL plugin that also enables active/active clusters with built-in conflict detection. MySQL NDB Cluster is an alternative that provides in-memory data storage with automatic sharding and strong consistency across shards. Both Galera/Group Replication and NDB Cluster can be used to implement highly available MySQL services in OpenStack environments.
OpenStack Days East -- MySQL Options in OpenStack - Matt Lord
In most production OpenStack installations, you want the backing metadata store to be highly available. For this, the de facto standard has become MySQL+Galera. In order to help you meet this basic use case even better, I will introduce you to the brand new native MySQL HA solution called MySQL Group Replication. This allows you to easily go from a single instance of MySQL to a MySQL service that's natively distributed and highly available, while eliminating the need for any third party library and implementations.
If you have an extremely large OpenStack installation in production, then you are likely to eventually run into write scaling issues and the metadata store itself can become a bottleneck. For this use case, MySQL NDB Cluster can allow you to linearly scale the metadata store as your needs grow. I will introduce you to the core features of MySQL NDB Cluster--which include in-memory OLTP, transparent sharding, and support for active/active multi-datacenter clusters--that will allow you to meet even the most demanding of use cases with ease.
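A small companion sketch: once Group Replication backs an OpenStack metadata store, cluster health can be checked from Python by reading performance_schema.replication_group_members. The mysql-connector-python driver and the credentials below are assumptions; the table itself is the standard Group Replication membership view.

```python
import mysql.connector  # assumption: mysql-connector-python; any driver works

# Placeholder credentials; point this at any member of the group.
conn = mysql.connector.connect(host="127.0.0.1", user="root", password="secret")
cur = conn.cursor()

# One row per group member with its current state (ONLINE, RECOVERING, ...).
cur.execute(
    "SELECT member_host, member_port, member_state "
    "FROM performance_schema.replication_group_members"
)
for host, port, state in cur.fetchall():
    print(f"{host}:{port} -> {state}")

cur.close()
conn.close()
```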
Percona Live 4/14/15: Leveraging open stack cinder for peak application perfo... - Tesora
In this session, speakers Amrith Kumar (Tesora), Steven Walchek (SolidFire), and Chris Merz (SolidFire) discuss Cinder, the OpenStack block storage service, and OpenStack Trove.
Data Lake and the rise of the microservices - Bigstep
By simply looking at structured and unstructured data, Data Lakes enable companies to understand correlations between existing and new external data - such as social media - in ways traditional Business Intelligence tools cannot.
For this you need to find out the most efficient way to store and access structured or unstructured petabyte-sized data across your entire infrastructure.
In this meetup we’ll answer the following questions:
1. Why would someone use a Data Lake?
2. Is it hard to build a Data Lake?
3. What are the main features that a Data Lake should bring in?
4. What’s the role of the microservices in the big data world?
Kudu is an open source storage layer developed by Cloudera that provides low latency queries on large datasets. It uses a columnar storage format for fast scans and an embedded B-tree index for fast random access. Kudu tables are partitioned into tablets that are distributed and replicated across a cluster. The Raft consensus algorithm ensures consistency during replication. Kudu is suitable for applications requiring real-time analytics on streaming data and time-series queries across large datasets.
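To make the partitioning idea concrete, here is a toy Python sketch of how hash partitioning maps row keys onto a fixed set of tablets, each placed on several servers for Raft replication. It deliberately does not use the real Kudu client API; the hash function, tablet count, and server names are all illustrative.

```python
import hashlib

NUM_TABLETS = 4
REPLICATION = 3
SERVERS = ["ts-1", "ts-2", "ts-3", "ts-4", "ts-5"]

def tablet_for(primary_key: str) -> int:
    """Hash-partition a row key onto one of NUM_TABLETS tablets
    (Kudu uses its own hashing internally; this is only illustrative)."""
    h = int.from_bytes(hashlib.md5(primary_key.encode()).digest()[:8], "big")
    return h % NUM_TABLETS

def replicas_for(tablet: int) -> list:
    """Place REPLICATION copies of a tablet on distinct servers;
    one replica acts as the Raft leader that serializes writes."""
    return [SERVERS[(tablet + i) % len(SERVERS)] for i in range(REPLICATION)]

for key in ("host1#2024-01-01T00:00", "host2#2024-01-01T00:00"):
    t = tablet_for(key)
    print(f"{key!r} -> tablet {t} on {replicas_for(t)}")
```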
Get the Facts: Oracle's Unbreakable Enterprise Kernel - Terry Wang
1) Oracle introduced the Unbreakable Enterprise Kernel for Oracle Linux, which is optimized for Oracle software and provides significant performance gains over the Red Hat compatible kernel.
2) The Unbreakable Enterprise Kernel includes many new features like improved power management, data integrity, and diagnostic tools.
3) Oracle recommends customers use the Unbreakable Enterprise Kernel for all Oracle software on Linux, though it will continue to support the Red Hat compatible kernel.
Storage Requirements and Options for Running Spark on Kubernetes - DataWorks Summit
In a world of serverless computing, users tend to be frugal when it comes to expenditure on compute, storage, and other resources, so paying for those resources when they aren't in use becomes a significant factor. Offering Spark as a service in the cloud presents unique challenges, and running Spark on Kubernetes presents many more, especially around storage and persistence. Spark workloads have unique storage requirements for intermediate data, long-term persistence, and shared file systems, and these requirements become even tighter when Spark is offered as a service and the enterprise must manage GDPR and other compliance regimes such as ISO 27001 and HIPAA certifications.
This talk covers the challenges involved in providing serverless Spark clusters and shares the specific issues one can encounter when running large Kubernetes clusters in production, especially scenarios related to persistence.
It will help people using Kubernetes or the Docker runtime in production understand the various storage options available, which are most suitable for running Spark workloads on Kubernetes, and what more can be done.
The document provides an overview of the Linux operating system, including:
- An introduction to Linux and its history as an open-source clone of UNIX.
- Descriptions of Linux's core functionality like multi-user support and virtual memory.
- Discussions of key Linux components like kernels, distributions, packages, and updates.
- Explanations of enterprise-level Linux features around performance, scalability, and reliability.
This document discusses storage requirements for running Spark workloads on Kubernetes. It recommends using a distributed file system like HDFS or DBFS for distributed storage and emptyDir or NFS for local temp scratch space. Logs can be stored in emptyDir or pushed to object storage. Features that would improve Spark on Kubernetes include image volumes, flexible PV to PVC mappings, encrypted volumes, and clean deletion for compliance. The document provides an overview of Spark, Kubernetes benefits, and typical Spark deployments.
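For the PV/PVC option, Spark's Kubernetes integration exposes volume mounts as ordinary configuration keys. The sketch below shows the documented spark.kubernetes.*.volumes.* pattern from PySpark; the volume name, claim name, and mount path are placeholders, and whether shuffle data should land there depends on the workload.

```python
from pyspark.sql import SparkSession

# Volume name "scratch", claim "spark-scratch-pvc" and the mount path are
# placeholders; the spark.kubernetes.*.volumes.* key pattern is the one
# documented for Spark on Kubernetes.
spark = (
    SparkSession.builder
    .appName("pvc-backed-scratch")
    .config("spark.kubernetes.executor.volumes.persistentVolumeClaim."
            "scratch.options.claimName", "spark-scratch-pvc")
    .config("spark.kubernetes.executor.volumes.persistentVolumeClaim."
            "scratch.mount.path", "/scratch")
    # Point spill/shuffle scratch space at the mounted volume instead of emptyDir.
    .config("spark.local.dir", "/scratch")
    .getOrCreate()
)

print(spark.sparkContext.getConf().get("spark.local.dir"))
```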
Oracle will continue investing in both Solaris and Linux operating systems. It will optimize both OSes for applications through disk and deliver world-class support at the lowest total cost of ownership. Oracle's virtualization strategy offers comprehensive virtualization from desktop to data center, including Oracle VM Server, Oracle VM VirtualBox, and Oracle Virtual Desktop Infrastructure.
Sanger, upcoming Openstack for Bio-informaticians - Peter Clapham
Delivery of a new Bio-informatics infrastructure at the Wellcome Trust Sanger Center. We include how to programmatically create, manage, and provide provenance for images used both at Sanger and elsewhere, using open source tools and continuous integration.
For certain workloads and environments: consolidation on large virtualized servers raises utilization, reduces core requirements, and lowers cost per workload.
The document summarizes the advantages of IBM LinuxONE systems over traditional x86 servers for running Linux workloads. LinuxONE systems provide massive scale with high performance, throughput, and security across many workloads like MongoDB, Docker containers, and virtual machines. They also have significantly lower total cost of ownership compared to solutions on x86 servers due to higher utilization rates and lower management costs.
This document provides an overview of how to create your own cloud using Apache CloudStack. It discusses the key characteristics of clouds, different cloud service and deployment models supported by CloudStack, and the core components that make up a CloudStack deployment including zones, pods, clusters, primary and secondary storage, virtual routers, hypervisors, and the management server. The document also touches on CloudStack's networking, security, high availability, resource allocation, and usage accounting features.
HPC and cloud distributed computing, as a journey - Peter Clapham
Introducing an internal cloud brings new paradigms, tools, and infrastructure management. When placed alongside traditional HPC, the new opportunities are significant. But getting to the new world with micro-services, autoscaling and autodialing is a journey that cannot be achieved in a single step.
GlobalLogic Java Community Webinar #18 “How to Improve Web Application Perfor... - GlobalLogic Ukraine
During the talk we will answer the questions of why application performance needs to be improved and what the most effective ways to do this are. We will also discuss what a cache is, what types of caches exist, and, most importantly, how to find a performance bottleneck.
Video and event details: https://bit.ly/45tILxj
From Natural Language to Structured Solr Queries using LLMs - Sease
This talk draws on experimentation to enable AI applications with Solr. One important use case is to use AI for better accessibility and discoverability of the data: while User eXperience techniques, lexical search improvements, and data harmonization can take organizations to a good level of accessibility, a structural (or "cognitive") gap remains between the data user needs and the data producer constraints.
That is where AI – and most importantly, Natural Language Processing and Large Language Model techniques – could make a difference. This natural language, conversational engine could facilitate access and usage of the data leveraging the semantics of any data source.
The objective of the presentation is to propose a technical approach and a way forward to achieve this goal.
The key concept is to enable users to express their search queries in natural language, which the LLM then enriches, interprets, and translates into structured queries based on the Solr index’s metadata.
This approach leverages the LLM’s ability to understand the nuances of natural language and the structure of documents within Apache Solr.
The LLM acts as an intermediary agent, offering a transparent experience to users automatically and potentially uncovering relevant documents that conventional search methods might overlook. The presentation will include the results of this experimental work, lessons learned, best practices, and the scope of future work that should improve the approach and make it production-ready.
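A minimal sketch of the flow, under obvious assumptions: translate_to_solr stands in for the LLM call (hard-coded here just to show the expected output shape), the Solr core URL and field names are examples, and the request goes to Solr's standard /select handler via the requests library.

```python
import requests

SOLR = "http://localhost:8983/solr/products"  # placeholder core

def translate_to_solr(question: str) -> dict:
    """Hypothetical LLM step: prompt a model with the question plus the
    index's field metadata and parse its answer into Solr parameters.
    Hard-coded here purely to illustrate the output shape."""
    return {
        "q": "laptop",
        "fq": ["price:[* TO 800]", "in_stock:true"],  # example fields
        "sort": "price asc",
        "rows": 10,
    }

def search(question: str):
    params = translate_to_solr(question)
    # /select with q, fq, sort and rows is Solr's standard query handler;
    # requests repeats the fq parameter for each list element.
    resp = requests.get(f"{SOLR}/select", params={**params, "wt": "json"})
    resp.raise_for_status()
    return resp.json()["response"]["docs"]

for doc in search("cheap laptops in stock, cheapest first"):
    print(doc)
```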
Northern Engraving | Nameplate Manufacturing Process - 2024 - Northern Engraving
Manufacturing custom quality metal nameplates and badges involves several standard operations. Processes include sheet prep, lithography, screening, coating, punch press and inspection. All decoration is completed in the flat sheet with adhesive and tooling operations following. The possibilities for creating unique durable nameplates are endless. How will you create your brand identity? We can help!
How information systems are built or acquired puts information, which is what they should be about, in a secondary place. Our language adapted accordingly, and we no longer talk about information systems but applications. Applications evolved in a way to break data into diverse fragments, tightly coupled with applications and expensive to integrate. The result is technical debt, which is re-paid by taking even bigger "loans", resulting in an ever-increasing technical debt. Software engineering and procurement practices work in sync with market forces to maintain this trend. This talk demonstrates how natural this situation is. The question is: can something be done to reverse the trend?
In the realm of cybersecurity, offensive security practices act as a critical shield. By simulating real-world attacks in a controlled environment, these techniques expose vulnerabilities before malicious actors can exploit them. This proactive approach allows manufacturers to identify and fix weaknesses, significantly enhancing system security.
This presentation delves into the development of a system designed to mimic Galileo's Open Service signal using software-defined radio (SDR) technology. We'll begin with a foundational overview of both Global Navigation Satellite Systems (GNSS) and the intricacies of digital signal processing.
The presentation culminates in a live demonstration. We'll showcase the manipulation of Galileo's Open Service pilot signal, simulating an attack on various software and hardware systems. This practical demonstration serves to highlight the potential consequences of unaddressed vulnerabilities, emphasizing the importance of offensive security practices in safeguarding critical infrastructure.
"$10 thousand per minute of downtime: architecture, queues, streaming and fin...Fwdays
Direct losses from one minute of downtime are $5-$10 thousand. Reputation is priceless.
As part of the talk, we will consider the architectural strategies needed to build highly loaded fintech solutions. We will focus on using queues and streaming to work with and manage large amounts of data efficiently in real time and to minimize latency.
We will pay special attention to the architectural patterns used in the design of the fintech system, microservices, and event-driven architecture, which ensure scalability, fault tolerance, and consistency across the entire system.
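As a toy illustration of that decoupling (not the speakers' stack; the event fields are made up), the sketch below enqueues payment events and drains them on a separate thread, so slow downstream work does not block ingestion:

import queue
import threading
import time

events = queue.Queue()

def producer(n):
    # Ingestion path: enqueue events immediately instead of processing inline.
    for i in range(n):
        events.put({"payment_id": i, "amount_usd": 100 + i})

def consumer():
    while True:
        event = events.get()
        if event is None:      # sentinel: shut down
            break
        time.sleep(0.01)       # stand-in for slow downstream processing
        print("settled", event["payment_id"])

worker = threading.Thread(target=consumer)
worker.start()
producer(5)
events.put(None)
worker.join()

In a real fintech system the in-process queue would be replaced by a durable broker or a streaming platform, but the decoupling principle is the same.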
Dandelion Hashtable: beyond billion requests per second on a commodity server – Antonios Katsarakis
This slide deck presents DLHT, a concurrent in-memory hashtable. Despite optimization efforts that go as far as sacrificing core functionality, state-of-the-art hashtable designs still incur multiple memory accesses per request and block request processing in three cases. First, most hashtables block while waiting for data to be retrieved from memory. Second, open-addressing designs, which represent the current state of the art, either cannot free index slots on deletes or must block all requests to do so. Third, index resizes block every request until all objects are copied to the new index. Defying folklore wisdom, DLHT forgoes open addressing and adopts a fully featured and memory-aware closed-addressing design based on bounded cache-line chaining. This design (1) offers lock-free index operations and deletes that free slots instantly, (2) completes most requests with a single memory access, (3) utilizes software prefetching to hide memory latencies, and (4) employs a novel non-blocking and parallel resizing. On a commodity server with a memory-resident workload, DLHT surpasses 1.6B requests per second and provides 3.5x (12x) the throughput of the state-of-the-art closed-addressing (open-addressing) resizable hashtable on Gets (Deletes).
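To make the closed-addressing idea concrete, here is a toy, single-threaded Python sketch of bounded chained buckets; it is not DLHT and has none of its lock-free, prefetching, or resizing machinery, but it shows how a bucket holds a fixed number of slots and only chains to an overflow bucket when full:

BUCKET_SLOTS = 7  # roughly what fits in one cache line next to a pointer

class Bucket:
    def __init__(self):
        self.items = []    # up to BUCKET_SLOTS (key, value) pairs
        self.next = None   # overflow bucket, allocated only when needed

class BoundedChainTable:
    def __init__(self, n_buckets=1024):
        self.buckets = [Bucket() for _ in range(n_buckets)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        b = self._bucket(key)
        while True:
            for i, (k, _) in enumerate(b.items):
                if k == key:
                    b.items[i] = (key, value)   # update in place
                    return
            if len(b.items) < BUCKET_SLOTS:
                b.items.append((key, value))    # free slot in this bucket
                return
            if b.next is None:
                b.next = Bucket()               # chain only when full
            b = b.next

    def get(self, key):
        b = self._bucket(key)
        while b is not None:
            for k, v in b.items:
                if k == key:
                    return v
            b = b.next
        return None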
This talk will cover ScyllaDB Architecture from the cluster-level view and zoom in on data distribution and internal node architecture. In the process, we will learn the secret sauce used to get ScyllaDB's high availability and superior performance. We will also touch on the upcoming changes to ScyllaDB architecture, moving to strongly consistent metadata and tablets.
Conversational agents, or chatbots, are increasingly used to access all sorts of services using natural language. While open-domain chatbots - like ChatGPT - can converse on any topic, task-oriented chatbots - the focus of this paper - are designed for specific tasks, like booking a flight, obtaining customer support, or setting an appointment. Like any other software, task-oriented chatbots need to be properly tested, usually by defining and executing test scenarios (i.e., sequences of user-chatbot interactions). However, there is currently a lack of methods to quantify the completeness and strength of such test scenarios, which can lead to low-quality tests, and hence to buggy chatbots.
To fill this gap, we propose adapting mutation testing (MuT) for task-oriented chatbots. To this end, we introduce a set of mutation operators that emulate faults in chatbot designs, an architecture that enables MuT on chatbots built using heterogeneous technologies, and a practical realisation as an Eclipse plugin. Moreover, we evaluate the applicability, effectiveness and efficiency of our approach on open-source chatbots, with promising results.
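As a hypothetical illustration of what a chatbot mutation operator can look like (this is not the authors' operator set or Eclipse plugin), the sketch below derives mutants from an intent definition by deleting one training phrase at a time; a mutant that no test scenario kills suggests the tests never exercise that phrasing:

import copy

def delete_training_phrase_mutants(intent):
    """intent = {'name': str, 'training_phrases': [str, ...]}"""
    for i in range(len(intent["training_phrases"])):
        mutant = copy.deepcopy(intent)
        removed = mutant["training_phrases"].pop(i)
        mutant["mutation"] = "deleted training phrase: " + repr(removed)
        yield mutant

book_flight = {
    "name": "BookFlight",
    "training_phrases": ["book a flight", "I need a plane ticket", "fly me to Paris"],
}
for m in delete_training_phrase_mutants(book_flight):
    print(m["mutation"])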
The Department of Veteran Affairs (VA) invited Taylor Paschal, Knowledge & Information Management Consultant at Enterprise Knowledge, to speak at a Knowledge Management Lunch and Learn hosted on June 12, 2024. All Office of Administration staff were invited to attend and received professional development credit for participating in the voluntary event.
The objectives of the Lunch and Learn presentation were to:
- Review what KM ‘is’ and ‘isn’t’
- Understand the value of KM and the benefits of engaging
- Define and reflect on your “what’s in it for me?”
- Share actionable ways you can participate in Knowledge Capture & Transfer
"Choosing proper type of scaling", Olena SyrotaFwdays
Imagine an IoT processing system that is already quite mature and production-ready, whose client coverage is growing, and for which scaling and performance are life-and-death questions. The system uses Redis, MongoDB, and stream processing based on ksqlDB. In this talk we will first analyze scaling approaches and then select the proper ones for our system.
QR Secure: A Hybrid Approach Using Machine Learning and Security Validation F... – AlexanderRichford
QR Secure: A Hybrid Approach Using Machine Learning and Security Validation Functions to Prevent Interaction with Malicious QR Codes.
Aim of the Study: The goal of this research was to develop a robust hybrid approach for identifying malicious and insecure URLs derived from QR codes, ensuring safe interactions.
This is achieved through:
Machine Learning Model: Predicts the likelihood of a URL being malicious.
Security Validation Functions: Ensures the derived URL has a valid certificate and proper URL format.
This innovative blend of technologies aims to enhance cybersecurity measures and protect users from potential threats hidden within QR codes. 🖥 🔒
This study was my first introduction to using ML, and it has shown me the immense potential of ML in creating more secure digital environments!
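A minimal sketch of such a hybrid check might look like the following; it is an assumption-laden illustration rather than the study's code, and predict_malicious_probability stands in for the trained model:

import socket
import ssl
from urllib.parse import urlparse

def predict_malicious_probability(url):
    """Placeholder for the trained ML model's prediction (0.0 to 1.0)."""
    raise NotImplementedError

def has_valid_format(url):
    parsed = urlparse(url)
    return parsed.scheme == "https" and bool(parsed.netloc)

def has_valid_certificate(url, timeout=5.0):
    host = urlparse(url).hostname
    context = ssl.create_default_context()
    try:
        with socket.create_connection((host, 443), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True  # handshake succeeded with a trusted certificate
    except (ssl.SSLError, OSError):
        return False

def is_safe(url, threshold=0.5):
    # Only pass URLs that satisfy both the validation functions and the model.
    return (
        has_valid_format(url)
        and has_valid_certificate(url)
        and predict_malicious_probability(url) < threshold
    )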
The Microsoft 365 Migration Tutorial For Beginner.pptx – operationspcvita
This presentation will help you understand the power of Microsoft 365. It covers every productivity app included in Office 365, outlines the migration scenarios related to Office 365, and explains how we can help you.
You can also read: https://www.systoolsgroup.com/updates/office-365-tenant-to-tenant-migration-step-by-step-complete-guide/
Must Know Postgres Extension for DBA and Developer during Migration – Mydbops
Mydbops Opensource Database Meetup 16
Topic: Must-Know PostgreSQL Extensions for Developers and DBAs During Migration
Speaker: Deepak Mahto, Founder of DataCloudGaze Consulting
Date & Time: 8th June | 10 AM - 1 PM IST
Venue: Bangalore International Centre, Bangalore
Abstract: Discover how PostgreSQL extensions can be your secret weapon! This talk explores how key extensions enhance database capabilities and streamline the migration process for users moving from other relational databases like Oracle.
Key Takeaways:
* Learn about crucial extensions like oracle_fdw, pgtt, and pg_audit that ease migration complexities (a short sketch of enabling them follows this list).
* Gain valuable strategies for implementing these extensions in PostgreSQL to achieve license freedom.
* Discover how these key extensions can empower both developers and DBAs during the migration process.
* Don't miss this chance to gain practical knowledge from an industry expert and stay updated on the latest open-source database trends.
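As a rough sketch of enabling such extensions from Python (assuming psycopg2, a reachable PostgreSQL instance, and that the extension packages are already installed on the server; the auditing extension is typically registered as "pgaudit", and the DSN below is illustrative):

import psycopg2

EXTENSIONS = ["oracle_fdw", "pgtt", "pgaudit"]  # names assumed; verify on your instance

def enable_extensions(dsn):
    conn = psycopg2.connect(dsn)
    conn.autocommit = True    # so one failing extension does not abort the rest
    with conn.cursor() as cur:
        for ext in EXTENSIONS:
            try:
                cur.execute("CREATE EXTENSION IF NOT EXISTS {};".format(ext))
                print("enabled:", ext)
            except psycopg2.Error as exc:
                print("could not enable {}: {}".format(ext, exc))
    conn.close()

enable_extensions("dbname=migrationdb user=postgres host=localhost")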
Mydbops Managed Services specializes in taking the pain out of database management while optimizing performance. Since 2015, we have been providing top-notch support and assistance for the top three open-source databases: MySQL, MongoDB, and PostgreSQL.
Our team offers a wide range of services, including assistance, support, consulting, 24/7 operations, and expertise in all relevant technologies. We help organizations improve their database's performance, scalability, efficiency, and availability.
Contact us: info@mydbops.com
Visit: https://www.mydbops.com/
Follow us on LinkedIn: https://in.linkedin.com/company/mydbops
For more details and updates, please follow up the below links.
Meetup Page : https://www.meetup.com/mydbops-databa...
Twitter: https://twitter.com/mydbopsofficial
Blogs: https://www.mydbops.com/blog/
Facebook(Meta): https://www.facebook.com/mydbops/
"NATO Hackathon Winner: AI-Powered Drug Search", Taras KlobaFwdays
This is a session that details how PostgreSQL's features and Azure AI Services can be effectively used to significantly enhance the search functionality in any application.
In this session, we'll share insights on how we used PostgreSQL to facilitate precise searches across multiple fields in our mobile application. The techniques include using LIKE and ILIKE operators and integrating a trigram-based search to handle potential misspellings, thereby increasing the search accuracy.
We'll also discuss how the azure_ai extension on PostgreSQL databases in Azure and Azure AI Services were utilized to create vectors from user input, a feature beneficial when users wish to find specific items based on text prompts. While our application's case study involves a drug search, the techniques and principles shared in this session can be adapted to improve search functionality in a wide range of applications. Join us to learn how PostgreSQL and Azure AI can be harnessed to enhance your application's search capability.
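For example, a trigram-based fuzzy lookup like the one described could be sketched as follows (table and column names are hypothetical; the pg_trgm extension must be available on the server):

import psycopg2

def fuzzy_drug_search(dsn, user_input, limit=10):
    conn = psycopg2.connect(dsn)
    conn.autocommit = True
    with conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS pg_trgm;")
        # similarity() tolerates misspellings such as 'paracetamul'
        cur.execute(
            """
            SELECT name, similarity(name, %s) AS score
            FROM drugs
            WHERE name %% %s   -- %% is psycopg2's escape for pg_trgm's % operator
            ORDER BY score DESC
            LIMIT %s;
            """,
            (user_input, user_input, limit),
        )
        rows = cur.fetchall()
    conn.close()
    return rows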
2. Disclaimer
This lecture describes solely my personal opinion. The information might not be accurate and might be subject to change at any time.
It does not represent the opinion of any other company or institute with which I am affiliated.
You are encouraged to participate in the lecture and to share your own opinion.
3. How do we compare operating systems?
To compare the Solaris and Linux operating systems, we need to define several criteria:
• Goal – what is the purpose of the operating system?
• Usability – who is using the operating system?
• Quality – how is the operating system built?
5. Solaris vs. Linux – Purpose
Linux:
• Embedded
• Tablet/Phones
• Server x86/x86_64 – growing application coverage, good support for databases
• Heavy duty (Mainframe, Itanium) – minimal ISV install base, poor support for databases
Solaris:
• No availability for embedded or tablet/phone platforms
• Server x86/x86_64 (Intel) – large ISV install base, better support for databases
• Heavy duty (SPARC) – large ISV install base, better support for databases
7. Solaris vs. Linux – Role and Demand
Managers demand consistency and high system throughput:
• Linux – good stability
• Solaris – excellent stability
End users demand low application response time:
• Linux – good HW/SW integration
• Solaris – excellent HW/SW integration
Programmers demand fast access to system resources:
• Linux – excellent APIs, good binary compatibility
• Solaris – good APIs, excellent binary compatibility
System administrators demand the ability to install and administer the system easily:
• Linux – good administration ability
• Solaris – excellent administration ability
8. Solaris vs. Linux – Quality
(Solaris technology vs. its Linux counterpart)
• Hardware integration: Intel, SPARC vs. Intel/Mainframe
• Kernel: well engineered vs. well developed
• File system: ZFS vs. ext4/btrfs
• Networking: network virtualization vs. regular network
• Scheduling: scheduling classes vs. optional APIs
• IO & storage: Multipathing/COMSTAR vs. standard device mechanism
• Virtualization: Zones and OVM for SPARC vs. LXC and SW hypervisors
• Installation: JumpStart/AI vs. Kickstart
• Packaging: IPS vs. RPM
• Services: SMF vs. SVR4
9. Hardware Integration – Solaris x86
Integration with Intel CPUs
Sun Microsystems and Intel have been collaborating since 2007.
11. Hardware Integration – Solaris SPARC
SPARC – the fastest microprocessor in the world, a best-of-breed architecture.
CPU features:
• Accelerated cryptography – cryptography is done in hardware.
• Critical thread optimization – a core can be used in two ways: 8 hardware threads when multithreaded behavior is needed, or 1 hardware thread when single-thread-intensive processing is needed.
• A multithreaded hypervisor – makes better use of the virtual environment in Oracle VM for SPARC by splitting hypervisor operations across several hardware threads.
12. Hardware Integration – Linux x86
Operating systems: CentOS, Red Hat, Oracle Linux, Oracle Solaris, SUSE, Ubuntu
Hardware vendors: HP, Oracle, Dell, IBM
Whereas most Linux distributions require a complex support matrix across hardware vendors, Oracle Linux and Oracle Solaris are better adjusted to Oracle hardware.
13. Kernel
Solaris – well engineered:
• Binary compatibility
• Kernel debugger for live and postmortem analysis (mdb, crash analysis)
• Security (RBAC aware)
• Well-defined APIs
vs.
Linux – well developed:
• 18K lines changed in one day
• Much more feature rich
• Scheduling
• Security (RBAC aware)
• Constant changes in APIs
14. File System
ZFS:
• Matured
• Ease of administration
• No evacuation of disk (until BPR is implemented)
• Integrated with DTrace for better observation, monitoring, and analysis
• Integrated with the Image Packaging System
vs.
ext4/btrfs:
• ext4 is very old; btrfs is still new and not implemented in most distributions
• Use the old UNIX/POSIX command semantics
• It sometimes takes 2-4 btrfs commands to do what one zfs command does
More info:
http://www.seedsofgenius.net/solaris/zfs-vs-btrfs-a-reference
15. Networking
Solaris – network virtualization:
• Allows virtual objects – VNICs, virtual switches.
• Well engineered.
• Structured driver model – the hardware driver layer is separated from the other layers.
• Structured administration model (dladm, ipadm).
• Move from file-based to database-backed configuration.
• Configuration is object driven (e.g. addresses are now objects) rather than text driven (using files).
• Flow (QoS) administration.
• The network configuration is implemented as a service, with a dependency mechanism.
vs.
Linux – regular network:
• Basic network configuration with no virtualization.
• The driver has one static implementation for all of its functionality.
• Configuration is in old text files.
• Most of the configuration is spread over several files.
16. Scheduling
Solaris – scheduling classes:
• A variety of scheduling classes (dispadmin -l).
• FSS – the Fair Share Scheduler.
• Ability to configure a scheduling class if needed.
• Real-time and fixed-priority classes can be used very easily, with no programming skills required.
vs.
Linux – optional APIs:
• Basic scheduling.
• nice for configuring priorities.
17. IO & Storage
Multipathing COMSTAR
• Rich Multipathing support
MP supports cross
protocols.
• Wider support for:
• Infiniband
• FC
• FCoE
• Iscsi
• COMSTAR –
• Ability to create
software defined
storage – with lun
provisioning
vs.
Standard
• Standard IO ability
18. Virtualization
Solaris – local virtualization (Zones) or HW virtualization:
• Zones – well engineered, well embraced, rich resource management ability.
• OVM for SPARC – hypervisor on chip, enterprise-class virtualization, supports the Oracle stack.
vs.
Linux – local virtualization (LXC) or SW hypervisors:
• LXC – not yet embraced.
• SW hypervisors – a variety of Linux-based hypervisors: Xen, VMware, and KVM based.
20. Packaging
IPS:
• Feature-rich packaging system.
• Integrated with ZFS.
• Contains a dependency facility.
• Patch mechanism is integrated into the packaging system.
vs.
RPM:
• Matured packaging system.
• Introduced a dependency facility.
21. Services
SMF:
• Feature-rich services mechanism.
• DB driven, with XML configuration semantics.
• Allows dependencies.
• Allows administering service configuration, and rolling back a configuration if needed.
vs.
SVR4:
• Very old services mechanism.
• Text based.
• No dependencies.
• No ability to roll back service configuration.