The document provides an overview of Sun Cluster 3.0 including its main components, architecture, application support, basic concepts, installation process, and administration commands. It describes the global and userland components, differentiates Sun Cluster 3.0 from other cluster software, and outlines the steps for installing and configuring a Sun Cluster 3.0 system.
Juniper Chassis Cluster Configuration with SRX-1500s (Ashutosh Patel)
This article identifies resources for understanding, configuring, and verifying high availability, or "chassis cluster" in Juniper's terms, on Juniper's SRX 1500 Series firewalls. You can use this article as a reference for configuring the chassis cluster on your SRX firewalls. This configuration has been tested and works as expected. I hope this helps you.
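As a rough illustration of what such a setup looks like, here is a minimal Junos configuration sketch for an SRX1500 pair. The host names, interface choices, and address are invented for illustration; consult the article and Juniper's documentation for the exact fabric and control ports on your platform.

```
## Operational mode, on each node (assigns cluster ID and node ID, then reboots):
# node0> set chassis cluster cluster-id 1 node 0 reboot
# node1> set chassis cluster cluster-id 1 node 1 reboot

## Configuration mode, after both nodes rejoin as a cluster:
set groups node0 system host-name srx1500-a
set groups node1 system host-name srx1500-b
set apply-groups "${node}"
set interfaces fab0 fabric-options member-interfaces ge-0/0/2
set interfaces fab1 fabric-options member-interfaces ge-7/0/2
set chassis cluster reth-count 2
set chassis cluster redundancy-group 0 node 0 priority 200
set chassis cluster redundancy-group 0 node 1 priority 100
set chassis cluster redundancy-group 1 node 0 priority 200
set chassis cluster redundancy-group 1 node 1 priority 100
set interfaces ge-0/0/3 gigether-options redundant-parent reth0
set interfaces ge-7/0/3 gigether-options redundant-parent reth0
set interfaces reth0 redundant-ether-options redundancy-group 1
set interfaces reth0 unit 0 family inet address 192.0.2.1/24
```

Cluster health can then be checked with `show chassis cluster status` and `show chassis cluster interfaces`.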
Conga is a centralized tool for configuring and managing Red Hat clusters and storage through a web interface. It has three main components - luci server, ricci agents, and a database. Luci provides tools on different tabs for adding/removing systems, configuring clusters and storage. Command line tools like ccs_tool and clustat can also manage clusters. The Piranha Configuration Tool is a web GUI for configuring Linux Virtual Server load balancing through settings like interfaces, redundancy, and virtual servers.
This document provides instructions for setting up a Linux high-performance computing cluster. It details how to configure the master node to function as a DHCP and TFTP server to provide network installation files to client nodes. The document is divided into sections covering master node configuration such as network setup, DHCP, NFS, and key configuration; software installation including compilers, job schedulers, and scientific packages; and client node installation using PXE network boot.
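For the master-node DHCP/TFTP step, an ISC dhcpd configuration for PXE network boot typically looks like the following fragment. The subnet, address range, and server IP are placeholders for your environment.

```
# /etc/dhcp/dhcpd.conf -- PXE boot for cluster client nodes
# (subnet, addresses, and server IP are hypothetical)
subnet 192.168.1.0 netmask 255.255.255.0 {
  range 192.168.1.100 192.168.1.200;
  option routers 192.168.1.1;
  next-server 192.168.1.1;      # TFTP server (the master node)
  filename "pxelinux.0";        # boot loader served over TFTP
}
```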
Deep dive into Quantum
1. Quantum is the network connectivity service for OpenStack that provides an API to dynamically request and configure virtual networks. It integrates virtual networks with other OpenStack services.
2. The Open vSwitch plugin uses a quantum agent to poll the local Open vSwitch instance and configure flows to implement the logical network model defined in the central database.
3. Plugins hide the backend network technology and provide a generic tenant API for creating and configuring virtual networks, while agents perform the actual network configuration on each physical host.
High availability clusters use redundant systems and components to minimize downtime from failures. An SRX cluster provides redundancy by grouping two SRX devices to act as a single device. Key components include the control plane, data plane, and redundancy groups. The control plane ensures only one configuration between nodes. Redundancy groups contain objects that fail over together if monitoring detects failures.
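The redundancy-group behavior described above can be sketched as a toy priority election in Python. This is an illustration of the general idea only, not Junos behavior in detail; node names and priorities are invented.

```python
def elect_primary(priorities, healthy, preempt=False, current=None):
    """Pick the primary node for a redundancy group.

    priorities: dict mapping node name -> configured priority (higher wins)
    healthy:    set of nodes whose monitored objects are all up
    preempt:    if True, a higher-priority node takes the group back
    current:    node currently holding primacy, if any
    """
    candidates = [n for n in priorities if n in healthy]
    if not candidates:
        return None  # total failure: no node can host the group
    # Without preemption, a healthy current primary keeps the group
    if not preempt and current in candidates:
        return current
    return max(candidates, key=lambda n: priorities[n])

# Example: node0 is preferred; if its monitored links fail, node1 takes over.
prios = {"node0": 200, "node1": 100}
```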
This document summarizes the architecture of Quantum, the network service for OpenStack. It discusses the key components of Quantum including the Quantum server, plugins, agents, and databases. It describes the network models in Quantum including tenant networks, provider networks, and floating IPs. It also outlines the communication between Quantum components using AMQP messaging.
Red Hat GFS (Global File System) is a cluster file system that allows nodes in a cluster to simultaneously access a shared block storage device. It employs distributed metadata and multiple journals to operate optimally in a cluster. GFS uses a lock manager to coordinate I/O and maintain file system integrity. It provides benefits like simplified data infrastructure management, maximized storage resource use, seamless cluster scaling, and high performance access to data. GFS can be deployed with different configurations to suit various needs for performance, scalability, and cost. It provides data sharing, a consistent namespace, and features required for enterprise environments.
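For a concrete sense of the cluster-aware pieces (the DLM lock protocol, the per-node journals), creating and mounting a GFS2 filesystem looks roughly like the following. The cluster name, filesystem name, journal count, and device path are hypothetical.

```
# Create a GFS2 filesystem for a 3-node cluster named "mycluster"
# -p: lock protocol, -t: clustername:fsname lock table, -j: one journal per node
mkfs.gfs2 -p lock_dlm -t mycluster:shared1 -j 3 /dev/vg_san/lv_shared
mount -t gfs2 /dev/vg_san/lv_shared /mnt/shared
```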
Implementation of multicast communication on the Internet
Individual hosts are configured as members of different multicast groups
A single host may be a member of many multicast groups
A single multicast group may have any number of member nodes
An IP multicast group is identified by a Class D address (224.0.0.0 – 239.255.255.255)
Every IP datagram sent to a multicast group is delivered to all members of the group
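The points above can be illustrated with Python's standard library: checking whether an address falls in the Class D range, and joining a group so the kernel delivers that group's datagrams to a socket. The group address used is arbitrary.

```python
import ipaddress
import socket
import struct

def is_multicast(addr: str) -> bool:
    """True if addr falls in the Class D range 224.0.0.0 - 239.255.255.255."""
    return ipaddress.IPv4Address(addr).is_multicast

def join_group(sock: socket.socket, group: str) -> None:
    """Join an IPv4 multicast group on any local interface."""
    mreq = struct.pack("4sl", socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# A host may call join_group() once per group to belong to many groups.
```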
Seven years ago at LCA, Van Jacobson introduced the concept of net channels, but since then user mode networking has not hit the mainstream. There are several different user mode networking environments: Intel DPDK, BSD netmap, and Solarflare OpenOnload. Each of these provides higher performance than standard Linux kernel networking, but also creates new problems. This talk will explore the issues created by user space networking, including performance, internal architecture, security, and licensing.
Is OpenStack Neutron production ready for large scale deployments? (Елена Ежова)
The document discusses the results of testing the scalability of OpenStack Neutron in large deployments. Two hardware labs with 378 and 200 nodes were used. The Rally and Shaker tools tested the control and data planes. Over 24,500 VMs were launched on the 200-node lab with no loss of data-plane connectivity. Near line-rate throughput was achieved in data-plane tests. Some issues, such as bugs and a Ceph failure, were encountered and fixed. The outcomes indicate Neutron can scale to large deployments.
Interop Tokyo 2014 SDI (Software Defined Infrastructure) Showcase Seminar presentation. The presentation covers Neutron API models (L2/L3 and advanced network services), the Neutron Icehouse update, and Juno topics.
The document discusses network automation tools including EVE-NG for lab automation, Ansible for configuration management, and A-Frame for bulk configuration. It provides details on setting up EVE-NG with supported images, using Ansible modules for automation tasks, and demonstrates the workflow for using A-Frame's interface to automate and deploy configurations across multiple devices.
OpenStack DVR (Distributed Virtual Router) allows L3 routing functions to be distributed across compute nodes by creating router namespaces on each compute node. This avoids bottlenecks and single points of failure at network nodes. DVR supports east-west inter-subnet routing, SNAT for external access without floating IPs, and floating IPs associated with internal VMs for direct external access. Traffic flows are encapsulated in VXLAN/GRE tunnels between compute nodes and routed appropriately within each node's router namespace.
ONOS SDN Controller - Clustering Tests &amp; Experiments (Eueung Mulyana)
The document describes setting up an ONOS cluster experiment including the target machines, management VM, and manual ONOS installation process. It discusses preparing the target machines by installing dependencies, Java, and manually extracting the ONOS binary. It also covers preparing the management VM by cloning the ONOS source code from Gerrit, checking out the 1.12.0 version, building ONOS, and installing additional tools for management.
The document provides information about National Chung Cheng University's 2016 Mobile All-IP Networking Laboratory and OpenStack cloud computing software. It defines OpenStack as open source cloud software that is mostly deployed as an infrastructure-as-a-service and combines compute, network and storage resources through a web portal and APIs. It also lists some major OpenStack releases and their included components, as well as examples of OpenStack usage by CERN and Yahoo Japan.
Overview of Distributed Virtual Router (DVR) in OpenStack/Neutron (vivekkonnect)
The document discusses distributed virtual routers (DVR) in OpenStack Neutron. It describes the high-level architecture of DVR, which distributes routing functions from network nodes to compute nodes to improve performance and scalability compared to legacy centralized routing. Key aspects covered include east-west and north-south routing mechanisms, configuration, agent operation modes, database extensions, scheduling, and support for services. Plans are outlined for enhancing DVR in upcoming OpenStack releases.
The document discusses developing network device drivers for embedded Linux. It covers key topics like socket buffers, network devices, communicating with network protocols and PHYs, buffer management, and differences between Ethernet and WiFi drivers. The outline lists these topics along with others such as throughput and design considerations. Prerequisites include C skills, Linux knowledge, and an understanding of networking and embedded driver development.
Tremashark is a tool for network debugging that collects event logs from multiple sources like packet captures, syslog outputs, and console logs. It combines these logs into a single timeline of events and allows users to analyze the logs using Wireshark. Tremashark is useful for debugging Trema-based OpenFlow controllers as it can collect packet data, system logs, and internal IPC messages between Trema modules.
DPDK is a set of drivers and libraries that allow applications to bypass the Linux kernel and access network interface cards directly for very high performance packet processing. It is commonly used for software routers, switches, and other network applications. DPDK can achieve over 11 times higher packet forwarding rates than applications using the Linux kernel network stack alone. While it provides best-in-class performance, DPDK also has disadvantages like reduced security and isolation from standard Linux services.
Design and Performance Characteristics of Tap-as-a-Service (soichi shigeta)
Tap-as-a-Service (TaaS) is an OpenStack extension that offers an API allowing tenants to monitor Neutron ports by mirroring traffic from source ports to a destination port. TaaS preserves tenant isolation by ensuring source and destination ports belong to the same tenant. It can be used for troubleshooting, security, and data analytics. The TaaS workflow involves creating a tap service instance with a destination port, then adding tap flows to associate source ports. Performance evaluation showed the isolated underlay design improved throughput with larger packet sizes compared to the shared underlay.
Developing a production OpenFlow controller with Trema (Yasunobu Chiba)
This document provides tips and common mistakes to avoid when developing an OpenFlow controller using the Trema framework. It discusses key things to know about OpenFlow and Trema, such as Trema being a programming framework rather than a full controller itself. It then presents a use case of developing a production OpenFlow controller to manage virtual networks across thousands of switches and hosts. The design uses a load balancer and three-tier architecture. An evaluation showed the controller could manage over 400 switches and 16,000 virtual networks per controller instance.
This document discusses deploying IPv6 on OpenStack. It provides an overview of IPv6, including that IPv6 addresses the shortage of IPv4 addresses by providing a vastly larger 128-bit address space. It describes IPv6 address types and allocation methods. It also discusses IPv6 configuration modes in OpenStack, including stateless address autoconfiguration (SLAAC) and DHCPv6 stateless and stateful modes. Additionally, it covers deployment options for IPv6 on OpenStack like dual stack, NAT64/DNS64, and network tunnels. It provides details on IPv6 address and router advertisement configuration in OpenStack.
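In SLAAC mode, the interface identifier is conventionally derived from the MAC address via modified EUI-64: flip the universal/local bit of the first octet and insert ff:fe in the middle. A small Python sketch of that derivation (the prefix and MAC below are made-up examples):

```python
import ipaddress

def slaac_address(prefix: str, mac: str) -> str:
    """Derive a modified EUI-64 SLAAC address from a /64 prefix and a MAC."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                       # flip universal/local bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert ff:fe in the middle
    net = ipaddress.IPv6Network(prefix)
    iid = int.from_bytes(eui64, "big")
    return str(ipaddress.IPv6Address(int(net.network_address) | iid))

# e.g. slaac_address("2001:db8::/64", "52:54:00:12:34:56")
```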
ONOS provides the control plane for software-defined networks, managing network components and running applications. It can run distributed across servers for high availability and scalability. The document introduces ONOS and its architecture, and provides steps to install ONOS, run it with Mininet, and interact with its REST API. Key applications like reactive forwarding are demonstrated.
This document summarizes the history and current state of Linux bridging. It discusses how bridging has evolved from Ethernet bridging in 1985 to modern standards like 802.1aq Shortest Path Bridging. It also outlines key topics in bridging including tunneling protocols, the spanning tree protocol for avoiding loops, and security features. The status section notes that VXLAN support is in the mainline kernel but that updates to spanning tree and additional security features may be added in future kernel versions.
VMworld 2013
Lenin Singaravelu, VMware
Haoqiang Zheng, VMware
Learn more about VMworld and register at http://www.vmworld.com/index.jspa?src=socmed-vmworld-slideshare
This presentation was shown at the OpenStack Online Meetup session on August 28, 2014. It is an update to the 2013 sessions and adds content on the services plugin and modular plugins, as well as an outlook on some Juno features such as DVR, HA, and IPv6 support.
This presentation introduces clustering and RedHat clustering. It defines a cluster as two or more computers that work together to perform a task. It distinguishes between hardware and software clusters, with hardware clusters being more expensive. The major software cluster types are high availability, load balancing, and high performance. The presentation concludes by advising attendees to download free documentation from RedHat's website to get started with RedHat clustering.
Solaris Cluster Roadshow day 1 technical presentation (xKinAnx)
This document provides an overview and agenda for Part 1 of the Solaris Cluster Roadshow in January 2007. It discusses the Solaris Cluster architecture, algorithms, and data services. Specifically, it covers the Sun Cluster building blocks, resource management infrastructure, agent development, manageability, and disaster recovery capabilities. It also summarizes the heartbeats, membership, configuration repository, quorum, and disk fencing algorithms used in Solaris Cluster. Finally, it describes failover and scalable data services.
Real Application Cluster (RAC) allows multiple computers to simultaneously run Oracle RDBMS while accessing a single database, providing clustering. RAC provides high availability, scalability, and ease of administration by making multiple instances transparent to users. Nodes must have identical environments. Oracle Clusterware manages node additions and removals. Instances from different nodes write to the same physical database. The presentation covers RAC architecture, components, startup sequence, single instance configuration, node eviction, and tips for monitoring and improving the RAC environment.
[Presentation] Introduction to open-source high-availability Pacemaker and application cases_20230703_v1.1F.pptx (ssuserf8b8bd1)
The document provides an overview of Pacemaker, an open source high availability cluster software. It discusses Pacemaker's architecture and components, including the messaging layer (Corosync), resource allocation layer, resource agents, and user interfaces. It also provides examples of using Pacemaker for applications like PostgreSQL and virtual machines. Finally, it briefly discusses the Kronosnet project and the future of Pacemaker 2.0.
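To make the layering concrete, a minimal pcs-based sketch of a two-node Pacemaker cluster hosting a floating IP and PostgreSQL might look like this (pcs 0.10-style syntax; node names, the IP, and resource names are invented for illustration):

```
pcs cluster setup mycluster node1 node2
pcs resource create vip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24 op monitor interval=30s
pcs resource create pgsql ocf:heartbeat:pgsql op monitor interval=30s
pcs constraint colocation add pgsql with vip INFINITY
pcs constraint order vip then pgsql
```

Here Corosync provides membership and messaging, while the resource agents (IPaddr2, pgsql) start, stop, and monitor the actual services.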
This document provides best practices for implementing and operating Oracle Real Application Clusters (RAC) with Oracle 10g. It covers planning best practices such as understanding the architecture, setting expectations, defining objectives, and project planning. Implementation best practices include installation, configuration, database creation, and application considerations. Operational best practices address backup/recovery, performance monitoring, and production migrations.
SFBay Area Solr Meetup - June 18th: Benchmarking Solr Performance (Lucidworks, archived)
The document discusses benchmarking the performance of SolrCloud clusters. It describes Timothy Potter's experience operating a large SolrCloud cluster at Dachis Group. It outlines a methodology for benchmarking indexing performance by varying the number of servers, shards, and replicas. Results show near-linear scalability as nodes are added. The document also introduces the Solr Scale Toolkit for deploying and managing SolrCloud clusters using Python and AWS. It demonstrates integrating Solr with tools like Logstash and Kibana for log aggregation and dashboards.
Sector is a distributed file system that stores files on local disks of nodes without splitting files. Sphere is a parallel data processing engine that processes data locally using user-defined functions like MapReduce. Sector/Sphere is open source, supports fault tolerance through replication, and provides security through user accounts and encryption. Performance tests show Sector/Sphere outperforms Hadoop for sorting and malware analysis benchmarks by processing data locally.
Sector is a distributed file system that stores files on local disks of nodes without splitting files. Sphere is a parallel data processing engine that processes data locally using user-defined functions like MapReduce. Sector/Sphere is open source, written in C++, and provides high performance distributed storage and processing for large datasets across wide areas using techniques like UDT for fast data transfer. Experimental results show it outperforms Hadoop for certain applications by exploiting data locality.
OpenStack Neutron Havana Overview - Oct 2013 (Edgar Magana)
An overview of OpenStack Neutron, presented at three meetups in NYC, Connecticut, and Philadelphia during October 2013 by Edgar Magana from PLUMgrid.
Learning From Real Practice of Providing Highly Available Hybrid Cloud Servic... (LF Events)
Fujitsu applies OpenStack to provide a hybrid cloud service. In this presentation, Miyashita introduces lessons learned from the real practice of providing a highly available hybrid cloud service with OpenStack Neutron. He talks about issues and solutions that Fujitsu faced while providing a hybrid (public/private) cloud service:
- How to build multiple OpenStack-based datacenters for a public cloud with high availability
- How to build a hybrid cloud environment (connecting the public cloud and on-premise datacenters)
- Highly available functionality spanning multiple datacenters (e.g., load-balancing service, security groups)
This presentation was delivered at LinuxCon Japan 2016 by Kazuhiro Miyashita.
A domain services cluster provides centralized services like the Grid Infrastructure Management Repository (GIMR), Trace File Analyzer Collector (TFA), and storage management through ASM or IO services to member clusters. Installing a domain services cluster is similar to a standard cluster with additional configuration for optional services like Rapid Home Provisioning. Member clusters are then installed using a manifest file to connect them to the domain services. This allows for centralized management of multiple clusters and optimized storage usage.
Solr Compute Cloud - An Elastic SolrCloud Infrastructure Nitin S
Scaling search platforms for serving hundreds of millions of documents with low latency and high throughput workloads at an optimized cost is an extremely hard problem. BloomReach has implemented Sc2, which is an elastic Solr infrastructure for Big Data applications, supporting heterogeneous workloads and hosted in the cloud. It dynamically grows/shrinks search servers to provide application and pipeline level isolation, NRT search and indexing, latency guarantees, and application-specific performance tuning. In addition, it provides various high availability features such as differential real-time streaming, disaster recovery, context aware replication, and automatic shard and replica rebalancing, all with a zero downtime guarantee for all consumers. This infrastructure currently serves hundreds of millions of documents in millisecond response times with a load ranging in the order of 200-300K QPS.
This presentation will describe an innovate implementation of scaling Solr in an elastic fashion. It will review the architecture and take a deep dive into how each of these components interact to make the infrastructure truly elastic, real time, and robust while serving latency needs.
The document discusses Solr Compute Cloud (SC2), an elastic Solr infrastructure developed by BloomReach to address challenges of scaling search platforms for big data applications. SC2 dynamically provisions Solr clusters in the cloud for pipelines and indexing jobs, providing isolation. It ensures latency guarantees, dynamic scaling, high availability and disaster recovery. SC2 addresses issues BloomReach faced with a shared cluster approach like throughput limitations, stability problems and indexing challenges.
Solr Compute Cloud – An Elastic Solr Infrastructure: Presented by Nitin Sharm...Lucidworks
Solr Compute Cloud (SC2) is an elastic Solr infrastructure that allows for dynamic provisioning of Solr clusters on demand. This allows each search pipeline or job to have its own isolated cluster, improving stability, throughput, and cost optimization. The key benefits of SC2 are pipeline isolation, dynamic scaling, production cluster safeguards, and built-in high availability and disaster recovery features through technologies like the Solr HAFT service.
This document discusses Docker networking and provides an overview of its control plane and data plane components. The control plane uses a gossip-based protocol for decentralized event dissemination and failure detection across nodes. The data plane uses overlay networking with Linux bridges and VXLAN interfaces to provide network connectivity between containers on different Docker hosts. Load balancing for internal and external traffic is implemented using IPVS for virtual IP addresses associated with Docker services.
Docker Networking: Control plane and Data planeDocker, Inc.
The document discusses Docker networking and provides an overview of its control plane and data plane components. The control plane uses a gossip-based protocol for decentralized event dissemination and failure detection across nodes. The data plane uses overlay networking with Linux bridges and VXLAN interfaces to provide network connectivity between containers on different Docker hosts. Load balancing for internal and external traffic is implemented using IPVS for virtual IP addresses associated with Docker services.
Zimbra Single Server Cluster Installation Guidegerd moser
The document provides instructions for configuring a single-node Red Hat cluster with Zimbra Collaboration Suite (ZCS) for high availability. Key steps include:
1. Installing cluster software and configuring users/groups on the active and standby nodes.
2. Installing ZCS on both nodes, configuring for the cluster hostname rather than node hostnames.
3. Running postinstall scripts on the standby node first to prepare the installation, then the active node to move data to shared storage.
4. Using the cluster configuration script on the active node to generate configuration files and install them on both nodes.
Kubernetes Multitenancy Karl Isenberg - KubeCon NA 2019Karl Isenberg
Cruise has been working on self-driving cars for six years and growing exponentially for most of that time. Two years ago they started using Kubernetes, betting on namespace-level multitenancy to provide isolation between teams and projects. Today they have over 40 internal tenants, 100,000 pods, 4,000 nodes, and… an embarrassing number of KubeDNS replicas.
This session will take you through the motivations, story, and results of migrating to multitenant Kubernetes, along with some hard-earned Pro Tips from the trenches.
You’ll also learn about the open source tooling they built around Spinnaker, Vault, Google Cloud, and Istio in order to integrate with our multitenant Kubernetes.
Come see how they went from barely isolated to very isolated and saved a few million dollars doing it!
Container technologies use namespaces and cgroups to provide isolation between processes and limit resource usage. Docker builds on these technologies using a client-server model and additional features like images, containers, and volumes to package and run applications reliably and at scale. Kubernetes builds on Docker to provide a platform for automating deployment, scaling, and operations of containerized applications across clusters of hosts. It uses labels and pods to group related containers together and services to provide discovery and load balancing for pods.
Software Defined Networking is seeing a lot of momentum these days. With server virtualization solving the virtual machines problem, and large scale object storage solving the distributed storage challenge, SDN is seen as key in virtual networking.
In this talk we don't try to define SDN but rather dive straight into what in our opinion is the core enabled of SDN: the virtual switch OVS.
OVS can help manage VLAN for guest network isolation, it can re-route any traffic at L2-L4 by keeping forwarding tables controlled by a remote controller (Openfow controller). We show these few OVS capabilities and highlight how they are used in CloudStack and Xen.
Xen Summit presentation of CloudStack and Software Defined Networks. OpenVswitch is the default bridge in Xen and supported in XenServer and Xen Cloud Platform
Networking in Docker EE 2.0 with Kubernetes and SwarmAbhinandan P.b
The presentation is about the operator goal from networking perspective and how it is influenced by both swarm and kubernetes on the Docker EE platform
Taking AI to the Next Level in Manufacturing.pdfssuserfac0301
Read Taking AI to the Next Level in Manufacturing to gain insights on AI adoption in the manufacturing industry, such as:
1. How quickly AI is being implemented in manufacturing.
2. Which barriers stand in the way of AI adoption.
3. How data quality and governance form the backbone of AI.
4. Organizational processes and structures that may inhibit effective AI adoption.
6. Ideas and approaches to help build your organization's AI strategy.
Unlocking Productivity: Leveraging the Potential of Copilot in Microsoft 365, a presentation by Christoforos Vlachos, Senior Solutions Manager – Modern Workplace, Uni Systems
In the rapidly evolving landscape of technologies, XML continues to play a vital role in structuring, storing, and transporting data across diverse systems. The recent advancements in artificial intelligence (AI) present new methodologies for enhancing XML development workflows, introducing efficiency, automation, and intelligent capabilities. This presentation will outline the scope and perspective of utilizing AI in XML development. The potential benefits and the possible pitfalls will be highlighted, providing a balanced view of the subject.
We will explore the capabilities of AI in understanding XML markup languages and autonomously creating structured XML content. Additionally, we will examine the capacity of AI to enrich plain text with appropriate XML markup. Practical examples and methodological guidelines will be provided to elucidate how AI can be effectively prompted to interpret and generate accurate XML markup.
Further emphasis will be placed on the role of AI in developing XSLT, or schemas such as XSD and Schematron. We will address the techniques and strategies adopted to create prompts for generating code, explaining code, or refactoring the code, and the results achieved.
The discussion will extend to how AI can be used to transform XML content. In particular, the focus will be on the use of AI XPath extension functions in XSLT, Schematron, Schematron Quick Fixes, or for XML content refactoring.
The presentation aims to deliver a comprehensive overview of AI usage in XML development, providing attendees with the necessary knowledge to make informed decisions. Whether you’re at the early stages of adopting AI or considering integrating it in advanced XML development, this presentation will cover all levels of expertise.
By highlighting the potential advantages and challenges of integrating AI with XML development tools and languages, the presentation seeks to inspire thoughtful conversation around the future of XML development. We’ll not only delve into the technical aspects of AI-powered XML development but also discuss practical implications and possible future directions.
Best 20 SEO Techniques To Improve Website Visibility In SERPPixlogix Infotech
Boost your website's visibility with proven SEO techniques! Our latest blog dives into essential strategies to enhance your online presence, increase traffic, and rank higher on search engines. From keyword optimization to quality content creation, learn how to make your site stand out in the crowded digital landscape. Discover actionable tips and expert insights to elevate your SEO game.
TrustArc Webinar - 2024 Global Privacy SurveyTrustArc
How does your privacy program stack up against your peers? What challenges are privacy teams tackling and prioritizing in 2024?
In the fifth annual Global Privacy Benchmarks Survey, we asked over 1,800 global privacy professionals and business executives to share their perspectives on the current state of privacy inside and outside of their organizations. This year’s report focused on emerging areas of importance for privacy and compliance professionals, including considerations and implications of Artificial Intelligence (AI) technologies, building brand trust, and different approaches for achieving higher privacy competence scores.
See how organizational priorities and strategic approaches to data security and privacy are evolving around the globe.
This webinar will review:
- The top 10 privacy insights from the fifth annual Global Privacy Benchmarks Survey
- The top challenges for privacy leaders, practitioners, and organizations in 2024
- Key themes to consider in developing and maintaining your privacy program
How to Get CNIC Information System with Paksim Ga.pptxdanishmna97
Pakdata Cf is a groundbreaking system designed to streamline and facilitate access to CNIC information. This innovative platform leverages advanced technology to provide users with efficient and secure access to their CNIC details.
Things to Consider When Choosing a Website Developer for your Website | FODUUFODUU
Choosing the right website developer is crucial for your business. This article covers essential factors to consider, including experience, portfolio, technical skills, communication, pricing, reputation & reviews, cost and budget considerations and post-launch support. Make an informed decision to ensure your website meets your business goals.
Your One-Stop Shop for Python Success: Top 10 US Python Development Providersakankshawande
Simplify your search for a reliable Python development partner! This list presents the top 10 trusted US providers offering comprehensive Python development services, ensuring your project's success from conception to completion.
HCL Notes and Domino License Cost Reduction in the World of DLAUpanagenda
Webinar Recording: https://www.panagenda.com/webinars/hcl-notes-and-domino-license-cost-reduction-in-the-world-of-dlau/
The introduction of DLAU and the CCB & CCX licensing model caused quite a stir in the HCL community. As a Notes and Domino customer, you may have faced challenges with unexpected user counts and license costs. You probably have questions on how this new licensing approach works and how to benefit from it. Most importantly, you likely have budget constraints and want to save money where possible. Don’t worry, we can help with all of this!
We’ll show you how to fix common misconfigurations that cause higher-than-expected user counts, and how to identify accounts which you can deactivate to save money. There are also frequent patterns that can cause unnecessary cost, like using a person document instead of a mail-in for shared mailboxes. We’ll provide examples and solutions for those as well. And naturally we’ll explain the new licensing model.
Join HCL Ambassador Marc Thomas in this webinar with a special guest appearance from Franz Walder. It will give you the tools and know-how to stay on top of what is going on with Domino licensing. You will be able lower your cost through an optimized configuration and keep it low going forward.
These topics will be covered
- Reducing license cost by finding and fixing misconfigurations and superfluous accounts
- How do CCB and CCX licenses really work?
- Understanding the DLAU tool and how to best utilize it
- Tips for common problem areas, like team mailboxes, functional/test users, etc
- Practical examples and best practices to implement right away
Generating privacy-protected synthetic data using Secludy and MilvusZilliz
During this demo, the founders of Secludy will demonstrate how their system utilizes Milvus to store and manipulate embeddings for generating privacy-protected synthetic data. Their approach not only maintains the confidentiality of the original data but also enhances the utility and scalability of LLMs under privacy constraints. Attendees, including machine learning engineers, data scientists, and data managers, will witness first-hand how Secludy's integration with Milvus empowers organizations to harness the power of LLMs securely and efficiently.
OpenID AuthZEN Interop Read Out - AuthorizationDavid Brossard
During Identiverse 2024 and EIC 2024, members of the OpenID AuthZEN WG got together and demoed their authorization endpoints conforming to the AuthZEN API
Driving Business Innovation: Latest Generative AI Advancements & Success StorySafe Software
Are you ready to revolutionize how you handle data? Join us for a webinar where we’ll bring you up to speed with the latest advancements in Generative AI technology and discover how leveraging FME with tools from giants like Google Gemini, Amazon, and Microsoft OpenAI can supercharge your workflow efficiency.
During the hour, we’ll take you through:
Guest Speaker Segment with Hannah Barrington: Dive into the world of dynamic real estate marketing with Hannah, the Marketing Manager at Workspace Group. Hear firsthand how their team generates engaging descriptions for thousands of office units by integrating diverse data sources—from PDF floorplans to web pages—using FME transformers, like OpenAIVisionConnector and AnthropicVisionConnector. This use case will show you how GenAI can streamline content creation for marketing across the board.
Ollama Use Case: Learn how Scenario Specialist Dmitri Bagh has utilized Ollama within FME to input data, create custom models, and enhance security protocols. This segment will include demos to illustrate the full capabilities of FME in AI-driven processes.
Custom AI Models: Discover how to leverage FME to build personalized AI models using your data. Whether it’s populating a model with local data for added security or integrating public AI tools, find out how FME facilitates a versatile and secure approach to AI.
We’ll wrap up with a live Q&A session where you can engage with our experts on your specific use cases, and learn more about optimizing your data workflows with AI.
This webinar is ideal for professionals seeking to harness the power of AI within their data management systems while ensuring high levels of customization and security. Whether you're a novice or an expert, gain actionable insights and strategies to elevate your data processes. Join us to see how FME and AI can revolutionize how you work with data!
Unlock the Future of Search with MongoDB Atlas_ Vector Search Unleashed.pdfMalak Abu Hammad
Discover how MongoDB Atlas and vector search technology can revolutionize your application's search capabilities. This comprehensive presentation covers:
* What is Vector Search?
* Importance and benefits of vector search
* Practical use cases across various industries
* Step-by-step implementation guide
* Live demos with code snippets
* Enhancing LLM capabilities with vector search
* Best practices and optimization strategies
Perfect for developers, AI enthusiasts, and tech leaders. Learn how to leverage MongoDB Atlas to deliver highly relevant, context-aware search results, transforming your data retrieval process. Stay ahead in tech innovation and maximize the potential of your applications.
#MongoDB #VectorSearch #AI #SemanticSearch #TechInnovation #DataScience #LLM #MachineLearning #SearchTechnology
For the full video of this presentation, please visit: https://www.edge-ai-vision.com/2024/06/building-and-scaling-ai-applications-with-the-nx-ai-manager-a-presentation-from-network-optix/
Robin van Emden, Senior Director of Data Science at Network Optix, presents the “Building and Scaling AI Applications with the Nx AI Manager,” tutorial at the May 2024 Embedded Vision Summit.
In this presentation, van Emden covers the basics of scaling edge AI solutions using the Nx tool kit. He emphasizes the process of developing AI models and deploying them globally. He also showcases the conversion of AI models and the creation of effective edge AI pipelines, with a focus on pre-processing, model conversion, selecting the appropriate inference engine for the target hardware and post-processing.
van Emden shows how Nx can simplify the developer’s life and facilitate a rapid transition from concept to production-ready applications.He provides valuable insights into developing scalable and efficient edge AI solutions, with a strong focus on practical implementation.
Full-RAG: A modern architecture for hyper-personalizationZilliz
Mike Del Balso, CEO & Co-Founder at Tecton, presents "Full RAG," a novel approach to AI recommendation systems, aiming to push beyond the limitations of traditional models through a deep integration of contextual insights and real-time data, leveraging the Retrieval-Augmented Generation architecture. This talk will outline Full RAG's potential to significantly enhance personalization, address engineering challenges such as data management and model training, and introduce data enrichment with reranking as a key solution. Attendees will gain crucial insights into the importance of hyperpersonalization in AI, the capabilities of Full RAG for advanced personalization, and strategies for managing complex data integrations for deploying cutting-edge AI solutions.
GraphRAG for Life Science to increase LLM accuracyTomaz Bratanic
GraphRAG for life science domain, where you retriever information from biomedical knowledge graphs using LLMs to increase the accuracy and performance of generated answers
GraphRAG for Life Science to increase LLM accuracy
Sun Cluster 3.0 Introduction
1. Sun Cluster 3.0 Introduction
Yong Yan
Sun Support Engineer
Sun Services North China
2. Pre-introduction
• How long have you supported Sun?
• Do you have SunCluster experience?
• Have you installed a cluster (2.2, 3.0, or others)?
• What do you expect from this session?
Sun Proprietary/Confidential: Internal Use Only
3. Sun Cluster 3.0
• Sun Cluster 3.0 Overview
• Sun Cluster 3.0 Basic Concepts
• Sun Cluster 3.0 Install
• Sun Cluster 3.0 Admin Commands
• Difference between 3.0 & 3.1
4. Sun Cluster 3.0 Overview
SunCluster 3.0 Main Components
SunCluster 3.0 Architecture
Sun Cluster Application Support
5. SunCluster Main Components
Solaris 8 OE
HA Framework
Global Components
Userland Components
6. HA Framework Components
Communication between domains
Heartbeat / data / application-level messages
Persistence of cluster state
Consistent view of the cluster configuration
Cluster membership
Quorum / fencing of faulted nodes
7. Cluster Global Components
• Global Devices
Cluster-wide namespace
• Global File Service
Cluster-wide file service
• Global Network Service
Single IP address for the cluster
Scalable services
Load balancing
8. Cluster Userland Components
● Command line interfaces
● SunPlex Manager – administration tool
● Sun Management Center module – monitoring tool
● Agents
● Development libraries / API
● SunPlex Agent Builder – development tool
● Utilities – scvxinstall, diagnostic toolkit
9. Differences from Other Clusters
● SunCluster 3.0 is a tightly coupled cluster.
● It differs from other cluster software that runs over Solaris, such as VCS and SunCluster 2.2:
VCS and SunCluster 2.2 are userland software;
SunCluster 3.0 is integrated with Solaris, an extension of Solaris.
10. Differences from Other Clusters
● Interconnects – low-latency, high-bandwidth links
Types of interconnect technology:
Fast Ethernet, Gigabit Ethernet, SCI
Number of interconnects between nodes:
Sun Cluster 3.0: min 2, max 6
Sun Cluster 2.2: min and max 2
VCS 1.x: min 1, max 2
11. SunCluster 3.0 Architecture
(Architecture diagram: a userland layer (Agents, API, Resource Group Manager, Monitor) sits above a kernel layer containing the Global Network Service, the TCP/IP stack, the Cluster Membership Monitor, the Cluster Transport, the Cluster Configuration Repository, the Global File Service, Global Device Access, Volume Management, and the HA Framework, which connect to storage, the public network, and the other cluster nodes.)
12. Cluster H/W Components
Redundant servers / domains
Redundant storage
Redundant public network access
Redundant private communications
13. Cluster H/W Components
(Diagram: host-A and host-B connected by redundant heartbeat channels; each host attached through switches SW-A and SW-B to mirrored RAID arrays Storage-A and Storage-B; both hosts are on the public network.)
14. Sun Cluster Application Support
Highly Available Data Service Support
Oracle, Informix, and Sybase databases
NFS
SAP
SunONE Proxy Server
SunONE Directory Server
SunONE Web Server
Apache Web Server
….
15. Sun Cluster Application Support
Scalable Data Service Support
SunONE Web Server
Apache Web Server
SAP
Broadvision
…
Parallel Database Support
● Oracle OPS/RAC, Sybase
General Data Service
16. Sun Cluster 3.0 Basic Concepts
Resource Type (RT): C++ class
Resource: C++ object (instance of an RT)
Resource Group (RG): C++ structure (collection of Resources)
Application Service: C++ program (collection of Resource Groups)
17. Sun Cluster 3.0 Basic Concepts
Data Service Agent
(collection of Resource Types)
• GFS/PxFS/CFS
The global file system, a new feature of SC 3.0, mounted with the global option.
• Global Device
A unique name for a device across the cluster nodes.
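As an illustration, a global file system is normally mounted through /etc/vfstab with the global mount option; the device number and mount point below are hypothetical, not taken from the deck:

```shell
# /etc/vfstab entry for a globally mounted UFS file system (sketch)
/dev/global/dsk/d4s0  /dev/global/rdsk/d4s0  /global/nfs  ufs  2  yes  global,logging
```

With this entry in place on every node, the file system is visible at the same path cluster-wide.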
18. Sun Cluster 3.0 Basic Concepts
Device Group
(Management unit for disks; device groups are independent of Resource Groups.)
19. SunCluster 3.0 Install
● SunCluster Server/Storage Matrix
● Network Matrix (interconnect, public network)
● SunCluster 3.x S/W Matrix
20. SunCluster 3.0 Install
● SunCluster 3.x install steps
1. Install the admin station
2. Install the TC (terminal concentrator)
3. Install the cluster nodes
4. Install the Cluster Framework on the nodes
21. SunCluster 3.x Install (cont.)
● SunCluster 3.x install steps
5. Install VxVM
6. Install SDS or SVM
7. Install the Data Service Agents
8. Configure the Data Services
9. Encapsulate the root disk and mirror the root disk
SunCluster 3.x Install cookbook
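Step 4, installing the framework, is driven by the interactive scinstall utility from the product media; the media path below is illustrative only:

```shell
# on the first node, from the Sun Cluster 3.0 media (path is a sketch)
cd /cdrom/suncluster_3_0/SunCluster_3.0/Tools
./scinstall
# scinstall then prompts for the cluster name, node list,
# transport adapters, and initial quorum configuration
```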
22. SunCluster 3.x Admin Commands
scinstall
scrgadm
scswitch
scsetup
scconf
scstat
pnmset
scshutdown
23. SunCluster 3.x Admin Commands
scinstall
# scinstall -- install software
# scinstall -pv -- display the release and package versioning information for the SunCluster software installed on the node
• scrgadm
Configure and manage Resources and Resource Groups
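To make scrgadm concrete, here is a sketch of how a failover NFS group like the nfs-rg shown later in the scstat output might be created; the resource type names and exact group layout are assumptions, not taken from the deck:

```shell
# register the resource types (assumes the SUNW.nfs agent is installed)
scrgadm -a -t SUNW.nfs
# create a failover resource group hosted on both nodes
scrgadm -a -g nfs-rg -h erp-db1,erp-db2
# add a logical hostname resource for the floating IP
scrgadm -a -L -g nfs-rg -l erp-db
# add the NFS resource itself
scrgadm -a -j nfs-res -g nfs-rg -t SUNW.nfs
# bring the group online on erp-db1
scswitch -z -g nfs-rg -h erp-db1
```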
24. SunCluster 3.x Admin Commands
scsetup
Interactive cluster configuration tool
scconf
Update the SunCluster software configuration
# scconf -pvv
# scconf -c -q reset
# scconf -a -T .
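scconf is also the tool that adds a quorum device. The DID device below (d9) matches the quorum device shown later in the scstat output, but treat the exact invocation as a sketch:

```shell
# add shared DID device d9 as a quorum device
scconf -a -q globaldev=d9
# after the last node has joined, clear install mode so quorum is enforced
scconf -c -q reset
```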
27. SunCluster 3.x Admin Commands
scstat
Check the cluster status; can be run on any node
# scstat
-- Cluster Nodes -- (status of each node in the cluster)
Node name Status
--------- ------
Cluster node: erp-db1 Online
Cluster node: erp-db2 Online
• Online: the node has joined the cluster
• Offline: the node is not under cluster software control
28. SunCluster 3.x Admin Commands
scstat (cont.)
-- Cluster Transport Paths -- (status of the heartbeat connections)
Endpoint Endpoint Status
-------- -------- ------
Transport path: erp-db1:hme1 erp-db2:hme1 Path online
Transport path: erp-db1:hme0 erp-db2:hme0 Path online
29. SunCluster 3.x Admin Commands
scstat (cont.)
-- Quorum Summary -- (status of the quorum configuration)
Quorum votes possible: 3
Quorum votes needed: 2
Quorum votes present: 3
-- Quorum Votes by Node --
Node Name Present Possible Status
--------- ------- -------- ------
Node votes: erp-db1 1 1 Online
Node votes: erp-db2 1 1 Online
-- Quorum Votes by Device -- (configured quorum device information)
Device Name Present Possible Status
----------- ------- -------- ------
Device votes: /dev/did/rdsk/d9s2 1 1 Online
------------------------------------------------------------------
30. SunCluster 3.x Admin Commands
scstat (cont.)
--- Device Group Servers -- (status of the device groups)
Device Group Primary Secondary
------------ ------- ---------
Device group servers: rmt/1 - -
Device group servers: rmt/2 - -
Device group servers: rmt/3 - -
Device group servers: erpora erp-db1 erp-db2
Device group servers: erpapp erp-db2 erp-db1
• erpora is currently active on erp-db1
• erpapp is currently active on erp-db2
31. SunCluster 3.x Admin Commands
scstat (cont.)
-- Device Group Status --
Device Group Status
------------ ------
Device group status: rmt/1 Offline
Device group status: rmt/2 Offline
Device group status: rmt/3 Offline
Device group status: erpora Online
Device group status: erpapp Online
------------------------------------------------------------------
32. SunCluster 3.x Admin Commands
scstat (cont.)
-- Resource Groups and Resources -- (resource and resource group configuration)
Group Name Resources
---------- ---------
Resources: nfs-rg erp-db nfs-res oracle-listener oracle-prod hastorage applprod
Resources: app-rg erp-app app-res hastorage-app
(The list of resources contained in each resource group.)
-- Resource Groups -- (resource group status)
Group Name Node Name State
---------- --------- -----
Group: nfs-rg erp-db1 Online
Group: nfs-rg erp-db2 Offline
Group: app-rg erp-db2 Online
Group: app-rg erp-db1 Offline
------------------------------------------------------------------
33. SunCluster 3.x Admin Commands
scstat (cont.)
--- Resources --
Resource Name Node Name State Status Message
------------- --------- ----- --------------
Resource: erp-db erp-db1 Online Online - LogicalH.
Resource: erp-db erp-db2 Offline Offline - Logical.
Resource: nfs-res erp-db1 Online Online - Service .
Resource: nfs-res erp-db2 Offline Offline - Complete
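Output like the above lends itself to quick scripted health checks. A minimal sketch, assuming the node-status format shown on the slides; the sample file and node names are illustrative:

```shell
# capture a sample of the scstat node section (written by hand here for illustration)
cat > /tmp/scstat.out <<'EOF'
-- Cluster Nodes --
Node name Status
--------- ------
Cluster node: erp-db1 Online
Cluster node: erp-db2 Offline
EOF
# print any cluster node whose status is not Online
awk '/^Cluster node:/ && $NF != "Online" { print $3 }' /tmp/scstat.out
```

On a live cluster the same awk filter can be applied to scstat output directly instead of a saved file.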
37. Sun Cluster 3.0 Admin Commands
scshutdown
# scshutdown (run on one node only)
Shuts down all the nodes in the cluster.
Steps to shut down a single node:
# scswitch -S -h <nodename>
# shutdown -i0 -g0 -y
38. Sun Cluster 3.0 Admin Commands
SunCluster boot
SunCluster starts automatically when the OS boots.
Boot a node into non-cluster mode:
ok boot -x
39. Difference Between 3.0 & 3.1
Public network:
SC3.0: NAFO (local-mac-address?=false)
SC3.1: IPMP (local-mac-address?=true)
● More Data Service Agents
● Expanded functions in scsetup
● More agent features
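To illustrate the public-network difference: under SC3.0 a NAFO backup group is created with pnmset, while under SC3.1 IPMP groups are declared in the interface hostname files. The adapter names, group names, and the pnmset invocation below are all a sketch, not taken from the deck:

```shell
# SC3.0: create NAFO group nafo0 over two adapters (invocation is a sketch)
pnmset -c nafo0 -o create qfe0 qfe1

# SC3.1: /etc/hostname.qfe0 placing the interface in IPMP group sc_ipmp0
erp-db1 netmask + broadcast + group sc_ipmp0 up
```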
40. Related Directories and Files
/opt/SUNWcluster (client)
/etc/cluster/conf
/usr/lib/sc
/var/opt/cluster
…
41. Reference URLs
For more information:
● SunCluster 3.0 Concepts Guide: http://docs.sun.com
● Architecture and API whitepapers: http://www.sun.com/clusters
● Solaris software information: http://www.sun.com/Solaris
● BluePrint: Designing Enterprise Solutions with Sun[tm] Cluster 3.0 (ISBN): http://www.sun.com/blueprints
42. Q&A