Practical Tips for Novell Cluster Services
 

This session will use Novell Open Enterprise Server 2 SP2 to demonstrate how to cluster critical services—from NSS and Novell iPrint to Novell GroupWise, AFP and beyond. We'll cover the new features of Novell Cluster Services in the latest release of Novell Open Enterprise Server, and we'll show you how you can ensure consistency by using AutoYaST to build your nodes. This will be a practical session, so be prepared for a few thrills and spills along the way!

Speakers:
Tim Heywood, CTO, NDS8
Mark Robinson, CTO Linux, NDS8

  • Explain about building combined iso
  • Discuss flags for vmware-vdiskmanager – especially -t. Disk types:
    • 0: single growable virtual disk
    • 1: growable virtual disk split in 2GB files
    • 2: preallocated virtual disk
    • 3: preallocated virtual disk split in 2GB files
    • 4: preallocated ESX-type virtual disk
    • 5: compressed disk optimized for streaming
    Discuss path to SAN virtual disks. Mention different VMware versions.
  • Tell audience that the autoyast build may take a while – we'll concentrate on existing two nodes for most of the demo.
  • Offer copy of autoyast profiles used in demos.
  • Talk about creating the two pools and why. DEMO: Create pool1/vol1 Create pool1_shd/vol1_shd
  • Script logs are rolling logs now – show all operations on this node for a particular resource

Practical Tips for Novell Cluster Services – Presentation Transcript

  • Practical Tips for Novell ® Cluster Services Mark Robinson CTO Linux, NDS8 [email_address] Tim Heywood CTO, NDS8 [email_address]
  • Agenda
    • Introduction
    • Cluster Services in OES2
    • Our Environment
    • AutoYaST
    • Cluster Build Methodology
    • Creating Resources
    • Cluster Management
    • Troubleshooting
  • Introduction
  • Introduction
    • Mark Robinson
      • Linux Geek
      • Working with SUSE ® since 1998
      • Working with OES since OES1 Beta 5
      • CLP, CLE, NCE ES, CNI, etc.
      • Ex-SysOp
    • Tim Heywood
      • Working with Novell ® since ????
      • Working with OES since OES1 Beta 5
      • CNE, MCNE, CNI(ish)
      • Novell Knowledge Partner (SysOp)
  • Introduction
    • NDS8 Network Design and Support Ltd.
      • Platinum Consulting Partner
      • Based in Edinburgh, work worldwide
      • Specialities:
        • Linux
        • Workgroup
        • SRM
  • Cluster Services in OES2
  • Cluster Services in OES2
    • New features are Linux only
    • New from OES2 FCS on:
      • Resource monitoring
      • XEN virtualization support
      • x86_64 platform support
        • Including mixed 32/64 bit node support
      • Dynamic Storage Technology
  • What's new in SP1/2?
    • Major rewrite of cluster code for SP2
      • Removed NetWare ® translation layer
      • Much faster
      • Much lower system load
      • Typical load average of 0.2!
    • New/improved clustering for:
      • iFolder 3
      • AFP
    • NCP virtual server for POSIX filesystem resources
  • Types of Clusters
    • Traditional cluster
      • Servers (nodes)
      • Resources
        • NSS
        • GroupWise ®
        • iPrint
    • XEN cluster
      • Dom0 hosts (nodes)
      • XEN guests (DomU) resources
      • Each resource is a server in its own right
      • Live migration with para-virtualised DomU
  • XEN Cluster Architecture (diagram): three cluster nodes running XEN Dom0 share an OCFS2 LUN holding the DomU files; each resource is a DomU guest (Linux iPrint, Linux iFolder, Linux GroupWise, NetWare pCounter) that can live-migrate between nodes
  • Our Environment
  • Our Environment
    • VMware Workstation based
    • VMware shared disk as an alternative to iSCSI
    • Virtual Machines
      • Resource Server
      • Node 1 (built, in the cluster)
      • Node 2 (to be joined to cluster)
      • Node 3 (to be built)
    • SUSE ® Linux Enterprise Server 10 SP3/OES2 SP2 combined iso
  • VMware Setup
    • Create disks standalone
    • Add config to node vmx files
    disk.locking = "false"
    diskLib.dataCacheMaxSize = "0"
    scsi1.present = "TRUE"
    scsi1.sharedBus = "none"
    scsi1.virtualDev = "lsilogic"
    scsi1.pciSlotNumber = "35"
    scsi1:0.present = "TRUE"
    scsi1:0.fileName = "cluster-lun0.vmdk"
    scsi1:0.mode = "independent-persistent"
    scsi1:0.redo = ""
    scsi1:1.present = "TRUE"
    scsi1:1.fileName = "cluster-lun1.vmdk"
    scsi1:1.mode = "independent-persistent"
    scsi1:1.redo = ""

    vmware-vdiskmanager -c -s 100mb -a lsilogic -t 2 cluster-lun0.vmdk
    vmware-vdiskmanager -c -s 1gb -a lsilogic -t 2 cluster-lun1.vmdk
  • Our Environment (diagram): Resource Server (tree master, iManager, installation services, AutoYaST, SMT), VMware shared-disk storage, and the OES2 SP2 cluster nodes
  • Our Environment (diagram): as above, with a third node being built from the Resource Server via AutoYaST
  • AutoYaST
  • Why AutoYaST?
    • Repeatable (exactly)
      • No “human element”
    • XML forms part of Documentation
    • Drink coffee (or suitable non-caffeinated beverage) while server builds itself!
    • Multiple simultaneous builds
      • Stagger by at least 15 minutes
    • Easy to expand cluster with new nodes
    • Helps with DR
  • Why AutoYaST?
    • What will AutoYaST do?
      • Disk partitioning
      • Software patterns
      • Network configuration (including VLAN, bonding etc)
      • OES services
        • eDirectory ™ – new or existing tree
        • NSS
        • NCS
      • Security lockdown
      • Scripts/Complete config files to do the rest
      • At the end of the install we will migrate an NSS resource to the new node with no additional configuration!
  • AutoYaST – New Cluster

    <ncs>
      <admin_context>cn=admin.o=novell</admin_context>
      <admin_password>novell</admin_password>
      <cluster_dn>cn=cluster,ou=resources,o=novell</cluster_dn>
      <cluster_ip>10.0.0.100</cluster_ip>
      <config_type>New Cluster</config_type>
      <ldap_ip_address>node IP,LDAP server IP</ldap_ip_address>
      <ldap_secure_port config:type="integer">636</ldap_secure_port>
      <server_name>nodename</server_name>
      <start>Later</start>
      <sbd_dev>sdx</sbd_dev>
      <sbd_dev2>sdy</sbd_dev2>
    </ncs>
  • AutoYaST – Existing Cluster

    <ncs>
      <admin_context>cn=admin.o=novell</admin_context>
      <admin_password>novell</admin_password>
      <cluster_dn>cn=cluster,ou=resources,o=novell</cluster_dn>
      <cluster_ip></cluster_ip>
      <config_type>Existing Cluster</config_type>
      <ldap_ip_address>node IP,LDAP server IP</ldap_ip_address>
      <ldap_secure_port config:type="integer">636</ldap_secure_port>
      <server_name>nodename</server_name>
      <start>Later</start>
      <sbd_dev></sbd_dev>
      <sbd_dev2></sbd_dev2>
    </ncs>
  • Demo AutoYaST
  • Cluster Build Methodology
  • Cluster Build Methodology
    • Start with a Resource Server
      • iManager
      • Network Installation Server – HTTP or NFS
      • AutoYaST repository (can be password protected on HTTP)
      • SMT for patching
      • Magic PiXiEs server
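The pieces above come together at boot time as linuxrc parameters pointing the new node at the Resource Server. A hypothetical example (hostnames and paths are invented for illustration, not from the session):

```shell
# Boot options for an unattended node build – served from a PXE append
# line or typed at the install media's boot prompt; names are examples
install=http://resource-server/install/oes2sp2
autoyast=http://resource-server/autoyast/node2.xml
```

With PXE boot these lines live in the pxelinux `append` entry, so a bare node can be built without touching a keyboard.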
  • Cluster Build Methodology
    • Create a “template node”
      • NodeZ
      • Use it to
        • Create AutoYaST template using “Clone this system...”
        • Test the shared disk
        • Create the cluster
        • Create the SBD
        • Creating resources
      • This build will NOT be part of final production cluster
  • Cluster Build Methodology
    • Copy this XML for additional nodes
      • Modify
        • server name
        • IP address(es)
      • Use diffuse to compare XML files
    • Build the other nodes
      • Use the XML created above
      • If not 100% right, whack it, modify XML and start build again
      • Remember these are now commodity items
    • Whack NodeZ and rebuild to complete the system
  • Cluster Build Methodology
    • Implement NIC bonding
      • NIC driver independent
      • 7 different methods – some require switch support
      • Link state vs arp monitoring – blades often cannot lose local link!
      • Configurable with AutoYaST
    • Implement Multipath (MPIO)
      • Very simple to configure – mainly autodetect
      • Wide range of SAN support
      • Friendly LUN naming
      • Configuration file can be used (put in place with autoYaST)
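As a sketch of the "friendly LUN naming" point, a minimal /etc/multipath.conf fragment; the WWID and alias below are hypothetical:

```shell
# /etc/multipath.conf – illustrative values only
defaults {
    user_friendly_names yes
}
multipaths {
    multipath {
        wwid  360a980004334616f6b5a55572d625550
        alias cluster-lun1     # appears as /dev/mapper/cluster-lun1
    }
}
```

As the slide notes, AutoYaST can drop this file into place during the node build so every node names the LUNs identically.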
  • Creating Resources
  • File Sharing Resources
    • An NSS pool
      • Use iManager
      • Will end up as Primary for DST pair
    • Another NSS pool
      • Use NSSMU (just because we can)
      • Will end up as Shadow for DST pair
    • Combine them into one resource
      • Delete resource for shadow
      • Modify load script for primary
  • File Sharing Resources
    • POSIX filesystem based resource with NCP
      • Easier than Samba to access files
      • Can be used for iPrint, DHCP etc
      • Use evmsgui to create and format the volume
      • Create the resource in iManager
      • Script to create NCP virtual server
  • File Sharing Resources
    • Add resource monitoring
    • Add NFS access
      • LUM enablement of target users
      • NSS/POSIX rights
      • exportfs in load script rather than /etc/exports on nodes
      • Use fsid=x for NSS
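A sketch of the exportfs-in-load-script approach, assuming the exit_on_error/ignore_error wrappers normally used in NCS scripts; the volume path and fsid value are examples:

```shell
# Load script fragment – export the NSS volume as the resource comes up
exit_on_error exportfs -o rw,sync,no_root_squash,fsid=215 *:/media/nss/SHARED1

# Unload script fragment – withdraw the export before the volume migrates
ignore_error exportfs -u *:/media/nss/SHARED1
```

Because the export lives in the load script rather than in /etc/exports, it follows the resource to whichever node is currently active.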
  • NFS access (diagram): NFS clients FPC1–FPC5 (fpc.server.novell) mount the SHARED1 virtual server's volume; LUM gives the eDirectory users (iface, mis, oracle) the same UIDs on every node, so NSS rights follow the resource wherever it runs
  • iPrint
    • Create iPrint on NSS
    • Run iprint_nss_relocate on each node with volume in place
    • NB: only one iPrint resource may run on a node
    • Need to accept certificates in iManager for each node
  • iFolder
    • Create iFolder on POSIX
      • /mnt/cluster/ifolder
    • Run /opt/novell/ifolder3/bin/ifolder_cluster_setup on each node
      • Copy /etc/sysconfig/novell/ifldr3_2_sp2 to nodes first
    • NB: Only one iFolder resource may run on a node
  • DNS
    • DNS must be on NSS as NCP server required for eDirectory ™ integration
    • Check NCP:NCPServer objects
    • LUM user required for NSS rights
  • DHCP
    • Create DHCP on NSS
    • Leases file on NSS volume
    • Log file on NSS volume
      • Syslog-ng configuration
      • Logrotate configuration
      • Default AppArmor configuration will not allow logging to here!
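A minimal logrotate sketch for a DHCP log kept on the shared volume; the path is hypothetical, and (as the slide warns) AppArmor must separately be allowed to write there:

```shell
# /etc/logrotate.d/dhcpd-cluster – illustrative path on the NSS volume
/media/nss/DHCPVOL/log/dhcpd.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```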
  • GroupWise
    • Create PO on NSS
    • Set namespace in load script
      • /opt=ns=long
    • Disable atime/diratime on volume
      • Open nsscon
      • Run /noatime=volname
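Sketching the two steps above; the volume name GWVOL and volume ID are invented, and the exact ncpcon mount syntax should be checked against the OES documentation before use:

```shell
# Resource load script – mount the PO volume with the long namespace
exit_on_error ncpcon mount /opt=ns=long GWVOL=254

# From a shell on the active node – disable atime/diratime updates
nsscon
/noatime=GWVOL
```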
  • OCFS2 Shared Storage
    • Shared disk! Multi-mount, read/write with distributed lock management
    • /etc/ocfs2/cluster.conf automagically created by NCS
    • Fstab mounting uses /etc/init.d/ocfs2 service
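An illustrative fstab entry for such a mount (device and mount point are examples). The _netdev option keeps the filesystem out of the normal boot-time mount pass so the ocfs2 init script can mount it once the cluster stack is up:

```shell
# /etc/fstab – mounted via /etc/init.d/ocfs2
/dev/sdc1  /mnt/xen  ocfs2  _netdev  0 0
```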
  • Cluster Management
  • Cluster Management
    • iManager
      • The ONLY way to create/delete/edit resources
      • View event log
    • Cluster command
      • Same as NetWare ®
      • No cvb rebuild or device scan. These are not required on Linux as EVMS does it for you
    • Console One
  • The cluster Command
    • The usual suspects
      • cluster online/offline/migrate
      • cluster join/leave
      • cluster status/resources/view/info
    • More interesting
      • cluster stats display – check heartbeat/SBD ticks
      • cluster pools – check NSS pools and location
      • cluster set – modify heartbeat etc
      • cluster exec – potentially very dangerous
    • Lots of BCC commands
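As a quick reference, the commands above as they might be typed on a node; the resource and node names are hypothetical:

```shell
cluster status                       # state of all resources
cluster migrate POOL1_SERVER node2   # move a resource to another node
cluster stats display                # heartbeat/SBD tick counters
cluster pools                        # NSS pools and where they are active
cluster leave                        # take this node out of the cluster
cluster join                         # ...and bring it back in
```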
  • Troubleshooting
  • Useful Linux Tools
    • ip command – manage TCP/IP on Linux
      • ip address show/add/del
      • ip route show
    • ethtool – NIC settings
    • cat /proc/net/bonding/bondX
    • netstat – network communication status
      • Check which ports services are listening on
      • Check IP based connections to node
    • nmap – network/port scanner
    • multipath -ll
  • LUN Identification
    • Which LUN is which?
    • lsscsi – shows LUN ID numbers
    • ls -l /dev/disk/by-id
      • scsi-360a980004334616f6b5a55572d625550
      • Need to find ID on SAN. (Netapp uses ASCII!)
    • Multipathing will show the ID as the multipath name
      • Use friendly naming
  • Useful Tools
    • sbdutil – create/check/modify the SBD
      • sbdutil -f to find the SBD
      • sbdutil -v to view the current state of the SBD
    • /opt/novell/ncs/bin/ncs-configd.py
      • -init option to pull down load scripts, fix node names etc
    • cifsPool.py to fix CIFS attributes (TID #7005192)
    • OES2 NCS Master Reference TID, FAQ and Troubleshooting – TID #7001433
    • NSA – Novell ® Support Advisor
      • Many patterns for NCS
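Pulling the SBD and config tools above together as they might be run on a node:

```shell
sbdutil -f     # find (print) the device holding the SBD partition
sbdutil -v     # view the current state of the SBD
/opt/novell/ncs/bin/ncs-configd.py -init   # re-pull load scripts, fix node names
```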
  • File Locations
    • Cluster configuration file
      • /etc/opt/novell/ncs/clstrlib.conf
    • Load/Unload scripts
      • /var/run/ncs (run from here)
      • /var/opt/novell/ncs
    • Load script output logs
      • /var/opt/novell/log/ncs
    • System Log
      • /var/log/messages
  • File Locations
    • Admin filesystem – virtual filesystem for NCS management
      • /admin/Novell/Cluster
    • Proc filesystem – virtual filesystem for Linux/NCS management
      • /proc/ncs
    • Cluster event log
      • iManager
      • /admin/Novell/Cluster/EventLog.xml
  • /proc/ncs Magic
    • Enable serious debugging!
      • echo -n "TRACE ON" > /proc/ncs/vll
      • echo -n "TRACE SBD ON" > /proc/ncs/vll
      • echo -n "TRACE GIPC ON" > /proc/ncs/vll
      • echo -n "TRACE MCAST ON" > /proc/ncs/vll
      • echo -n "TRACE CVB ON" > /proc/ncs/cluster
      • Can be made permanent by editing /opt/novell/ncs/bin/ldncs
    • Find the SBD
      • cat /proc/ncs/sbdlib
  • AdminFS Magic
    • Two types of files in /admin/Novell/Cluster
      • *.xml – contain cluster/state information
      • *.cmd – “write then read” files for issuing cluster commands
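A hypothetical illustration of the write-then-read pattern; the actual file names and command strings under /admin/Novell/Cluster should be taken from the Novell documentation rather than from this sketch:

```shell
# Write a command string into a .cmd file, then read the same file
# back to collect the result – the file name below is invented
echo "offline" > /admin/Novell/Cluster/SomeResource.cmd
cat /admin/Novell/Cluster/SomeResource.cmd
```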
  • Known issues
    • EVMS issue with no NSS
      • Unpatched SP2 nodes without NSS cannot load the SBD kernel module
    • iFolder shutdown script
      • Doesn't shut down components if names have been changed during configuration
    • IP address problem
      • Unpatched nodes can allow duplicate IP addresses on network
    • Resources in NDS sync state
      • Check replica rings/referrals
      • Check case of cluster DN in clstrlib.conf
  • www.nds8.co.uk
  • Unpublished Work of Novell, Inc. All Rights Reserved. This work is an unpublished work and contains confidential, proprietary, and trade secret information of Novell, Inc. Access to this work is restricted to Novell employees who have a need to know to perform tasks within the scope of their assignments. No part of this work may be practiced, performed, copied, distributed, revised, modified, translated, abridged, condensed, expanded, collected, or adapted without the prior written consent of Novell, Inc. Any use or exploitation of this work without authorization could subject the perpetrator to criminal and civil liability.

    General Disclaimer: This document is not to be construed as a promise by any participating company to develop, deliver, or market a product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. Novell, Inc. makes no representations or warranties with respect to the contents of this document, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The development, release, and timing of features or functionality described for Novell products remains at the sole discretion of Novell. Further, Novell, Inc. reserves the right to revise this document and to make changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes. All Novell marks referenced in this presentation are trademarks or registered trademarks of Novell, Inc. in the United States and other countries. All third-party trademarks are the property of their respective owners.