HP-UX Dynamic Root Disk vs Solaris Live Upgrade vs AIX Multibos by Dusan Baljevic
 

Speaker Notes

  • My humble attempt to summarise the best-known features at the present time. Even after 23 years of Unix experience, I cannot claim I know everything!
  • Courtesy of HP Education training materials from the HE776 course.
  • * The following steps work for file systems other than the boot (/stand) file system:
    1. After creating the clone, mount it:
       # /opt/drd/bin/drd mount
    2. Choose the file system on the clone to expand. In this example it is /opt: the logical volume is /dev/drd00/lvol6, mounted at /var/opt/drd/mnts/sysimage_001/opt, and the vxfs file system is grown to 999 extents:
       # /usr/sbin/umount /dev/drd00/lvol6
       # /usr/sbin/lvextend -l 999 /dev/drd00/lvol6
       # /usr/sbin/extendfs -F vxfs /dev/drd00/rlvol6
       # /usr/sbin/mount /dev/drd00/lvol6 /var/opt/drd/mnts/sysimage_001/opt
    3. Run bdf to check that the /var/opt/drd/mnts/sysimage_001/opt file system now has the desired size.
    ** When drd runcmd finds the file systems in the clone already mounted, it does not unmount them (nor export the volume group) at the completion of the runcmd operation.
  • * Refer to ITRC document mmr_na-197095-3.
  • * To notify DRD that all logical volumes in the root group have been manually mirrored to disk /dev/dsk/c1t2d0 using LVM or VxVM commands.
    ** To notify DRD that all logical volumes in the root group have been manually un-mirrored using LVM or VxVM commands.
  • * This is the Known Issue published to docs.hp.com/en/DRD in June 2009.
  • Full example of lucompare(1M) on xlsansun.cxo.hp.com on 12 June 2009:
    Determining the configuration of BE2 ...
    < BE1
    > BE2
    Processing Global Zone
    Comparing / ...
    Links differ
    01 < /:root:root:33:16877:DIR:
    02 > /:root:root:30:16877:DIR:
    Sizes differ
    01 < /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76550144:
    02 > /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76922880:
    Sizes differ
    01 < /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1:root:bin:1:33261:REGFIL:6888:
    02 > /platform/sun4u-us3/lib/sparcv9/libc_psr.so.1:root:bin:1:33261:REGFIL:0:
    Sizes differ
    01 < /kernel/drv/fp.conf:root:sys:1:33188:REGFIL:2848:
    02 > /kernel/drv/fp.conf:root:sys:1:33188:REGFIL:2774:
    Sizes differ
    01 < /boot/solaris/filestat.ramdisk:root:root:1:33188:REGFIL:100648:
    02 > /boot/solaris/filestat.ramdisk:root:root:1:33188:REGFIL:101144:
    02 > /BE2 does not exist
    02 > /etc/svc/repository-boot-20090302_135833 does not exist
    ...
    Symbolic links are to different files
    01 < /etc/svc/repository-boot:root:root:1:41471:SYMLINK:31:
    02 > /etc/svc/repository-boot:root:root:1:41471:SYMLINK:31:
    Sizes differ
    01 < /etc/svc/repository.db:root:sys:1:33152:REGFIL:3778560:
    02 > /etc/svc/repository.db:root:sys:1:33152:REGFIL:3775488:
    Sizes differ
    01 < /etc/zfs/zpool.cache:root:root:1:33188:REGFIL:1648:
    02 > /etc/zfs/zpool.cache:root:root:1:33188:REGFIL:3648:
    Sizes differ
    01 < /etc/path_to_inst:root:root:1:33060:REGFIL:7774:
    02 > /etc/path_to_inst:root:root:1:33060:REGFIL:6447:
    Checksums differ
    01 < /etc/logadm.conf:root:sys:1:33188:REGFIL:1485:3674435182:
    02 > /etc/logadm.conf:root:sys:1:33188:REGFIL:1485:114050809:
    Checksums differ
    01 < /etc/lu/.BE_CONFIG:root:root:1:33060:REGFIL:89:1143091087:
    02 > /etc/lu/.BE_CONFIG:root:root:1:33060:REGFIL:89:2658615630:
    Checksums differ
    01 < /etc/shadow:root:sys:1:33024:REGFIL:384:161700006:
    02 > /etc/shadow:root:sys:1:33024:REGFIL:384:1617970827:
    Sizes differ
    01 < /etc/devices/devid_cache:root:root:1:33060:REGFIL:900:
    02 > /etc/devices/devid_cache:root:root:1:33060:REGFIL:5108:
    Sizes differ
    01 < /etc/path_to_inst.old:root:root:1:33060:REGFIL:7714:
    02 > /etc/path_to_inst.old:root:root:1:33060:REGFIL:6179:
    02 > /opt/Hewlett-Packard/RSMHostSW/_jvm/javaws/javaws.jar does not exist
    02 > /opt/Hewlett-Packard/RSMHostSW/RSMHA/hprsmha does not exist
    02 > /opt/Hewlett-Packard/RSMHostSW/RSMHA/logs/RSMHA-090612-1.log does not exist
    ...
    01 < /devices/pseudo/cvcredir@0:cvcredir does not exist
    01 < /devices/pci@8,700000/SUNW,qlc@3,1/fp@0,0/ssd@w50001fe15004134b,2 does not exist
    01 < /devices/scsi_vhci/ssd@g20000004cfdf8179 does not exist
    ...
    01 < /dev/dsk/c7t2000002037E35629d0s4 does not exist
    01 < /dev/dsk/c1t50001FE15004134Ed8s0 does not exist
    ...
    Compare complete for /.
    Comparing /var ...
    Sizes differ
    01 < /var/cacao/instances/default/audits/audit-cacao.0:root:sys:1:33188:REGFIL:282558:
    02 > /var/cacao/instances/default/audits/audit-cacao.0:root:sys:1:33188:REGFIL:137571:
    Links differ
    01 < /var/sadm/pkg:root:sys:1199:16749:DIR:
    02 > /var/sadm/pkg:root:sys:1198:16749:DIR:
    Sizes differ
    01 < /var/log/syslog:root:sys:1:33188:REGFIL:1025:
    02 > /var/log/syslog:root:sys:1:33188:REGFIL:0:
    02 > /var/log/syslog.3 does not exist
    ...
    Checksums differ
    01 < /var/statmon/state:daemon:daemon:1:33188:REGFIL:10:3815787447:
    02 > /var/statmon/state:daemon:daemon:1:33188:REGFIL:10:1588317337:
    01 < /var/dt/appconfig/appmanager/root-xlsansun-0 does not exist
    ...
    Compare complete for /var.
  • * The -c option assigns the specified name to the current boot environment. The -m option specifies that the root (/) file system is to be copied to /dev/dsk/c0d0s3 (/altroot). The -n option specifies the name of the new Live Upgrade boot environment.
    ** Detaches a concatenation (containing c0t0d0s0) from one mirror (d10) and attaches it to another (d20), preserving its contents.
    *** Creates the mirror d10 and establishes it as the receptacle for the root file system. Attaches c0t0d0s0 and c0t1d0s0 to the single-slice concatenations d1 and d2, respectively (specifying these volumes is optional). Attaches the concatenations associated with c0t0d0s0 and c0t1d0s0 to mirror d10. Copies the current BE's root file system to mirror d10, overwriting any existing d10 contents.
  • * Indirect method:
    # lvrmboot -s drd00
    # lvremove -f /dev/drd00/lvol2
    # lvrmboot -d lvol3 /dev/drd00
    # lvremove -f /dev/drd00/lvol3
    # lvrmboot -r drd00
    # lvremove -f /dev/drd00/lvol4
    # vgremove drd00
    ** Only a full copy of data from the primary BE is possible.
  • * Full listing of the file systems after opening a shell into the newly created alternate BOS image to explore it. Note: all file systems (/, /usr, /var, /opt, and /home) are available, and even SMIT can be used; /proc is private; the /tmp file system is shared by default:
    Filesystem        512-blocks     Free  %Used  Iused  %Iused  Mounted on
    /dev/hd4             1966080  1198800    40%   3364      1%  /
    /dev/hd2             3670016   299344    92%  42697     10%  /usr
    /dev/hd9var           655360   594456    10%    674      1%  /var
    /dev/hd3              262144   250776     5%     64      1%  /tmp
    /dev/hd1             1966080  1198800    40%   3364      1%  /home
    /proc                1966080  1198800    40%   3364      1%  /proc
    /dev/hd10opt          393216   123592    69%   2545      6%  /opt
    /dev/bos_hd4         1966080  1198800    40%   3364      1%  /bos_inst
    /dev/bos_hd2         3670016   299344    92%  42697     10%  /bos_inst/usr
    /dev/bos_hd9var       655360   594456    10%    674      1%  /bos_inst/var
    /dev/bos_hd10opt      393216   123592    69%   2545      6%  /bos_inst/opt
    /usr/lib             3670016   299384    92%  42701     10%  /bos_inst/usr/lib/multibos_chroot/usr/lib
    /usr/ccs/lib         3670016   299384    92%  42701     10%  /bos_inst/usr/lib/multibos_chroot/usr/ccs/lib
    /tmp                  262144   250776     5%     64      1%  /bos_inst/tmp
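    The listing above was taken from inside such a shell. A minimal sketch, assuming (per the multibos documentation) that the -S flag starts an interactive shell in the standby BOS and -X enables auto-expansion:
      # multibos -XS    (open a shell chrooted into the standby BOS)
      # df              (run inside the shell; produces a listing like the one above)
      # exit            (leave the standby BOS shell)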
  • * Perform multibos operations on several servers at once by combining the multibos and dsh commands, as sketched below.
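    A minimal sketch, assuming dsh is configured for the target nodes and that your dsh variant accepts a node list via -n; the hostnames aixnode1 and aixnode2 are hypothetical:
      # dsh -n aixnode1,aixnode2 "/usr/sbin/multibos -Xs"    (create a standby BOS on both servers in parallel)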
  • * The latest IBM CPU is POWER6.
  • * For example, name /usr explicitly and it will be split from the root (/) file system:
    # lucreate -n disk1 \
        -m /:/dev/dsk/c0t8d0s0:ufs \
        -m -:/dev/dsk/c0t8d0s1:swap \
        -m /usr:/dev/dsk/c0t8d0s3:ufs
    ** The multibos "-X" auto-expansion flag allows automatic file system expansion if space is needed to perform multibos-related tasks. One should execute all multibos operations with this flag.
  • * The customization operation requires an image source (the -l device or directory flag) and at least one installation option (installation by bundle, installation by fix, or update_all). The customization operation performs the following steps:
    1. The standby BOS file systems are mounted, if not already mounted.
    2. If you specify an installation bundle with the -b flag, the bundle is installed using the geninstall utility. The installation bundle syntax should follow geninstall conventions. If you specify the -p preview flag, geninstall performs a preview operation.
    3. If you specify a fix list with the -f flag, the fix list is installed using the instfix utility. The fix list syntax should follow instfix conventions. If you specify the -p preview flag, instfix performs a preview operation.
    4. If you specify the update_all function with the -a flag, it is performed using the install_all_updates utility. If you specify the -p preview flag, install_all_updates performs a preview operation.
    **
    # lsvg -l rootvg | grep bos
    bos_hd5      boot  1   1   1  closed/syncd  N/A
    bos_hd4      jfs   4   4   1  closed/syncd  /bos_inst
    bos_hd2      jfs   48  48  1  closed/syncd  /bos_inst/usr
    bos_hd9var   jfs   21  21  1  closed/syncd  /bos_inst/var
    bos_hd10opt  jfs   4   4   1  closed/syncd  /bos_inst/opt
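    For example, a hedged sketch of one customization run, assuming -c selects the customization operation and that update images live in a hypothetical /updates directory:
      # multibos -Xc -a -l /updates -p    (preview an update_all of the standby BOS)
      # multibos -Xc -a -l /updates       (apply the update_all via install_all_updates)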
  • * Solstice DiskSuite is the older name of what is now called Solaris Volume Manager (SVM).
    ** ZFS is a "marriage" between a file system and a volume manager.
    *** Because a non-global zone can be controlled by a non-global zone administrator as well as by the global zone administrator, Sun recommends halting all zones during lucreate or lumount operations, as sketched below. In other words, cloning Solaris zones is not truly an online process.
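    A sketch of the recommended sequence; the zone name appzone and the target slice are hypothetical:
      # zoneadm list -cv                             (list configured zones)
      # zoneadm -z appzone halt                      (halt each running non-global zone)
      # lucreate -n BE2 -m /:/dev/dsk/c0t1d0s0:ufs   (clone while the zones are quiescent)
      # zoneadm -z appzone boot                      (restart the zone afterwards)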
  • * The DRD option "-x ignore_unmounted_fs=true" can be used to exclude files in unmounted file systems, but that is only a workaround.
    ** Live Upgrade offers the options "-f exclude_list_file", "-x exclude", and "-z filter_list_file".
    *** LVM can be used to remove a DRD clone, but it is a more complex process:
    # lvrmboot -s drd00
    Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
    # lvremove -f /dev/drd00/lvol2
    Logical volume "/dev/drd00/lvol2" has been successfully removed.
    Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
    # lvrmboot -d lvol3 /dev/drd00
    Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
    # lvremove -f /dev/drd00/lvol3
    Logical volume "/dev/drd00/lvol3" has been successfully removed.
    Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
    # lvrmboot -r drd00
    Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
    # lvremove -f /dev/drd00/lvol4
    Logical volume "/dev/drd00/lvol4" has been successfully removed.
    Volume Group configuration for /dev/drd00 has been saved in /etc/lvmconf/drd00.conf
    # vgremove drd00
    Volume group "drd00" has been successfully removed.
    **** To leave the Live Upgrade BE empty:
    # lucreate -s -
    ***** To set up a standby BOS with the optional image.data file /tmp/image.data and exclude list /tmp/exclude.list, enter:
    # multibos -Xs -i /tmp/image.data -e /tmp/exclude.list
    To set up a standby BOS and install the additional software listed in the bundle file /tmp/bundle, located in the image source /images, enter:
    # multibos -Xs -b /tmp/bundle -l /images
    ****** To remove the standby BOS, enter:
    # multibos -RX

Presentation Transcript

  • HP-UX Dynamic Root Disk, Solaris Live Upgrade and AIX Multibos. Dusan Baljevic, Sydney, Australia, 2009.
  • Cloning in Major Unix and Linux Releases:
    AIX: Alternate Root and Multibos (AIX 5.3 and above)
    HP-UX: Dynamic Root Disk (DRD)
    Linux: Mondo Rescue, Clonezilla
    Solaris: Live Upgrade
  • HP-UX Dynamic Root Disk Features
    • Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk.
    • Supported on HP PA-RISC and Itanium-based systems.
    • Supported on hard partitions (nPars), virtual partitions (vPars), and Integrity Virtual Machines (Integrity VMs), running the following operating systems with roots managed by the following volume managers (except as specifically noted for rehosting):
      o HP-UX 11i Version 2 (11.23), September 2004 or later
      o HP-UX 11i Version 3 (11.31)
      o LVM (all O/S releases supported by DRD)
      o VxVM 4.1
      o VxVM 5.0
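    A minimal end-to-end sketch of the clone-patch-activate cycle, assuming a spare disk /dev/disk/disk8, a depot /var/depots/patches.dir, and a patch PHCO_XXXXX (all three names are hypothetical):
      # drd clone -t /dev/disk/disk8 -x overwrite=true               (clone the booted root group)
      # drd runcmd swinstall -s /var/depots/patches.dir PHCO_XXXXX   (patch the inactive clone)
      # drd activate                                                 (make the clone the next boot disk)
      # shutdown -r -y 0                                             (boot the patched image in a change window)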
  • HP-UX DRD Benefit: Minimizing Planned Downtime
    Without DRD: software management may require extended downtime.
    With DRD: install or remove software and patches on the clone while applications continue running. The original vg00 (with its boot mirror) remains active while the cloned vg00 (with its clone mirror) holds the inactive, patched image.
    Activate the clone to make the changes take effect: the original vg00 becomes inactive and the cloned, patched vg00 becomes active.
  • HP-UX Dynamic Root Disk Features continued
    • Product: DynRootDisk, version A.3.3.1.221 (B.11.xx.A.3.4.x will be the current version number as of September 2009).
    • The target disk must be a single physical disk or SAN LUN.
    • The target disk must be large enough to hold all of the root volume file systems. DRD allows cloning of the root volume group even if the master O/S is spread across multiple disks (it is a one-way, many-to-one operation).
    • On Itanium servers, all partitions are created; the EFI and HP-UX partitions are copied. This release of DRD does not copy the HPSP partition.
    • The copy of lvmtab on the cloned image is modified by the clone operation to reflect the desired volume groups when the clone is booted.
  • HP-UX Dynamic Root Disk Features continued
    • Only the contents of vg00 are copied.
    • Due to the system calls DRD depends on, DRD expects legacy Device Special Files (DSFs) to be present and the legacy naming model to be enabled on HP-UX 11i v3 servers. HP recommends that only a partial migration to persistent DSFs be performed.
    • If the disk is currently in use by another volume group that is visible on the system, the disk will not be used.
    • If the disk contains LVM, VxVM, or boot records but is not in use, one must use the "-x overwrite" option to tell DRD to overwrite the disk. Already-created clones will contain boot records; the drd status command will show the disk that is currently in use as an inactive system image.
  • HP-UX Dynamic Root Disk Features continued
    • All DRD processes, including "drd clone" and "drd runcmd", can be safely interrupted by issuing Control-C (SIGINT) from the controlling terminal or by issuing kill -HUP <pid> (SIGHUP), as sketched below. This causes DRD to abort processing. Do not interrupt DRD with kill -9 <pid> (SIGKILL), which fails to abort safely and does not perform cleanup. Refer to the "Known Issues" list on the DRD web page (http://www.hp.com/go/DRD) for cleanup instructions after drd runcmd is interrupted.
    • The Ignite server will only be aware of the clone if it is mounted during a make_*_recovery operation.
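    For example, to abort a long-running clone from another session (the PID shown is hypothetical):
      # ps -ef | grep "drd clone"    (find the DRD process ID)
      # kill -HUP 4321               (SIGHUP lets DRD abort and clean up; never use kill -9)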
  • HP-UX Dynamic Root Disk Features continued
    • DRD does not provide a mechanism for resizing file systems during a clone operation. After the clone is created, one can manually change file system sizes on the inactive system image without an immediate reboot:
      1. The whitepaper "Dynamic Root Disk: Quick Start & Best Practices" describes resizing file systems other than /stand. *
      2. The same whitepaper describes resizing the boot (/stand) file system on an inactive system image.
    • One can avoid multiple mounts and unmounts by using "drd mount" to mount the inactive system image before the first runcmd operation and "drd umount" to unmount it after the last runcmd operation, as sketched below. **
    • Supports root volume groups with any name (prior to version A.3.0, only vg00 was possible).
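    A sketch of grouping several runcmd operations between a single mount/umount pair; the depot and patch names reuse the serial patch installation example later in this deck:
      # drd mount
      # drd runcmd swinstall -s /var/opt/mx/depot11/PHCO_38159.dir PHCO_38159
      # drd runcmd swverify PHCO_38159
      # drd umount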
  • HP-UX Dynamic Root Disk Commands
    • The basic DRD commands are:
      drd clone
      drd runcmd
      drd activate
      drd deactivate
      drd mount
      drd umount
      drd status
      drd rehost
      drd unrehost
  • HP-UX Dynamic Root Disk Commands continued
    • "drd runcmd" can run specific Software Distributor (SD) commands on the inactive system image only: swinstall, swremove, swlist, swmodify, swverify, swjob.
    • Three other commands can be executed by drd runcmd:
      view       used to view logs produced by commands that were executed by drd runcmd
      kctune     used to modify kernel parameters
      update-ux  performs v3-to-v3 OE updates
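    For instance (the tunable value is illustrative only):
      # drd runcmd kctune maxuprc=2048             (change a kernel tunable on the inactive image)
      # drd runcmd swlist -l product               (list products installed on the inactive image)
      # drd runcmd view /var/adm/sw/swagent.log    (view the resulting SD agent log)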
  • HP-UX Dynamic Root Disk Features – Dry Run
    • A simple mechanism for determining whether a chosen target disk is sufficiently large is to run a preview:
      # drd clone -p -v -t <blockDSF>
      blockDSF is of the form:
      * HP-UX 11i v2: /dev/dsk/cXtXdX
      * HP-UX 11i v3: /dev/disk/diskX
    • The preview operation includes the disk space analysis needed to see whether the target disk is sufficiently large.
  • HP-UX Dynamic Root Disk versus Ignite-UX
    • DRD has several advantages over Ignite-UX net and tape images:
      * No tape drive is needed.
      * No impact on network performance.
      * No security issues of transferring data across the network.
    • MirrorDisk/UX keeps an "always up-to-date" image of the booted system, whereas DRD provides a "point-in-time" image. The booted system and the clone may then diverge due to changes to either one. Keeping the clone unchanged is the recovery scenario. DRD is not available for HP-UX 11.11, which limits options on those systems.
  • HP-UX Dynamic Root Disk Features continued
    Dynamic Root Disk (DRD) provides the ability to clone an HP-UX system image to an inactive disk, and then:
    * Perform system maintenance on the clone while the HP-UX 11i system is online.
    * Reboot during off-hours, significantly reducing system downtime.
    * Utilize the clone for system recovery, if needed.
    * Rehost the clone on another system for testing or provisioning purposes (on VMs or blades utilizing Virtual Connect with HP-UX 11i v3 LVM only; on VMs with HP-UX 11i v2 LVM only).
    * Perform an OE update on the clone from an older version of HP-UX 11i v3 to HP-UX 11i v3 update 4 or later.
  • HP-UX – Dynamic Root Disk and /stand/bootconf
    • Errors in /stand/bootconf can make the drd deactivate command fail. * (This is no longer true in the current release.)
    The /stand/bootconf file on the booted system should contain device files for just the booted disk and any of its mirrors, not the clone target. The /stand/bootconf file created on the clone target WILL contain the device file of the target itself (or, on an IPF system, the device file of the HP-UX partition of the target).
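    To review the file, simply print it; a typical LVM entry is an "l" flag plus one device file per line (the device names below are illustrative):
      # cat /stand/bootconf
      l /dev/disk/disk7
      l /dev/disk/disk9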
  • HP-UX – Dynamic Root Disk – Rehosting
    • The initial implementation of drd rehost only supports rehosting of an LVM-managed root volume group on an Integrity virtual machine to another Integrity virtual machine, or of an LVM-managed root volume group on a blade with Virtual Connect I/O to another such blade.
    • The rehost command does not enforce the restriction to blades and VMs, but other use of this command is not officially supported.
    • As of version A.3.3, rehosting support for HP-UX 11i v2 has been added.
  • HP-UX – Dynamic Root Disk – Rehosting on HP-UX 11.31
    • After the clone and the system information file have been created, the "drd rehost" command can be used to check the syntax of the system information file and copy it to /EFI/HPUX/SYSINFO.TXT, in preparation for processing by auto_parms(1M) during the boot of the image. The following example uses the /var/opt/drd/tmp/newhost.txt system information file:
      SYSINFO_HOSTNAME=myhost
      SYSINFO_MAC_ADDRESS[0]=0x0017A451E718
      SYSINFO_DHCP_ENABLE[0]=0
      SYSINFO_IP_ADDRESS[0]=192.2.3.4
      SYSINFO_SUBNET_MASK[0]=255.255.255.0
      SYSINFO_ROUTE_GATEWAY[0]=192.2.3.75
      SYSINFO_ROUTE_DESTINATION[0]=default
      SYSINFO_ROUTE_COUNT[0]=1
  • HP-UX – Dynamic Root Disk – Rehosting on HP-UX 11.31 continued
    • To check the syntax of the system information file without copying it to /EFI/HPUX/SYSINFO.TXT, use the preview option of the drd rehost command:
      # drd rehost -p -f /var/opt/drd/tmp/newhost.txt
    • To copy it to the /EFI/HPUX/SYSINFO.TXT file, use:
      # drd rehost -f /var/opt/drd/tmp/newhost.txt
  • HP-UX – Dynamic Root Disk Examples
    # drd clone -t /dev/disk/disk8 -x overwrite=true
    ======= 07/02/08 13:09:41 EST BEGIN Clone System Image (user=root) (jobid=syd59)
    * Reading Current System Information
    * Selecting System Image To Clone
    * Selecting Target Disk
    * Selecting Volume Manager For New System Image
    * Analyzing For System Image Cloning
    * Creating New File Systems
    * Copying File Systems To New System Image
    * Making New System Image Bootable
    * Unmounting New System Image Clone
    ======= 07/02/08 13:42:57 EST END Clone System Image succeeded. (user=root) (jobid=syd59)
  • HP-UX – Dynamic Root Disk Examples continued
    # drd status
    ======= 07/02/08 13:45:42 EST BEGIN Displaying DRD Clone Image Information (user=root) (jobid=syd59)
    * Clone Disk: /dev/disk/disk8
    * Clone EFI Partition: Boot loader and AUTO file present
    * Clone Creation Date: 07/02/08 13:09:46 EST
    * Clone Mirror Disk: None
    * Mirror EFI Partition: None
    * Original Disk: /dev/disk/disk7
    * Original EFI Partition: Boot loader and AUTO file present
    * Booted Disk: Original Disk (/dev/disk/disk7)
    * Activated Disk: Original Disk (/dev/disk/disk7)
    ======= 07/02/08 13:45:51 EST END Displaying DRD Clone Image Information succeeded. (user=root) (jobid=syd59)
  • HP-UX – Dynamic Root Disk Examples continued
    # drd activate
    ======= 07/02/08 13:48:03 EST BEGIN Activate Inactive System Image (user=root) (jobid=syd59)
    * Checking for Valid Inactive System Image
    * Reading Current System Information
    * Locating Inactive System Image
    * Determining Bootpath Status
    * Primary bootpath : 0/1/1/0.0x1.0x0 before activate.
    * Primary bootpath : 0/1/1/1.0x2.0x0 after activate.
    * Alternate bootpath : 0/1/1/1.0x2.0x0 before activate.
    * Alternate bootpath : 0/1/1/1.0x2.0x0 after activate.
    * HA Alternate bootpath : 0/1/1/0.0x1.0x0 before activate.
    * HA Alternate bootpath : 0/1/1/0.0x1.0x0 after activate.
    * Activating Inactive System Image
    ======= 07/02/08 13:48:15 EST END Activate Inactive System Image succeeded. (user=root) (jobid=syd59)
  • HP-UX – Dynamic Root Disk Examples continued
    # drd_register_mirror /dev/dsk/c1t2d0 *
    # drd_unregister_mirror /dev/dsk/c2t3d0 **
    # drd runcmd view /var/adm/sw/swagent.log
    # diff /var/spool/crontab/crontab.root /var/opt/drd/mnts/sysimage_001/var/spool/crontab/crontab.root
  • HP-UX – Dynamic Root Disk Examples continued
    # /opt/drd/bin/drd mount
    # /usr/bin/bdf
    Filesystem          kbytes     used    avail  %used  Mounted on
    /dev/vg00/lvol3    1048576   320456   722432    31%  /
    /dev/vg00/lvol1     505392    43560   411288    10%  /stand
    /dev/vg00/lvol8    3395584   797064  2580088    24%  /var
    /dev/vg00/lvol7    4636672  1990752  2625264    43%  /usr
    /dev/vg00/lvol4     204800     8656   194680     4%  /tmp
    /dev/vg00/lvol6    3067904  1961048  1098264    64%  /opt
    /dev/vg00/lvol5     262144     9320   250912     4%  /home
    /dev/drd00/lvol3   1048576   320504   722392    31%  /var/opt/drd/mnts/sysimage_001
    /dev/drd00/lvol1    505392    43560   411288    10%  /var/opt/drd/mnts/sysimage_001/stand
    /dev/drd00/lvol4    204800     8592   194680     4%  /var/opt/drd/mnts/sysimage_001/tmp
    /dev/drd00/lvol5    262144     9320   250912     4%  /var/opt/drd/mnts/sysimage_001/home
    /dev/drd00/lvol6   3067904  1962912  1096416    64%  /var/opt/drd/mnts/sysimage_001/opt
    /dev/drd00/lvol7   4636672  1991336  2624680    43%  /var/opt/drd/mnts/sysimage_001/usr
    /dev/drd00/lvol8   3395584   788256  2586968    23%  /var/opt/drd/mnts/sysimage_001/var
  • HP-UX – Dynamic Root Disk – Serial Patch Installation Example
    # swcopy -s /tmp/PHCO_38159.depot * @ /var/opt/mx/depot11/PHCO_38159.dir
    # drd runcmd swinstall -s /var/opt/mx/depot11/PHCO_38159.dir PHCO_38159
  • HP-UX – Dynamic Root Disk update-ux Issue
    * When executing "drd runcmd update-ux" on the inactive DRD system image, the command errors with:
      ERROR: The expected depot does not exist at "<depot_name>"
    In order to use a directory depot on the active system image, you need to create a loopback mount to access the depot.
  • HP-UX – Dynamic Root Disk update-ux Issue continued
    Issue resolution: the following steps update the clone from a directory depot that resides on the active system image. The steps must be executed as root, in this order:
    1) Mount the clone using "drd mount".
    2) Make the directory on the clone and loopback-mount the depot. The directory on the clone and the source depot must have the same name, in this case "/var/depots/0909_DCOE" (the name itself can be whatever you choose):
      # mkdir -p /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
      # mount -F lofs /var/depots/0909_DCOE /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
      # drd runcmd update-ux -s /var/depots/0909_DCOE
  • HP-UX – Dynamic Root Disk update-ux Issue continued
    3) Once the update has completed, unmount the loopback mount and then unmount the clone:
      # umount /var/opt/drd/mnts/sysimage_001/var/depots/0909_DCOE
      # drd umount
    Updates from multiple-DVD media: updates directly from media are not supported for DRD updates. In order to update from media, you must copy the contents to a directory depot, either on a remote server (the easiest method) or in a directory on the active system. If the depot must be on the active system image, first copy the media contents to a directory depot and then create the clone. If you already have a clone, copy the depot and then loopback-mount it to the clone (see the instructions above).
  • HP-UX – Dynamic Root Disk update-ux Issue continued
    To copy the software from the DVDs, make a directory on a remote system or on the active system image, mount the DVD media, and swcopy its contents into the newly created directory. Unmount the first disc, then insert the second DVD and copy its contents into the same directory:
      # mkdir -p /var/software_depot/DCOE-DVD
      # mount /dev/disk/diskX /cdrom
      # swcopy -s /cdrom -x enforce_dependencies=false * @ /var/software_depot/DCOE-DVD
      # umount /cdrom
      # mount /dev/disk/diskX /cdrom    (this is DVD 2)
      # swcopy -s /cdrom -x enforce_dependencies=false * @ /var/software_depot/DCOE-DVD
  • HP-UX – Dynamic Root Disk update-ux Issue continued
    If the depot resides on a remote server (a system other than the one to be updated), proceed with the "drd runcmd update-ux" command and specify the location as the argument of the "-s" parameter:
      # drd runcmd update-ux -s <server_name>:/var/software_depot/DCOE-DVD <OE>
    If the depot resides in the root group of the system to be cloned and the clone has not yet been created, create the clone and issue the "drd runcmd update-ux" command, specifying the location of the depot as it appears on the booted system:
      # drd runcmd update-ux -s /var/software_depot/DCOE-DVD <OE>
    If the depot resides on the system to be updated in a location other than the root group, or if the clone has already been created, use the loopback mount described above.
  • Solaris Live Upgrade Features
    • Live Upgrade is a feature of Solaris (since version 2.6) that allows the operating system to be cloned to an offline partition (or partitions), which can then be upgraded with new O/S patches, software, or even a new version of the operating system. The system administrator can then reboot the system from the newly upgraded partition. In case of problems, it is easy to revert to the original partition/version via a single Live Upgrade command followed by a reboot.
    • Live Upgrade is especially useful because Sun does not officially support installing O/S patches on active partitions; the supported approaches are patching in single-user mode or patching a non-active Live Upgrade partition. A minimal upgrade cycle is sketched below.
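    A minimal sketch of one upgrade cycle, assuming a spare slice /dev/dsk/c0t1d0s0 and a network-mounted image path (both hypothetical):
      # lucreate -c BE1 -m /:/dev/dsk/c0t1d0s0:ufs -n BE2         (clone the running BE)
      # luupgrade -u -n BE2 -s /net/Solaris_10/path/to/os_image   (upgrade the inactive BE)
      # luactivate BE2                                            (mark BE2 to boot next)
      # init 6                                                    (use init/shutdown, not reboot, so activation completes)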
  • Solaris Live Upgrade Features continued
    • Live Upgrade requires multiple partitions, either on the boot drive (one set of partitions is "active" and the other is "inactive") or on separate drives. These sets of partitions are "boot environments" (BEs).
    • A slice to which the root (/) file system will be copied must be selected; a quick check of candidate slices is sketched below. The slice must comply with the following:
      * Must be a slice from which the system can boot.
      * Must meet the recommended minimum size.
      * Cannot be a Veritas VxVM volume or a Solstice DiskSuite metadevice.
      * Can be on a different physical disk or on the same disk as the active root file system.
      * For sun4c and sun4m, the root file system must be less than 2 GB.
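    For example, to confirm that a candidate slice is large enough before running lucreate (the device name is hypothetical):
      # prtvtoc /dev/rdsk/c0t1d0s2    (print the disk's slice table and slice sizes in sectors)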
• Solaris Live Upgrade Features continued
• The swap slice cannot be in use by any boot environment except the current boot environment or, if the "-s" option is used, the source boot environment. The boot environment creation fails if the swap slice is being used by any other boot environment, whether the slice contains a swap, UFS, or any other file system.
• Typically, each boot environment requires a minimum of 350 to 800 MB of disk space, depending on the system software configuration.
• When viewing the character interface remotely, such as over a tip line, set the TERM environment variable to VT220. Also, when using the Common Desktop Environment, set the value of the TERM variable to dtterm rather than xterm.
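For example, before starting the character interface over a tip line (a sketch for a Bourne-style shell):
# TERM=VT220; export TERM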
• Solaris Live Upgrade Features continued
• The lucreate command allows you to include or exclude specific files and directories when creating a new BE.
• Include files and directories with:
  -y include option
  -Y include_list_file option
  items with a leading + in the file used with the -z filter_list option
• Exclude files and directories with:
  -x exclude option
  -f exclude_list_file option
  items with a leading - in the file used with the -z filter_list option
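A hedged sketch combining two of these flags; the BE name, target slice, and paths are illustrative assumptions:
# lucreate -n solenv2 -m /:/dev/dsk/c0d0s3:ufs -x /export/scratch -y /export/scratch/keep
Here /export/scratch is excluded from the copy, while the item named with -y is pulled back in despite the exclusion.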
• Solaris Live Upgrade and Special Files
• Files can change in the original boot environment (BE) after the BE is created but NOT YET activated.
• On the first boot of a BE, data is copied from the source BE.
• The list of files to copy is in /etc/lu/synclist. Example:
  /etc/default/passwd    OVERWRITE
  /etc/dfs               OVERWRITE
  /var/log/syslog        APPEND
  /var/adm/messages      APPEND
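To have an additional file carried forward on the first boot, an entry can be appended to the list; a sketch, with /etc/resolv.conf as an assumed example file:
# echo '/etc/resolv.conf    OVERWRITE' >> /etc/lu/synclist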
• Solaris Live Upgrade Examples
• The upgrade of the new BE can be done in several ways (local, net, CD-ROM, flash). All four are invoked the same way; only the path to the image, given with the -s flag, differs. Examples:
Local file:
# luupgrade -u -n solenv2 -s /Solaris_10/path/to/os_image
Net:
# luupgrade -u -n solenv2 -s /net/Solaris_10/path/to/os_image
CD-ROM:
# luupgrade -u -n solenv2 -s /cdrom/Solaris_10/path/to/os_image
Flash:
# luupgrade -u -n solenv2 -s /path/to/flash.flar
• Solaris Live Upgrade Examples
# lucompare BE2
Determining the configuration of BE2 ...
< BE1
> BE2
Processing Global Zone
Comparing / ...
Links differ
01 < /:root:root:33:16877:DIR:
02 > /:root:root:30:16877:DIR:
Sizes differ
01 < /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76550144:
02 > /platform/sun4u/boot_archive:root:root:1:33188:REGFIL:76922880:
...
• Solaris Live Upgrade Examples
# lucreate -c "solenv1" -m /:/dev/dsk/c0d0s3:ufs -n "solenv2" *
# lucreate -m /:/dev/md/dsk/d20:ufs,mirror -m /:/dev/dsk/c0t0d0s0:detach,attach,preserve -n nextBE **
# lucreate -m /:/dev/md/dsk/d10:ufs,mirror -m /:/dev/dsk/c0t0d0s0,d1:attach -m /:/dev/dsk/c0t1d0s0,d2:attach -n myserv2 ***
• Solaris Live Upgrade Examples
# lucurr
BE1
# ludesc -n BE1 "Dusan BootEnvironment"
# ludesc -n BE1
Dusan BootEnvironment
• Solaris Live Upgrade Examples
# lufslist BE1
boot environment name: BE1
This boot environment is currently active.
This boot environment will be active on next system boot.

Filesystem                    fstype  device size  Mounted on    Mount Options
----------------------------- ------ ------------ ------------- -------------
/dev/zvol/dsk/rpool/swap      swap     1073741824 -             -
rpool/ROOT/s10s_u6wos_07b     zfs      5119809024 /             -
rpool/ROOT/s10s_u6wos_07b/var zfs        86450688 /var          -
rpool                         zfs      7493079552 /rpool        -
rpool/export                  zfs        95149568 /export       -
rpool/export/home             zfs        95129088 /export/home  -
hppool                        zfs               ? /hppool       -
• Clone Commands Compared

Task                                HP-UX DRD                              Solaris Live Upgrade
Create BE                           drd clone                              lucreate
Activate BE                         drd activate                           luactivate
Check status                        drd status                             lustatus
Compare BEs                         Indirect method: diff, cmp             lucompare
Cancel scheduled copy/create        Indirect method: remove from crontab   lucancel
Display BE/system image             drd status                             lucurr
Delete BE                           N/A *                                  ludelete
Add or resync data in BE            N/A **                                 lumake
Set or display BE description       N/A                                    ludesc
Mount BE file systems               drd mount                              lumount
Unmount BE file systems             drd umount                             luumount
Rename BE                           N/A                                    lurename
Install software and patches in BE  drd runcmd swinstall,                  luupgrade
                                    drd runcmd update-ux
List BE file systems                N/A                                    lufslist
TUI configuration                   N/A                                    lu
Rehosting                           drd rehost                             N/A
Modify kernel tunables              drd runcmd kctune                      N/A
• AIX Alt_disk_install
• The AIX alt_disk_install command allows a root sysadmin to create an alternate rootvg on another set of disk drives. The alternate rootvg can be configured by restoring a mksysb image to it while AIX continues to run from the primary rootvg, or the primary rootvg can be "cloned" to the alternate rootvg, and updates and fixes can then be installed on the alternate rootvg while AIX continues to run.
• When the system administrator is ready, AIX can be rebooted from the alternate rootvg disks. Changes can be backed out by rebooting AIX from the original primary rootvg.
• In AIX V5.3, alt_disk_install has been replaced by:
  alt_disk_copy
  alt_disk_mksysb
  alt_rootvg_op
The alt_disk_install command will continue to ship as a wrapper to the new commands, but it will not support any new functions, flags, or features.
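A hedged sketch of the three replacement commands side by side; the disk names and the mksysb image path are illustrative assumptions:
# alt_disk_copy -d hdisk1                             (clone the running rootvg to hdisk1)
# alt_disk_mksysb -m /backups/host.mksysb -d hdisk1   (or restore a mksysb image to hdisk1 instead)
# alt_rootvg_op -X                                    (remove the altinst_rootvg definition)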
• AIX Alt_disk_install Examples
• Copy the current rootvg to an alternate disk. The following example clones the rootvg to hdisk1:
# alt_disk_copy -d hdisk1
• Copy the rootvg (on hdisk1) to hdisk0, and then apply the updates to hdisk0:
# alt_disk_copy -d hdisk0 -b update_all -l
• AIX Alt_disk_install Examples
• Copy the current rootvg to two alternate disks, assuming that hdisk2 and hdisk3 are the targets on which the copy should be placed:
# alt_disk_copy -d hdisk2 hdisk3 -O
• Note that the -O flag is required when "cloning" (when planning to boot the rootvg copy on another LPAR or server), but can be detrimental when making a copy that will be booted on the same LPAR or server.
• Before taking the target disks away from the existing AIX image, run:
# alt_rootvg_op -X
• If a rootvg copy has been made for use on the same LPAR/server as the original rootvg (without the -O flag on alt_disk_copy), System Management Services can be used to switch between the primary and backup AIX rootvgs by shutting AIX down, booting to SMS mode, and selecting the disks from which to boot.
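The switch can also be scripted rather than done through SMS; a minimal sketch, assuming the copy resides on hdisk1:
# bootlist -m normal hdisk1     (boot from the alternate rootvg on the next boot)
# shutdown -Fr                  (restart immediately)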
• AIX Multibos Features
• The multibos command (AIX 5.3 ML3) provides dual AIX boot from the same rootvg. One can run production on one boot image while installing, customizing, or updating the other.
• This is similar to AIX alt_disk_install, with one major difference: with alt_disk_install the boot images must reside on separate disks and separate rootvgs. The multibos capability allows both O/S images to reside on the same disk/rootvg.
  • MultiBOS (rootvg) Reboot
• AIX Multibos Features - continued
• The multibos command allows the root-level administrator to create multiple instances of AIX on the same rootvg.
• The multibos setup operation creates a standby Base Operating System (BOS) that boots from a distinct boot logical volume (BLV). This creates two bootable sets of BOS on a given rootvg. The administrator can boot from either instance of BOS by specifying the respective BLV as an argument to the bootlist command or by using system firmware boot operations.
• Two bootable instances of BOS can be maintained simultaneously. The instance of BOS associated with the booted BLV is referred to as the active BOS. The instance of BOS associated with the BLV that has not been booted is referred to as the standby BOS. Currently, only two instances of BOS are supported per rootvg.
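As a sketch of that bootlist invocation, using the default BLV names hd5 and bos_hd5 that appear later in this document:
# bootlist -m normal hdisk0 blv=bos_hd5 hdisk0 blv=hd5     (try the standby BLV first, fall back to the original)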
• AIX Multibos Features - continued
• The multibos command allows the administrator to access the standby BOS, install maintenance and technology levels on it, update it, and customize it, either during setup or in subsequent customization operations.
• Installing maintenance and technology updates on the standby BOS does not change system files on the active BOS. This allows for concurrent update of the standby BOS while the active BOS remains in production.
• AIX Multibos Features - continued
• The multibos command has the ability to copy or share logical volumes and file systems. By default, the BOS file systems (currently /, /usr, /var, and /opt) and the boot logical volume are copied. The administrator can make copies of additional BOS objects (using the -L flag); see the sketch below.
• All other file systems and logical volumes are shared between instances of BOS. Separate log device logical volumes (for example, those that are not contained within the file system) are not supported for copy and will be shared.
• The current rootvg must have enough space for each BOS object copy. BOS object copies are placed on the same disk or disks as the original.
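A hedged sketch of the -L usage, on the assumption that the flag takes a file listing the additional logical volumes to copy; the file name and LV name are illustrative:
# cat /tmp/extra_lvs
hd7
# multibos -Xs -L /tmp/extra_lvs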
• AIX Multibos Features - continued
• The total number of copied logical volumes cannot exceed 128.
• The total numbers of copied and shared logical volumes are subject to volume group limits.
• /etc/multibos contains multibos data and logs.
• The only supported method of backup and recovery with multibos is mksysb via CD, NIM, or tape. If the standby BOS was mounted during the creation of the mksysb, it is restored and synchronized on the first boot from the restored mksysb. However, if the standby BOS wasn't mounted during the creation of the mksysb backup, the synchronization on reboot will remove the unusable standby BOS.
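For reference, a minimal mksysb backup to tape; a sketch, where the tape device /dev/rmt0 is an assumption:
# multibos -Xm            (mount the standby BOS so it is captured and re-synced on restore)
# mksysb -i /dev/rmt0     (back up rootvg to tape, regenerating /image.data)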
• AIX Multibos Examples
• Preview the standby BOS setup operation:
# multibos -Xsp
• Set up the standby BOS:
# multibos -Xs
• Set up the standby BOS with an optional image.data file /tmp/image.dat and exclude list /tmp/exclude.lst:
# multibos -Xs -i /tmp/image.dat -e /tmp/exclude.lst
• AIX Multibos Examples - continued
• To set up the standby BOS and install additional software listed in the bundle file /tmp/bundle and located in the images source /images:
# multibos -Xs -b /tmp/bundle -l /images
• To execute a customization operation on the standby BOS with the update_all install option:
# multibos -Xac -l /images
• AIX Multibos Examples - continued
• To mount all standby BOS file systems:
# multibos -Xm
• To preview a standby BOS remove operation:
# multibos -RXp
• To remove the standby BOS:
# multibos -RX
• AIX Multibos Examples - continued
• Apply TL6 to the standby BOS. The TL6 lppsource is mounted from our Network Installation Manager (NIM) master. Perform a preview operation, then execute the actual update to the standby instance, and check the log file for any issues:
# mount nimsrv:/export/lpp_source/lpp_sourceaix530603 /mnt
# multibos -Xacp -l /mnt
# multibos -Xac -l /mnt
• AIX Multibos Examples - continued
• Back out of the update and return to the previous TL. Set the bootlist and verify that the BLV is set to the previous BOS instance (hd5):
# bootlist -m normal hdisk0 blv=hd5 hdisk0 blv=bos_hd5
# bootlist -m normal -o
hdisk0 blv=hd5
hdisk0 blv=bos_hd5
Now reboot the system and confirm that it is running at the previous TL.
• AIX Multibos Examples - continued *
# multibos -S
MULTIBOS> df
Filesystem        512-blocks     Free  %Used  Iused  %Iused  Mounted on
/dev/hd4             1966080  1198800    40%   3364      1%  /
/dev/hd2             3670016   299344    92%  42697     10%  /usr
...
/dev/hd3              262144   250776     5%     64      1%  /tmp
/dev/bos_hd4         1966080  1198800    40%   3364      1%  /bos_inst
/dev/bos_hd2         3670016   299344    92%  42697     10%  /bos_inst/usr
/dev/bos_hd9var       655360   594456    10%    674      1%  /bos_inst/var
/dev/bos_hd10opt      393216   123592    69%   2545      6%  /bos_inst/opt
MULTIBOS> exit     (exit from the multibos shell)
• AIX Multibos Examples - continued *
# cat /root/hosts.txt
host1
host2
host3
# export WCOLL=/root/hosts.txt
# dsh multibos -R
# dsh rm /etc/multibos/logs/op.alog
# dsh multibos -sXp
# dsh alog -of /etc/multibos/logs/op.alog
# dsh multibos -sX
# dsh mount nimmast:/export/lpp_source/lpp_sourceaix530603 /mnt
# dsh multibos -Xacp -l /mnt
# dsh multibos -Xac -l /mnt
# dsh alog -of /etc/multibos/logs/op.alog
# dsh umount /mnt
# dsh bootlist -m normal -o
# dsh shutdown -Fr
• AIX Check Boot Environment
• After the reboot, confirm the TL level:
# oslevel -r
• Verify which BLV the system booted from:
# bootinfo -v
• Features Compared

Feature              HP-UX DRD                    Solaris Live Upgrade      AIX Multibos
Licensing            N/A                          N/A                       N/A
Supported platforms  PA-RISC, IA-64               SPARC, x86-32, x86-64     32-bit POWER, 64-bit POWER *, PowerPC
Supported O/S        HP-UX 11.23, HP-UX 11.31     Solaris 2.6, 7, 8, 9, 10  AIX 5L V5.3 with the 5300-03 Recommended
                                                                            Maintenance package and later
Current product      DynRootDisk B.11.xx.A.3.4.y  Live Upgrade 2.0          Part of AIX 6.1
                     (xx is 23 or 31)
TUI                  Not supported                Supported                 Not supported
GUI                  Not supported                Not supported             Not supported
CLI                  Supported                    Supported                 Supported
• Features Compared - continued
• Add mirror disk to a clone:
  HP-UX: Supported directly via command: drd clone -x mirror_disk=
  Solaris: Not supported directly! Supported via SVM, ZFS, and VxVM RAID-1 setup only
  AIX Multibos: N/A
• Reboot commands:
  HP-UX: drd activate -x reboot=true, or standard Unix commands
  Solaris: Never use the reboot(1) or halt(1) commands. Instead, use "init 6" or shutdown(1)
  AIX Multibos: bootlist -m normal hdisk0 blv=bos_hd5, then shutdown -Fr or reboot -q
• Automated comparison of primary and alternate boot environments:
  HP-UX: Mostly manual process, based on: drd mount, cmp ..., diff ...
  Solaris: lucompare(1)
  AIX Multibos: Mostly manual process, based on: multibos -S, cmp ..., diff ...
• Features Compared - continued
• Mounting inactive images:
  HP-UX: a) "drd mount" does not support mounting on different directories; b) "drd mount" mounts file systems as /var/opt/drd/mnts/sysimage_00X
  Solaris: a) lumount(1) supports mounting on different directories; b) "lumount" mounts file systems as /.alt.configX
  AIX Multibos: multibos -S; it mounts file systems as /bos_inst/...
• Change size of any file systems during cloning:
  HP-UX: Not supported
  Solaris: Supported
  AIX Multibos: Supported **
• File system split:
  HP-UX: Supported *
  Solaris: Not supported
  AIX Multibos: Not supported
• Features Compared - continued
• Simple listing of clone file systems:
  HP-UX: drd mount, then bdf
  Solaris: Supported via the lufslist(1) command
  AIX Multibos: Not directly supported **
• Clone updates (re-sync):
  HP-UX: Supported via full clone recreation: drd clone -t= -x overwrite=true
  Solaris: Supported via the lumake(1) command
  AIX Multibos: Supported via flag "-c" *
• Merge file systems during cloning:
  HP-UX: Not supported yet
  Solaris: Supported
  AIX Multibos: Not supported
• Features Compared - continued
• Change file system type during cloning:
  HP-UX: Not supported
  Solaris: Supported. For example, SVM to ZFS migration
  AIX Multibos: Not supported
• Supported Volume Manager:
  HP-UX: LVM, VxVM
  Solaris: Solstice DiskSuite *, VxVM, ZFS **
  AIX Multibos: AIX LVM
• Virtualization support:
  HP-UX: nPar, vPar, Integrity VM
  Solaris: Solaris Zones ***, Logical Domain
  AIX Multibos: LPAR, Dynamic LPAR, Live Partition Mobility on POWER6, WPAR
• Full-disk copy during cloning:
  HP-UX: On Itanium servers, all partitions are created and EFI and HP-UX are copied. This release of DRD does not copy the HPSP
  Solaris: Supported
  AIX Multibos: Not supported
• Features Compared - continued
• Multiple target disks for cloning:
  HP-UX: Not supported
  Solaris: Supported
  AIX Multibos: Not supported
• Dry-run (preview) cloning:
  HP-UX: Supported
  Solaris: Supported
  AIX Multibos: Supported
• Swap shared:
  HP-UX: Primary swap is not shared; secondary swap can be shared
  Solaris: Yes, by default
  AIX Multibos: Yes, by default
• On-line cloning:
  HP-UX: Yes
  Solaris: Sun recommends halting all zones during lucreate or lumount operations, so cloning of Solaris zones is not truly an on-line process
  AIX Multibos: Yes
• Features Compared - continued
• Exclude files from cloning:
  HP-UX: Not supported yet *
  Solaris: Supported **
  AIX Multibos: Supported *****
• Include files during cloning:
  HP-UX: Not supported yet
  Solaris: Supported **
  AIX Multibos: Supported *****
• Simple method to remove clone:
  HP-UX: Not supported yet ***
  Solaris: Supported ****
  AIX Multibos: Supported ******
• Clone on the same physical disk (multiple BEs on the same disk):
  HP-UX: Not supported
  Solaris: Supported
  AIX Multibos: Supported