This document provides a summary of storage controller types and their functionality, firmware patches, and operating system connectivity. It discusses controllers from LSI, NVIDIA, Marvell, and ULi, covering their specifications, supported RAID levels, and installation instructions for connecting them to various operating systems such as Solaris, Linux, and Windows. Abbreviations and additional resources are also listed.
This document provides a support matrix for backup applications and IBM System Storage TS7650, TS7610, and P3000 tape libraries. It lists supported firmware versions and drivers for various backup applications on AIX, HP-UX, Linux, Solaris, and Windows operating systems. Notes provide additional details on driver requirements for features like IBM Path Failover.
The document provides instructions for configuring Linux on the Vortex86SX evaluation board. It describes enabling support for the Vortex86SX CPU, IDE controller, display, networking, USB, and sound hardware in the Linux kernel. It also provides details on required drivers and patches. The technical support contact information is given.
This document summarizes several attacks against platform firmware and secure boot. It describes attacks that modify the platform key in NVRAM to disable secure boot, modify the image verification policies to bypass signature checks, exploit confusion between PE and TE file formats to skip signature verification, and corrupt the "Setup" UEFI variable to potentially brick the system. The attacks demonstrate vulnerabilities in how some firmware implementations store and handle sensitive secure boot configuration data in non-volatile variables.
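The PE/TE confusion mentioned above comes down to how a loader classifies an image by its leading signature bytes: PE/COFF images start with the DOS "MZ" stub, while stripped Terse Executable (TE) images start with "VZ". A minimal sketch of such a classifier (illustrative only, not taken from any particular firmware) shows why a verification path that only covers the PE branch can be sidestepped:

```python
def image_format(header: bytes) -> str:
    """Classify a UEFI executable by its leading signature bytes.

    PE/COFF images begin with the DOS 'MZ' stub; stripped TE images
    begin with 'VZ'. A loader that runs Authenticode verification only
    on the PE path would silently skip checks for anything it classes
    as TE -- the confusion the attack exploits.
    """
    if header[:2] == b"MZ":
        return "PE"
    if header[:2] == b"VZ":
        return "TE"
    return "unknown"
```

Feeding the same bytes down a different dispatch branch is the whole trick: the payload does not need a valid signature, only a signature *prefix* that routes it past the check.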
[DEFCON 16] Bypassing pre-boot authentication passwords by instrumenting the... — Moabi.com
Pre-boot authentication software, in particular full hard disk encryption software, plays a key role in preventing information theft. In this paper, we present a new class of vulnerability affecting multiple high-value pre-boot authentication products, including the latest Microsoft disk encryption technology: Microsoft Vista's BitLocker with the TPM chip enabled. Because pre-boot authentication programmers commonly make wrong assumptions about the inner workings of the BIOS interrupts responsible for handling keyboard input, they typically use the BIOS API without flushing or initializing the BIOS internal keyboard buffer. Therefore, any user input, including plain-text passwords, remains in memory at a given physical location. In this article, we first present a detailed analysis of this new class of vulnerability and generic exploits for Windows and Unix platforms on x86 architectures. Unlike current academic research aimed at extracting information from RAM, our practical methodology does not require any physical access to the computer to extract plain-text passwords from physical memory. In a second part, we present how this information leakage, combined with use of the BIOS API without careful initialization of the BIOS keyboard buffer, can lead to a computer reboot without console access and full bypass of the pre-boot authentication PIN if an attacker has enough privileges to modify the bootloader. Other related work includes information leakage from CPU caches, reading physical memory via FireWire, and switching CPU modes.
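As a rough illustration of the leak described above: on x86 PCs the BIOS keeps its keyboard ring buffer in the BIOS Data Area, conventionally at physical address 0x41E, as 16 two-byte entries of (ASCII code, scan code). A tool with access to a physical-memory dump could recover unflushed keystrokes along these lines (a simplified sketch; real buffers wrap via head/tail pointers, which this ignores):

```python
BDA_KBD_BUFFER = 0x41E   # physical address of the BIOS keyboard ring buffer
BUFFER_LEN = 32          # 16 two-byte entries: (ASCII code, scan code)

def recover_keystrokes(memory_dump: bytes) -> str:
    """Pull printable ASCII codes out of the BIOS Data Area keyboard buffer.

    Each buffer entry is two bytes: the ASCII code followed by the scan
    code. If pre-boot software never flushes this buffer, typed passwords
    can linger here after boot.
    """
    buf = memory_dump[BDA_KBD_BUFFER:BDA_KBD_BUFFER + BUFFER_LEN]
    chars = buf[0::2]  # ASCII bytes sit at even offsets within the buffer
    return "".join(chr(b) for b in chars if 32 <= b < 127)
```

The point of the paper is precisely that this buffer survives into the running OS, so "physical access" is not required -- any path to a physical-memory read suffices.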
This document provides specifications for the M61PMV motherboard, including that it supports AMD Phenom, Athlon 64x2, Athlon 64, and Sempron processors on the Socket AM2+/AM2 socket with 2000 MT/s HyperTransport. It has dual channel DDR2 memory support up to 4GB, integrated NVIDIA GeForce 7025 graphics, and various expansion slots and ports. The document also lists utilities, drivers, BIOS files, and manuals available for download related to the M61PMV motherboard.
The document compares UEFI and traditional BIOS boot modes. UEFI addresses limitations of BIOS such as a 2TB partition size limit and limited flexibility in the boot process. UEFI uses a GPT partition scheme that supports larger drives, redundant partition tables, and booting by file path rather than fixed locations. It also provides a unified user interface for firmware configuration and allocates memory to device firmware on demand. Features like Secure Boot and booting from NVMe drives are only available via UEFI boot mode. Converting from BIOS to UEFI requires changing disk partitioning from MBR to GPT and enabling UEFI mode in the system BIOS.
BIOS and UEFI are types of firmware that control the boot process. BIOS uses the MBR partition table and boots by loading the MBR, then the partition boot sector. UEFI uses the GPT partition table and an EFI System Partition (ESP); its boot manager loads UEFI drivers and bootloaders. Secure Boot is a UEFI extension that verifies the signatures of boot components for security.
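The MBR-versus-GPT distinction above can be sketched from on-disk signatures: an MBR ends sector 0 with the bytes 0x55 0xAA, while a GPT disk carries the ASCII signature "EFI PART" at the start of the GPT header in LBA 1. A minimal detector (a sketch over raw 512-byte sectors, ignoring 4K-sector and hybrid-MBR cases):

```python
def partition_scheme(lba0: bytes, lba1: bytes) -> str:
    """Guess the partition scheme from the first two 512-byte sectors.

    A GPT disk keeps a protective MBR in LBA 0 and places the GPT header,
    which starts with the ASCII signature 'EFI PART', in LBA 1. Check GPT
    first, since the protective MBR also carries the 0x55AA signature.
    """
    if lba1[:8] == b"EFI PART":
        return "GPT"
    if lba0[510:512] == b"\x55\xaa":
        return "MBR"
    return "unknown"
```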
This document discusses using SR-IOV and KVM virtual machines on Debian to virtualize high-performance servers requiring low latency and high throughput networking. It describes configuring SR-IOV on the server's Ethernet cards through the BIOS. On Debian, it shows enabling SR-IOV drivers in the kernel, configuring virtual functions, and assigning them to virtual machines using libvirt with PCI device passthrough. VLAN tagging and MAC addresses must be configured separately on the host due to limitations of the Debian version used.
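On recent Linux kernels, the virtual-function configuration described above goes through a standard sysfs attribute on the physical function. A sketch of that interface (interface name is an example; this needs root and SR-IOV enabled in the BIOS, and the attribute path is the stock kernel one):

```python
def sriov_numvfs_path(iface: str) -> str:
    # sysfs attribute on the physical function that controls the VF count
    return f"/sys/class/net/{iface}/device/sriov_numvfs"

def enable_vfs(iface: str, count: int) -> None:
    """Create `count` virtual functions on `iface` (requires root).

    The kernel rejects changing a nonzero VF count directly, so reset
    it to 0 first when reconfiguring.
    """
    path = sriov_numvfs_path(iface)
    with open(path, "w") as f:
        f.write("0")
    with open(path, "w") as f:
        f.write(str(count))
```

The resulting VFs appear as ordinary PCI devices, which is what libvirt's PCI passthrough (`<hostdev>`) then hands to the guest.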
Plan de sesion_integrado_18_febrero_2011 -eng- — Alberto Vargas
The SLI system allows linking of multiple video cards to increase graphics processing power. The Crossfire system also allows multiple graphics cards but is AMD's technology. The motherboard being discussed uses the ATX form factor and has many ports and features including support for multiple graphics cards. It has specifications for the CPU, memory, audio, networking and expansion slots. The documentation provides steps for installing components like CPUs, cooling systems, memory modules, expansion cards and multiple graphics cards in SLI or Crossfire configurations. It also describes BIOS setup options and safety precautions.
The document discusses the Unified Extensible Firmware Interface (UEFI), a replacement for the older BIOS firmware. It aims to address limitations of BIOS such as its 16-bit architecture and non-graphical interface. UEFI uses the new GUID Partition Table scheme and supports 64-bit processors and long mode. It provides standardized interfaces for booting an operating system and improved performance over BIOS. Major operating systems, including Windows, Mac OS, and Linux, have implemented UEFI.
The document provides instructions for installing AIX 5.3, HACMP, Oracle 9i, and WebLogic 8.1 on IBM P510 servers with attached storage. It outlines the required hardware, including servers, storage arrays, and networking equipment. It then details the steps for hardware installation, disk array configuration, operating system installation, software package installation, system configuration, and volume group creation for database storage.
UEFI Spec Version 2.4 Facilitates Secure Update — insydesoftware
The document discusses new features in UEFI Spec Version 2.4 related to facilitating secure firmware updates. Key points include:
1) UEFI 2.4 defines a new capsule format for delivering firmware management protocol (FMP) updates that allows firmware components to be updated early in the pre-boot process.
2) The capsule format supports delivering multiple driver and image payloads.
3) UEFI 2.4 also defines delivering update capsules to the boot disk and having the firmware process them on restart, as well as leaving a variable with the processing status.
4) These new methods are meant to help securely update firmware in a more automated way compared to previous solutions like using EFI shell.
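The capsule delivery named in point 1 builds on the UEFI spec's EFI_CAPSULE_HEADER, which is a 16-byte GUID followed by three little-endian UINT32 fields (header size, flags, total image size). A minimal parser sketch (field layout per the spec; the sample values in the usage below are made up):

```python
import struct

# EFI_CAPSULE_HEADER: CapsuleGuid (16 bytes), then HeaderSize, Flags,
# and CapsuleImageSize as little-endian UINT32 fields.
CAPSULE_HEADER = struct.Struct("<16sIII")

def parse_capsule_header(blob: bytes) -> dict:
    """Unpack the fixed EFI_CAPSULE_HEADER from the start of a capsule."""
    guid, header_size, flags, image_size = CAPSULE_HEADER.unpack_from(blob)
    return {"guid": guid.hex(), "header_size": header_size,
            "flags": flags, "image_size": image_size}
```

The flags field is where a capsule asks the firmware to persist it across the restart described in point 3, so firmware-side tooling reads this header before anything else.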
5th Chapter of "Unified Communications with Elastix" Vol.1
(Version: Elastix 2.2)
We recommend reading the chapter along with the presentation.
http://elx.ec/chapter5
This document discusses the differences between BIOS and UEFI firmware interfaces that initialize hardware and boot operating systems on computers. BIOS has been used for over 25 years but has limitations. UEFI was created in 2005 to replace BIOS and overcome its limitations. UEFI supports larger disk sizes and partitions, a graphical interface, and can be programmed in C/C++, while BIOS is programmed in hex/assembly and has a non-graphical interface. The document recommends writing a program to test if a computer is booted using the legacy BIOS or newer UEFI firmware interface.
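The test program the document recommends is short on Linux, because the kernel creates `/sys/firmware/efi` only when it was started by UEFI firmware. A sketch (the directory path is the standard Linux one; absence implies legacy BIOS or a CSM boot):

```python
import os

def detect_boot_mode(efi_dir: str = "/sys/firmware/efi") -> str:
    """Report how a Linux machine was booted.

    The kernel exposes /sys/firmware/efi only on UEFI boots, so its
    absence implies legacy BIOS (or UEFI's BIOS-compatibility CSM).
    """
    return "UEFI" if os.path.isdir(efi_dir) else "BIOS"
```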
pfSense is open-source firewall software that provides features similar to commercial firewalls. It can be installed on Hacom hardware, which typically has Realtek or Intel network interfaces. To set up pfSense, connect a monitor and keyboard during initial boot to configure the network interfaces. The web administration interface can then be accessed through the LAN IP address. Firmware updates are also described. Technical support is available from Hacom.
This document discusses router hacking using open source software like OpenWRT. It provides an overview of OpenWRT, describing its supported hardware platforms and communities. It discusses building OpenWRT packages and images, and supported router platforms such as the Linksys WRT54G, ASUS WL-700gE, Meraki Mini, and LaFonera which are readily available in Taiwan. It warns of risks when modifying router firmware.
HPE ProLiant DL180 Generation9 (Gen9) — Sourav Dash
The HPE ProLiant DL180 Gen9 is a 2U server designed for SMBs and enterprises needing balanced compute and storage capabilities. It offers expandability through up to two processors, sixteen DIMM slots, three PCIe slots, and sixteen hard drive bays. The server provides reliability, manageability, and flexibility through features like redundant power supply and fan options, iLO management, and various processor, memory, and storage configurations.
The document discusses BIOS and UEFI firmware. It explains that BIOS initializes the computer's hardware and allows booting an operating system, while UEFI is a newer standard that supports larger drives and partitions. The document outlines some key advantages of UEFI, such as supporting drives over 2TB and allowing booting from non-hard-drive media. It also discusses UEFI BIOS security features like encryption, theft protection, and Secure Boot verification of software.
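The 2TB figure above follows directly from the MBR's layout: partition start and length are 32-bit LBA fields, and with conventional 512-byte sectors that caps addressable capacity at 2 TiB:

```python
SECTOR_BYTES = 512
LBA_BITS = 32  # MBR partition entries store start and length as 32-bit LBAs

# Largest capacity an MBR partition entry can address:
mbr_max_bytes = (2 ** LBA_BITS) * SECTOR_BYTES
mbr_max_tib = mbr_max_bytes / 2 ** 40  # = 2.0 TiB
```

GPT sidesteps this by using 64-bit LBA fields, which is why drives over 2TB require GPT (and hence, for booting, UEFI).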
The document provides an overview and introduction to the X64 family of systems from AMD and Intel platforms, including workstations, servers, and blades. It discusses the system specifications, components, BIOS, service processors, and connectivity options. Useful links are also provided for additional information on products, technologies, and a lab for connecting to remote systems.
QNX is a commercial real-time operating system used primarily in embedded systems. It was developed in the 1980s and was acquired by BlackBerry in 2010. QNX uses a microkernel architecture and has been used in vehicles, mobile phones, and other devices. It provides features like distributed processing, multitasking, a file system manager, and an improved graphical user interface. QNX is installed using installation media and guides the user through setting up partitions and copying files to the hard disk.
The document summarizes the 6 main stages of the Linux boot process:
1) The BIOS performs checks and loads the master boot record (MBR) from the hard drive.
2) The MBR loads the GRUB boot loader.
3) GRUB has two stages - stage 1 in the MBR points to stage 2, which loads the GRUB configuration file and displays the boot menu.
4) The GRUB configuration file specifies the default or chosen kernel to load from available options.
5) The kernel is loaded and starts initial processes before handing over to userspace.
6) Linux shutdown uses commands to notify users and block logins before signaling processes and powering off.
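Stage 4 above can be illustrated with a toy parser for a GRUB Legacy configuration file, where `default=N` selects the N-th `title` entry and that entry's `kernel` line names the image to load (a sketch; the sample format is simplified and the kernel paths below are invented):

```python
def default_kernel(grub_conf: str) -> str:
    """Return the kernel line of the default entry in a legacy grub.conf.

    'title' lines open boot entries; 'default=N' picks the N-th entry
    (0-based), whose 'kernel' line is what GRUB stage 2 loads.
    """
    default = 0
    entries, current = [], None
    for line in grub_conf.splitlines():
        line = line.strip()
        if line.startswith("default="):
            default = int(line.split("=", 1)[1])
        elif line.startswith("title"):
            current = []
            entries.append(current)
        elif line.startswith("kernel") and current is not None:
            current.append(line)
    if default < len(entries) and entries[default]:
        return entries[default][0]
    return ""
```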
SinoV-AP1000 Asterisk IPPBX user manual — Cherry Jiang
Since 2004, the SinoVoIP factory has focused on Asterisk cards, GSM gateways, open-source IP PBX systems, and PoE equipment.
For any support, just message sinovoip@foxmail.com.
We will fully support you in this field.
Thank you!
Cherry
The document provides installation and troubleshooting information for the SimCity 2000 demo, including requirements for video cards, mouse drivers, memory issues, and solutions for common error messages and crashes. It details requirements such as a Microsoft compatible mouse and at least 4MB of RAM, and provides guidance for specific video cards like ATI, Diamond Stealth, and Trident. Troubleshooting tips include creating a boot disk to avoid conflicts with memory managers, and loading drivers or patches for issues with cards like ATI and Diamond Stealth.
Learn about IBM System Storage TS7650 / TS7650G / TS7610 ProtecTIER Deduplication Appliance / Gateway / Appliance Express. For more information on Storage Systems, visit http://ibm.co/LIg7gk.
The Unofficial VCAP / VCP VMware Study Guide — Veeam Software
Veeam® is happy to provide the VMware community with new, unofficial study guides prepared by VMware certified professionals Jason Langer and Josh Coen.
Free VCP5-DCV Study Guide
In this 136-page study guide Jason and Josh cover all seven of the exam blueprint sections to help prepare you for the VCP exam.
Free VCAP5-DCA Study Guide
For those who currently hold their VCP certification and want to take it up a notch, Jason and Josh have you covered with the 248-page VCAP5-DCA study guide. Using this study guide along with hands-on lab time will help you in the three-and-a-half-hour, lab-based VCAP5-DCA exam.
With the HPE ProLiant DL325 Gen10 server, Hewlett Packard Enterprise is extending the world's most secure industry-standard server product families. This secure and versatile single-socket (1P) 1U AMD EPYC™-based platform offers an exceptional balance of processor, memory, and I/O for virtualization and data-intensive workloads. With up to 32 cores, up to 16 DIMMs, 2 TB of memory capacity, and support for up to 10 NVMe drives, this server delivers 2P performance with 1P economics. This datasheet includes features, port descriptions, a configuration guide, and specifications for this series.
VCS 6.0 requires Solaris 11 update 1 and no longer supports several older features: configuration wizards, agents for campus clusters, NFS locks, service group heartbeats, SAN volumes, and VRTSWebApp. One CPU and 256 MB of RAM are required, with a recommended 604 MB of disk space for /opt. 64-bit Solaris 11 is needed, and the Sybase and DB2 agents are not supported in 6.0 PR1.
The JetStor XF2026D is a high density all-flash array solution with a 2U 26-bay form factor. It uses dual redundant hardware and the XEVO operating system to achieve 99.9999% availability. The XEVO OS provides various features like dashboard monitoring, SSD analysis, easy deployment, cloning, replication, and reporting. The XF2026D is suitable for performance-intensive applications like databases and virtualization due to its high throughput and low latency capabilities.
FreedomEV is a project that aims to enable full consumer control over electric vehicles by obtaining root access to a Tesla Model X. It introduces a USB stick containing Ubuntu that runs chrooted to provide additional functionality while preserving the manufacturer's software. A dynamic web interface is used to configure FreedomEV apps, which bundle car functions to ensure proper activation and deactivation. While obtaining root access directly from Tesla is challenging, workarounds include finding someone with an existing exploit or asking a service technician. Future goals include improving the hotspot functionality and adding more customization apps.
VDCF is a management tool for virtualizing and monitoring Solaris environments. It allows centralized installation, operation, migration, monitoring, security, hardening and disaster recovery of Solaris zones, LDoms, and bare metal servers. VDCF provides simplicity, standardization, and high availability for private clouds. It has been in production use since 2006 to virtualize and manage Solaris environments.
Symantec Endpoint Encryption - Proof Of Concept Document — Iftikhar Ali Iqbal
The document is to be used as a POC template for the Drive Encryption part in Symantec Endpoint Encryption Powered by PGP. Please make sure that the latest information and platform support is used.
This document discusses iWave Systems' products and services related to embedded software development. It describes iWave's expertise in developing board support packages (BSPs) for various operating systems including Windows Embedded Compact 7, Embedded Linux, Android, and others. It also lists iWave's driver development experience and capabilities across domains like storage, display, multimedia, wireless technologies and more. Product details are provided for some of iWave's BSP offerings for Freescale platforms like i.MX6, i.MX53 and Sabre boards.
This document provides instructions for installing, securing, and maintaining FreeBSD servers. It discusses pre-installation planning including partitioning, software selection, and kernel customization. Post-installation tasks covered include rebuilding the operating system to incorporate updates, installing software via packages and ports, and preparing for automated upgrades. The goal is to provide a secure, optimized system tailored to the server's purpose through careful configuration and removal of unnecessary components.
The SLI system allows linking multiple video cards to increase graphics processing power. The Crossfire system similarly allows up to four GPUs in one computer. The supercomputer motherboard has an ATX form factor and supports Intel Core i7 processors, triple channel DDR3 memory, and multiple PCIe slots. It has various audio, LAN, and rear I/O features.
This document provides a quick reference guide for computer technicians with useful DOS commands, important data locations, common router/modem login details, IP addresses to test connectivity, BIOS beep codes, and links to diagnostic tools and driver/manual repositories. It covers commands for networking, file management, and system information. Important locations are listed for email, address books, documents, and accounting software databases. Default credentials are given for common router models from Linksys, Netgear, and D-Link.
This document provides instructions for installing a mainboard. It identifies the main components of the mainboard and explains how to install the processor, memory modules, and install the mainboard into a chassis. Jumpers and switches must be set correctly before installation. Optional devices such as expansion cards can then be installed along with making connections to onboard ports and headers.
Free radius billing server with practical vpn exmapleChanaka Lasantha
This document provides instructions for setting up a total site-to-site Linux-based OpenVPN solution with dynamic DNS (DDNS) in 3 pages. It includes steps to install and configure a DDNS client, FreeRADIUS server, MySQL database, OpenVPN server, firewall rules, and a web interface for managing the FreeRADIUS server. The full document contains technical details for installing packages, editing configuration files, testing the setup, and securing the system.
This is a presentation that looks ta some of the Linux commands you could use to identify the hardware on your system. This can be useful for troubleshooting, or just for figuring out which motherboard is in which box.
This document provides instructions for installing Snort 2.8.5 and Snort Report 1.3.1 on an Ubuntu 8.04 LTS system to monitor network traffic and view intrusion detection alerts. It outlines downloading and installing the Ubuntu operating system, Snort Report dependencies like MySQL and PHP, compiling and configuring Snort from source, and basic network topology. Installing all components results in an intrusion detection system that sniffs traffic on one network interface and allows administration and alert viewing on another.
The document provides details about various components of a computer system including motherboard components, RAM types, CPU types, and BIOS settings. It discusses the purposes and properties of motherboard components such as CPU slot, RAM slots, expansion card slots, and ports. It also compares different RAM types such as SRAM, DRAM, SDRAM, DDR SDRAM, DDR2 SDRAM, and DDR3 SDRAM in terms of features and specifications. The document provides information about configuring and applying BIOS settings to change boot options, set passwords, and configure hardware settings.
Building PoC ready ODM Platforms with Arm SystemReady v5.2.pdfPaul Yang
The purpose of this technical talk with the demo is to show ODMs, OEMs, and ISVs how to leverage SystemReady Lab, showcase the use-case based on the virtualization platform for the edge, and deploy open-source tools that set up ODMs to develop their Arm platforms.
X64 Workshop Linux Information GatheringAero Plane
The document discusses methods for gathering data from Linux systems to troubleshoot issues, including collecting log files and configuration information using tools like sysreport, siga, and Linux Explorer. It also covers analyzing the collected data, capturing system core dumps, using the Linux SysRq feature, and troubleshooting hanging systems. Advanced topics like crash dump analysis and the crash utility are also briefly outlined.
The document provides an overview of I/O architectures including PCI, PCI-X and PCI-e. It discusses the bus architectures, speeds and loading of PCI and PCI-X. It describes the different transaction types (programmed I/O, DMA, peer-to-peer) and protocols (retry, disconnect). It also covers features of PCI-e such as lanes, bandwidth and link initialization. Finally, it mentions where to find information on current issues and links to specifications.
The document discusses information gathering from system components like the BIOS, IPMI, and sensors. It provides an overview of BIOS execution stages, IPMI architecture and commands, and how to view sensor data using entities and thresholds. IPMI commands allow viewing the sensor data repository, system event log, and field replaceable unit information.
The document discusses managing hardware on X64 systems including AMD CPU types, system upgrades involving CPUs and memory, identifying M2 and non-M2 systems, updating ILOM and FRUID data, and current issues. It provides details on AMD CPU architectures, models, sockets and steps. It also outlines procedures for identifying M2 systems, updating firmware, replacing components, and correcting FRUID information.
The document discusses CPU and memory architecture, error handling, and troubleshooting for Opteron processors. It covers cache and memory organization, error reporting banks, correctable and uncorrectable error types, machine check exceptions, memory addressing including interleaving and the memory hole, and provides examples of error messages from Linux and Solaris systems.
This document provides an overview of various diagnostic tools available for Sun servers, including:
1. SP Diags (stinger) and spdiag Tool for running diagnostics from the Service Processor.
2. SUNvts, PC Check, and HDT for running comprehensive hardware tests and debugging.
3. CSTH, HERD, EDAC, and mcelog for monitoring hardware errors and machine checks.
Slide 2 – Agenda
1 – What is not covered by this TOI
2 – Controller types and functionality
3 – Controller firmware and patches
4 – Operating system connectivity
5 – Disk size and geometry
6 – RAID levels, building and recovery
7 – iSCSI concepts
8 – iSCSI requirements and building
9 – Abbreviations, issues & useful links
Slide 45 – Building RAID on LSI1020/1030
1. Select Controller
2. Select RAID Properties
3. Add Primary Disk
4. Add Members
5. Exit Configuration
6. Disk Now Resyncing
Editor's Notes
Slide 34 – Sun SAS/SATA Platforms Now that you understand the technology, let's talk about the implementation of SAS and SATA in the Sun product range. As this table shows, many platforms share a common or similar SAS or SATA controller. This is deliberate and makes driver development and maintenance more manageable. From the left, the columns are: product marketing name, internal product code, device storage type, controller, RAID levels supported, and drive form factor. DISCLAIMER: Some platforms are still under development, so specifications may have changed since the platform documentation was created. Blade servers, for example, change design very frequently due to market requirements. At the bottom of the slide, the legend gives the marketing name and ASIC name for the Nvidia-class MCPs, otherwise known as Media Communications Processors. Media Communications Processors are the central bus generators for Nvidia chipsets; these extended bridge chips generate most platform buses, including the SATA controller. What is important to note with the LSI controllers is the final letter, or more precisely the lack of one. The PCI Express version of the controller typically ends with an "e"; the PCI and PCI-X version of the HBA has no suffix but is more commonly referred to as the "x" version.
Slide 35 – LSI 1064x The LSI 1064, or 1064x, controller is the most common among x64 and SPARC platforms. The 1064 is a member of the MPT Fusion family of HBAs, first seen at Sun in the v20z and v440 platforms: the v20z had a single-bus LSI 1020 controller and the v440 a dual-channel LSI 1030 controller. The MPT Fusion range of HBAs fuses an ARM-compliant processor with memory and the physical disk interface; ARM-compliant processors are commonly found in PDAs, cell phones and set-top boxes around the home. There are two variants of the 4-port LSI 1064 that Sun uses: the 1064 and the 1064e. The 1064 usually sits on a parallel PCI-X bus, though some implementations use the HBA on a standard PCI bus; the 1064e is mounted on a serial PCI Express bus. Time to market of the LSI 1064e meant that most products used the 1064 on a PCI-X bus, due to a few bugs in the PCI Express version that were not resolved until late 2005. Products like the T2000 Ontario had to use a 1064 on a PCI-X card while the PCI Express version was being fixed. Here is an overview of the LSI 1064 specifications.
Slide 36 – LSI 1064e The LSI 1064e is similar to the 1064 but interfaces with the host computer using the serial PCI Express protocol. The 1064e can be connected to 1, 4 or 8 PCI Express lanes from the NVIDIA or NEC PCI Express bridge-chip bus generator. Speeds and the model number of the ARM CPU differ slightly, but overall performance is similar. The PCI Express implementation is point-to-point, which makes up in performance for the higher-latency serial bus link. It is important to apply firmware updates when available, as they add recognition of new disks and their capacities as well as generic bug fixes.
Slide 39 – LSI 1068x and 1068e The LSI 1068 variant of the LSI HBA is basically a 1064 controller with 8 PHYs rather than 4, in the form of two transport modules. The core-logic ARM CPU is the same as in the 1064 controller, but the 1068 adds a second 4-port transport module to the mix. Specifications are similar for both the parallel PCI-X and serial PCI Express versions. As before, the v440 family of platforms is first to implement this new HBA, with the v445 server. raidctl is still used to create and administer RAID arrays, but remember to patch raidctl: the base operating system's revision of raidctl often does not support the latest LSI ASIC.
Slide 40 – LSI 1078x and 1078e The LSI 1078 HBA is marketed as a ROC design, meaning RAID on Chip. This controller is an option on the v445 platform and adds new support for RAID 5. System administrators using this controller will have to use raidctl with the switch -r 5 to generate a RAID 5 disk array.
Slide 37 – Solaris Patches Although the LSI SAS controllers are supported on Solaris 10 and some Solaris 9 platforms, LSI MPT Fusion patches and firmware exist for Solaris 8 as well, because the v440 supports Solaris 8. Solaris 9 versions of the x64 and SPARC patches are available on SunSolve, and an ever-increasing number of new platforms are being qualified for Solaris 9 due to customer pressure stemming from slow migration to Solaris 10. The LSI SAS and SATA drivers have now become part of the jumbo kernel update for Solaris 10 x64; SPARC platform patches are expected to follow. The above patch revisions were correct at the time the presentation was created and may have been incremented since.
Slide 41 – Nvidia NF2050 and 2200 The Nvidia HBA is not actually a single ASIC. The SATA controller is built into the Nforce 2200 Media Communications Processor, which is also responsible for generating the PCI, PCIe, USB and legacy buses. The Nforce 2050 companion chip is similar in design to the 2200 but with reduced functionality; it provides another 4 PHYs and a fixed PCI Express lane configuration. RAID 0 and 1 are available with this HBA; however, the RAID levels it provides are more of a software implementation than a hardware RAID solution. When the NVRAID BIOS is enabled, the RAID array looks for and executes a special boot sector that includes detailed disk-member information. This boot block contains the RAID configuration, which is later read by the special Nvidia storage driver. Without the storage driver, RAID will not work correctly on the platform and the individual disks will be seen rather than the group array.
Slide 42 – Nvidia NF3050/3400 The Nforce 3400 Media Communications Processor and its companion, the Nforce 3050 I/O, are the new chipset found in Opteron AM2 and Socket F platforms. The chips contain an updated feature set and more functionality than the original NF2200 and NF2050 MCPs. RAID 5 is added as an array option for this controller.
Slide 43 – Marvell 88SX6081 The Marvell 88SX6081 controller is a low-cost ASIC built into the Thumper x4500 platform. The x4500 was originally earmarked to use the LSI 1068; however, time to market and the cost of six full SAS/SATA ASICs meant that the SATA-only Marvell controller was a better choice for Thumper. This SATA controller does not incorporate any hardware RAID functions; it was selected because Thumper uses ZFS for RAID functions.
Slide 44 – Uli M1575 The Uli M1575 PCI Express bridge is similar in design to the Nvidia Nforce 2200 MCP. This low-cost bridge chip comes with 4 SATA 2 ports and 2 IDE ports, as well as a host of other buses. RAID levels include 0, 1, 0+1 and 5; however, Sun does not use the RAID features of this controller. The Netra CT 900 platform's CP3060 Montoya blade uses this controller as a low-cost alternative to adding a separate LSI 1064 controller, since the Uli already contains SATA logic.
Slide 45 – Solaris SATA Driver Solaris 10 Update 2 included SATA driver 1.3, which replaces the traditional ata driver for connecting to SATA disks. Improvements include correct device-number addressing, improved DMA, support for SATA features such as native command queuing, and support for the full SATA 2 specification.
Slide 46 – Solaris "raidctl" Usage As you can see from this slide, the new raidctl command includes support for a new switch, -r, which lets the user define the RAID level to be created. Depending on platform specifics, -r 0 creates a stripe, -r 1 creates a mirror, and -r 5 creates a distributed-parity array. HBA firmware can be upgraded with the -F switch followed by the firmware file and location.
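The switch combinations above can be sketched as a small helper that composes raidctl command lines. This is an illustration only: the disk and file names are hypothetical, and the exact raidctl syntax varies by Solaris release and controller, so check the raidctl man page on the target system.

```python
def raidctl_create_cmd(level: int, disks: list[str]) -> list[str]:
    """Compose `raidctl -c -r <level> <disks...>`: -c creates a volume,
    -r selects the RAID level (0 = stripe, 1 = mirror, 5 = parity)."""
    if level not in (0, 1, 5):
        raise ValueError("this controller family supports RAID 0, 1 and 5 only")
    return ["raidctl", "-c", "-r", str(level), *disks]

def raidctl_flash_cmd(firmware_file: str, controller: str) -> list[str]:
    """Compose `raidctl -F <file> <controller>` to update HBA firmware."""
    return ["raidctl", "-F", firmware_file, controller]

# Hypothetical disk names; on a live system the command would be
# executed via subprocess rather than just printed.
mirror = raidctl_create_cmd(1, ["c0t0d0", "c0t1d0"])
```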
iSCSI – The Internet Small Computer Systems Interface (iSCSI) protocol defines the rules and processes for transmitting and receiving block-storage data over TCP/IP networks by encapsulating SCSI commands into TCP and transporting them over the network via IP. iSCSI describes:
* a transport protocol for SCSI that operates on top of TCP
* a new mechanism for encapsulating SCSI commands on an IP network
* a protocol for a new generation of data storage systems that natively use TCP/IP
The architecture of pure SCSI is based on the client/server model. A client (for example, a server or workstation) initiates requests to read or write data on a target (for example, a data storage system). Commands sent by the client and processed by the server are placed in a Command Descriptor Block (CDB); the server executes the command and signals completion with a special alert. Encapsulation and reliable delivery of CDB transactions between initiators and targets over the TCP/IP network is the main function of iSCSI, which must be carried out over a medium untypical for SCSI: the potentially unreliable medium of IP networks. The diagram depicts the iSCSI protocol layers, giving an idea of the encapsulation order of SCSI commands for delivery over a physical carrier. The iSCSI protocol controls the data-block transfer and confirms that I/O operations are truly complete; this, in turn, is provided via one or more TCP connections.
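As a concrete, deliberately simplified illustration of CDB encapsulation, the sketch below packs a SCSI READ(10) CDB and wraps it in a toy iSCSI-style header. A real iSCSI PDU begins with a 48-byte Basic Header Segment; the short header here is an invention for clarity, not the wire format.

```python
import struct

def read10_cdb(lba: int, blocks: int) -> bytes:
    """Build a 10-byte SCSI READ(10) CDB: opcode 0x28, 4-byte LBA,
    2-byte transfer length in blocks."""
    return struct.pack(">BBIBHB", 0x28, 0x00, lba, 0x00, blocks, 0x00)

def encapsulate(cdb: bytes, task_tag: int) -> bytes:
    """Wrap a CDB in a simplified iSCSI-style header (NOT the real
    48-byte BHS): opcode 0x01 = SCSI Command, reserved byte, CDB
    length, initiator task tag. The CDB field is padded to 16 bytes,
    as in real iSCSI PDUs."""
    header = struct.pack(">BBHI", 0x01, 0x00, len(cdb), task_tag)
    return header + cdb.ljust(16, b"\x00")

# Read 8 blocks starting at LBA 2048; the resulting PDU would be
# handed to a TCP socket for delivery to the target.
pdu = encapsulate(read10_cdb(lba=2048, blocks=8), task_tag=1)
```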
Benefits of IP storage
* IP storage leverages the large installed base of Ethernet/TCP-IP networks and enables storage to be accessed over LAN, MAN or WAN environments without altering storage applications.
* It lets IT managers use the existing Ethernet/IP knowledge base and management tools.
* It provides for consolidation of data storage systems: data backup, server clustering, replication, and disaster recovery.
* To transfer data to storage devices with the iSCSI interface, it is possible to use not only the media, switches and routers of existing LAN/WANs but also ordinary network cards on the client side.
* The concept of building a worldwide storage area network fits well with the development of modern IP storage technologies.
* It maximizes storage resources, making them available to more applications.
* Existing storage applications (backup, disaster recovery and mirroring) can be used without modification.
* IP-based storage networks can be managed with existing tools and IT expertise.
How iSCSI works iSCSI defines the rules and processes for transmitting and receiving block-storage data over TCP/IP networks. At the physical layer, iSCSI supports a Gigabit Ethernet interface, so systems with iSCSI interfaces can be directly connected to standard Gigabit Ethernet switches and/or IP routers. The iSCSI protocol sits above the physical and data-link layers and interfaces to the operating system's standard SCSI access-method command set. iSCSI enables SCSI-3 commands to be encapsulated in TCP/IP packets and delivered reliably over IP networks. iSCSI can be supported over any physical medium that supports TCP/IP as a transport, but today's implementations run on Gigabit Ethernet. The iSCSI protocol runs on both the host initiator and the receiving target device. It can run in software over a standard Gigabit Ethernet network interface card (NIC), or be optimized in hardware for better performance on an iSCSI host bus adapter (HBA). iSCSI also enables access to block-level storage residing on Fibre Channel SANs over an IP network, via iSCSI-to-Fibre-Channel gateways such as storage routers and switches. In the diagram, each server, workstation and storage device supports the Ethernet interface and an iSCSI protocol stack; IP routers and Ethernet switches provide the network connections.
Limitations of iSCSI
* In IP, packets are delivered without a strict ordering, and the protocol is also in charge of data recovery, which takes more resources. In SCSI, as a channel interface, all packets must be delivered one after another without delay, and a breach of ordering may result in data loss. iSCSI solves this problem to some degree by requiring a longer packet header; the additional header information speeds up packet reassembly considerably.
* Software iSCSI places considerable demands on the client's CPU. According to the developers, a software iSCSI implementation can reach Gigabit Ethernet data rates only at a significant (around 100%) CPU load, so it is recommended to use special network cards that off-load TCP stack processing from the CPU.
* Latency: although many techniques have been developed to reduce the delays involved in processing IP packets, iSCSI is positioned for mid-range systems.
Address and Naming Conventions Since iSCSI devices are participants in an IP network, they have individual Network Entities. Each Network Entity can have one or several iSCSI nodes. An iSCSI node is an identifier of a SCSI device (within a Network Entity) available through the network. Each iSCSI node has a unique iSCSI name (up to 223 bytes), formed according to rules adopted for Internet nodes - for example, iqn.2001-04.com.ustar:storage.itdepartment.161. Such a name is easy to read and can be processed by the Domain Name System (DNS). An iSCSI name correctly identifies an iSCSI device regardless of its physical location; at the same time, when actually transferring data between devices, it is more convenient to use the combination of an IP address and a TCP port provided by a Network Portal. In addition to iSCSI names, the protocol supports aliases, which appear in administration systems to aid identification and management by system administrators.
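A minimal sketch of the iqn naming convention mentioned above: the regular expression below captures the rough shape of an iqn-format name (date stamp, reversed domain of the naming authority, optional locally assigned string) and enforces the 223-byte length cap from RFC 3720. The example name is illustrative, and the pattern is deliberately loose rather than a full validator.

```python
import re

# Rough shape of an iSCSI Qualified Name:
# iqn.<yyyy-mm>.<reversed domain>[:<locally assigned string>]
IQN_RE = re.compile(r"^iqn\.(\d{4})-(\d{2})\.([a-z0-9][a-z0-9.-]*)(?::(.+))?$")

def parse_iqn(name: str) -> dict:
    """Split an iqn-format iSCSI name into its parts; raises on bad input."""
    if len(name.encode()) > 223:        # RFC 3720 caps iSCSI names at 223 bytes
        raise ValueError("iSCSI name too long")
    m = IQN_RE.match(name)
    if not m:
        raise ValueError("not an iqn-format name")
    year, month, domain, local = m.groups()
    return {"date": f"{year}-{month}", "authority": domain, "local": local}

print(parse_iqn("iqn.2001-04.com.ustar:storage.itdepartment.161"))
```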
Session Management An iSCSI session consists of a Login Phase and a Full Feature Phase, and it is terminated with a special logout command. The iSCSI Login Phase is analogous to the Fibre Channel Port Login process (PLOGI): it is used to negotiate various parameters between the two network entities and to confirm the initiator's access rights. If the Login Phase completes successfully, the target confirms the login to the initiator; otherwise the login is not confirmed and the TCP connection is closed. Once the login is confirmed, the iSCSI session moves to the Full Feature Phase. If more than one TCP connection has been established, iSCSI requires that each command/response pair flow over a single TCP connection, so each individual read or write command is carried out without having to trace requests across different flows. Different transactions, however, can be delivered over different TCP connections within one session. At the end of a transaction the initiator sends or receives the last data, and the target sends a response confirming that the data was transferred successfully. The iSCSI logout command completes a session; it carries information about the reason for termination, and after a connection error it can also indicate which connection should be dropped, in order to close troublesome TCP connections.
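During the Login Phase, the parameters mentioned above are negotiated as plain key=value text pairs carried in the data segments of Login PDUs, each pair terminated by a NUL byte (RFC 3720). A small encode/decode sketch, with illustrative parameter values:

```python
def encode_login_keys(pairs: dict) -> bytes:
    """Encode negotiation parameters as the NUL-terminated key=value
    text carried in iSCSI Login Request data segments."""
    return b"".join(f"{k}={v}".encode() + b"\x00" for k, v in pairs.items())

def decode_login_keys(data: bytes) -> dict:
    """Inverse of encode_login_keys: split on NUL and re-form the pairs."""
    out = {}
    for item in data.split(b"\x00"):
        if item:
            k, _, v = item.partition(b"=")
            out[k.decode()] = v.decode()
    return out

request = encode_login_keys({
    "InitiatorName": "iqn.2001-04.com.example:host1",  # illustrative name
    "SessionType": "Normal",
    "HeaderDigest": "None,CRC32C",                     # an offer; the target picks one
    "MaxRecvDataSegmentLength": "8192",
})
assert decode_login_keys(request)["SessionType"] == "Normal"
```

Keys like HeaderDigest are offered as ordered lists; the target's reply selects one value, and the session proceeds with the agreed settings.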
Error Handling Because of the relatively high probability of delivery errors in some IP networks, especially WANs where iSCSI may operate, the protocol provides a number of error-handling measures. For error handling and recovery to work correctly, both the initiator and the target must be able to buffer commands until they are acknowledged. Each endpoint must be able to selectively recover a lost or damaged PDU within a transaction in order to resume data transfer. The hierarchy of error handling and recovery in iSCSI is as follows: 1. At the lowest level, an error is detected and data recovered at the SCSI task level, for example by retransmitting a lost or damaged PDU. 2. At the next level, the TCP connection carrying a SCSI task may fail, in which case an attempt is made to recover the connection. 3. Finally, the iSCSI session itself can fail. Session termination and recovery is usually not required if recovery works correctly at the other levels; otherwise, all TCP connections must be closed, all tasks and outstanding SCSI commands completed, and the session restarted through a repeated login.
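In RFC 3720 this hierarchy surfaces as the negotiated ErrorRecoveryLevel (0-2), where each level includes the capabilities of the levels below it. The sketch below maps failure classes to the cheapest recovery a given level allows; it is an illustration of the hierarchy described above, not real state-machine code.

```python
from enum import IntEnum

class ErrorRecoveryLevel(IntEnum):
    """Negotiated recovery capability; higher levels include lower ones."""
    SESSION = 0      # only a full session restart (re-login) is possible
    DIGEST = 1       # plus per-PDU recovery: retransmit lost/damaged PDUs
    CONNECTION = 2   # plus moving tasks off a failed TCP connection

def recovery_action(level: ErrorRecoveryLevel, failure: str) -> str:
    """Pick the cheapest recovery the negotiated level permits."""
    if failure == "pdu" and level >= ErrorRecoveryLevel.DIGEST:
        return "retransmit PDU within the task"
    if failure == "connection" and level >= ErrorRecoveryLevel.CONNECTION:
        return "reassign tasks to another TCP connection"
    # fall through: the session itself must be restarted
    return "close all connections, finish outstanding tasks, re-login"

print(recovery_action(ErrorRecoveryLevel.DIGEST, "pdu"))
```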
Security Since iSCSI can be used in networks where data may be accessed illegally, the specification allows for different security methods. Encryption mechanisms such as IPsec, which operate at lower layers, require no additional negotiation because they are transparent to higher layers, including iSCSI. Various solutions can be used for authentication - for example, Kerberos or private key exchange - and an iSNS server can be used as a repository of keys.
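Beyond the methods named above, the authentication scheme most commonly deployed for iSCSI in practice is CHAP, a challenge-response exchange defined in RFC 1994 and adopted by RFC 3720. A minimal sketch of the response computation, with an assumed placeholder secret:

```python
import hashlib, os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP response: MD5 over (identifier byte || shared secret || challenge)."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# The target issues a random challenge; the initiator proves knowledge of
# the shared secret without ever sending the secret over the wire.
secret = b"example-shared-secret"   # assumption: pre-configured on both ends
ident, challenge = 1, os.urandom(16)
resp = chap_response(ident, secret, challenge)
assert resp == chap_response(ident, secret, challenge)  # the target verifies the same way
```

Because the challenge changes every time, a captured response cannot simply be replayed, which is what makes this scheme usable on an untrusted network.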
Use 'iscsitadm' to set up iSCSI target devices; you will need a ZFS or UFS backing store of suitable size for the iSCSI target daemon. On the client side, use 'iscsiadm' to discover and connect to the iSCSI targets.
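A sketch of that workflow on Solaris 10, assuming a ZFS pool named tank and a target host at 192.168.1.10 (both illustrative); consult the iscsitadm(1M) and iscsiadm(1M) man pages for the exact options on your release.

```shell
# On the target host: create a ZFS volume and export it as an iSCSI target
zfs create -V 10g tank/iscsivol
iscsitadm create target -b /dev/zvol/rdsk/tank/iscsivol mytarget
iscsitadm list target -v

# On the initiator host: point discovery at the target and enable SendTargets
iscsiadm add discovery-address 192.168.1.10:3260
iscsiadm modify discovery --sendtargets enable
iscsiadm list target

# Make the new LUNs visible as local disks
devfsadm -i iscsi
```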
Fig. 1. IP network with iSCSI devices used.
Slide 47 – Sun Fire x4500 Prtdiag Output. Solaris 10 Update 2 x64 included a much-improved ACPI layer, similar in design to the PICL platform libraries found in the SPARC implementation of the OS. This allowed the x64 version of prtdiag to be included. This slide shows an example of x4500 prtdiag output; as you can see, the six Marvell controllers are listed toward the end of the output.