VICS: a Storage Virtualization Management System for SAN

Li Bigang, Shu Ji-wu, Zheng Wei-min
Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
lbg01@mails.tsinghua.edu.cn

Proceedings of the 3rd International Workshop on Storage Network Architecture and Parallel I/Os (SNAPI'05)

Abstract

Storage Area Networks (SANs) have the virtues of high scalability, high availability and high performance. On the other hand, their storage virtualization systems are not compatible with multiple operating systems, and it is hard for a virtualization management system to manage multiple types of storage. This paper proposes a new storage virtualization management model for SANs: the Virtual Intelligent Control System (VICS). It includes three layers: the logical storage management layer, the virtualization layer and the storage resource management layer. With the logical storage management layer and the storage resource management layer, the VICS can serve multiple operating systems and is compatible with various storage systems. The VICS controls the storage resources through the FC network and presents LUNs to the various operating systems, giving users a uniform management interface. The VICS provides functions such as storage virtualization and LUN zoning, and it supports the management of both disks and tapes. Furthermore, a cache mechanism is also designed into the VICS, which improves the SAN's performance. We implemented a prototype of the VICS; subsequent testing showed that the VICS makes SAN systems more compatible and easier to manage.

1. Introduction

Storage Area Networks (SANs) [1, 2] use a network-oriented storage structure, which enables the separation of data processing and data storage. SANs offer high availability and scalability, high I/O performance, and data sharing, and they support backup, remote mirroring, and virtualization functions, which has made them increasingly popular. A storage virtualization management system can manage various storage systems while still providing one uniform interface for users. At present, however, the management of SAN systems is not compatible with multiple operating systems. Storage vendors such as XIOtech [3], IBM [4] and EMC [5] all have their own virtualization management systems, which adds extra complexity and difficulty. Furthermore, the incompatibility between these systems makes the management of SANs more complex, and unified storage management is difficult to achieve.

The LVM [6] and EVMS [7] storage virtualization systems run on host servers, but each can only be used with one particular operating system. XIOtech and other storage systems implement storage virtualization management at the device level and are suitable for multiple operating systems, but they can only manage their own storage hardware. StorAge [8] has developed an out-of-band virtualization system which can manage various storage systems.
However, it is compatible with only a few specific operating systems, through an agent program running on the servers. This introduces extra work for the hosts and remains complicated for users to manage. What is more, the extra agent running on the servers brings additional risk and complication for users.

This paper introduces the Virtual Intelligent Control System (VICS), a storage virtualization management system based on storage area networks. The VICS splits the traditional SAN into a host SAN and a device SAN. The host SAN is made up of the servers and their network; the various storage resources and the network connecting them make up the device SAN. The VICS controls both the device SAN and the host SAN and offers one uniform management interface for the user. The VICS has broad compatibility with multiple operating systems and with various storage resources. Figure 1 illustrates the architecture of a SAN system with a VICS.

Figure 1. Architecture of a SAN with VICS: the servers form the host SAN, the VIC nodes sit between the two networks, and the storage resources form the device SAN.

In order to validate the VICS, we implemented a prototype of it. This prototype provides functions such as online volume resizing/creation, snapshot, LUN mapping and other virtualization functions. The prototype is compatible with multiple operating systems, and it is suitable for both IP networks and FC networks, giving the VICS excellent compatibility and scalability. We tested the performance of the prototype, and the results showed that the VICS introduces only a little latency into the SAN, while the cache mechanism in the VICS can greatly improve the SAN's performance.

2. Related Work

The most popular networks for SANs today are FC and IP. With the publication of the iSCSI standard, the development of the IP SAN has progressed greatly; both Intel [9] and the University of New Hampshire now have their own iSCSI implementations [10].

There have been many studies of volume management software. Sistina used the Global File System (GFS) [11] as a parallel file system in a SAN environment, and issued the Logical Volume Manager (LVM) as a part of the Linux kernel; this provides storage virtualization for single systems. IBM addresses the same problem with EVMS [7]. The SANtopia volume manager is a host-based storage virtualization system [12]. Another example of work on logical volumes is GFS's Pool Driver [13], a logical volume manager for SANs under Linux. It builds virtual volumes for file systems and is cluster-aware, but it can only be used with the Linux OS.

We also studied the SCSI initiator driver and target driver for the ISP HBA in TH-MSNS [14]; the VICS was developed on this basis.
3. Architecture of the VICS

Figure 2 shows the VICS's architecture. The system works at the network layer: it manages all the storage resources and implements storage virtualization and LUN zoning. The VICS captures all SCSI I/O commands and forwards them to the proper storage devices for execution.

Figure 2. The architecture of the VICS: target drivers (FC, iSCSI and others) feed the logical storage management layer (VD manager and LUN map table); the storage virtualization layer performs LBA mapping and free space management; the storage resource management layer (network scout, PD manager, SCSI command execution, DISKIO/TAPEIO/CACHEIO) drives the devices (JBOD, RAID, FC tape).

3.1 Overview of the VICS

The VICS splits the SAN into a device SAN and a host SAN. The host SAN enables the VICS to be compatible with multiple operating systems, while the VICS controls the different storage systems through the device SAN. As Figure 2 shows, the software architecture of the VICS includes three layers: the logical storage management layer, the virtualization layer and the storage resource management layer. The logical storage management layer implements LUN zoning and presents a uniform view to the multiple operating systems. The virtualization layer provides the virtualization functions. The storage resource management layer controls the device SAN, detects the various storage systems and implements the cache mechanism.

3.2 The logical storage management layer

The logical storage management driver receives SCSI commands and messages from the target driver (iSCSI or FC HBA target driver) and then dispatches them to the proper logical disks. The interface between the target driver and the logical storage management layer is designed to suit both the IP-SAN and the FC-SAN. The main functions of the logical storage management layer are explained below.

(1) The logical storage management layer allows the VICS to be compatible with different target drivers. With this function the VICS can be deployed in both the FC-SAN and the IP-SAN. The interface is designed as in Ref. [14]. Figure 3 shows the SCSI command flow. When a new command is received, the target driver calls rx_cmnd() to notify the VICS. After handling the SCSI command, the VICS calls the xmit_response() function to notify the target driver of the completion of the command. If the command is a write command, the VICS first informs the target driver that the buffer for the data is ready; this is implemented through the rdy_to_xfer() function.

Figure 3. The SCSI command flow in the VICS: the HBA target driver notifies the VIC system through rx_cmnd(), data and completions move through scsi_rx_data(), rdy_to_xfer(), scsi_target_done() and xmit_response(), and commands wait in a command queue.

With this interface the VICS works with both the iSCSI target driver and the FC HBA target driver.
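To make the handshake concrete, the following is a minimal sketch of this command flow in C. The function names rx_cmnd(), rdy_to_xfer() and xmit_response() are taken from Figure 3, but their signatures, the scsi_cmd structure and the vics_* helpers are illustrative assumptions; the actual interface is the one defined in Ref. [14].

```c
/* Sketch of the Figure 3 command flow. Signatures and helpers are
 * assumptions for illustration; see Ref. [14] for the real interface. */

struct scsi_cmd {
    unsigned char cdb[16];     /* SCSI command descriptor block */
    unsigned int  lun;         /* addressed logical unit        */
    void         *buf;         /* data buffer                   */
    unsigned int  len;         /* transfer length in bytes      */
};

/* Provided by the target driver (iSCSI or FC HBA). */
extern int rdy_to_xfer(struct scsi_cmd *c);            /* write buffer ready  */
extern int xmit_response(struct scsi_cmd *c, int st);  /* completion + status */

/* Hypothetical VICS-side helpers, named for illustration only. */
extern void *vics_alloc_buffer(unsigned int len);
extern int   vics_enqueue(struct scsi_cmd *c);

#define SCSI_WRITE_10 0x2A

/* Target driver -> VICS: called for every newly received SCSI command. */
int rx_cmnd(struct scsi_cmd *c)
{
    if (c->cdb[0] == SCSI_WRITE_10) {
        /* Writes: allocate the data buffer first, then tell the target
         * driver the VICS is ready for the data phase. */
        c->buf = vics_alloc_buffer(c->len);
        return rdy_to_xfer(c);
    }
    /* Reads and non-data commands go straight to the lower layers;
     * the VICS calls xmit_response() once handling is finished. */
    return vics_enqueue(c);
}
```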
(2) The logical storage management layer implements the LUN zoning function to manage the logical storage resources. The access mode of a logical storage resource has several grades:

RW: the corresponding host can read from and write to the logical volume freely.
RO: the corresponding host can only read data from the logical volume.
DENY: the corresponding host cannot access the logical volume.

To implement this function, the logical storage management layer assigns one unique identifier to every host (for example, the MAC address, IP address, or the WWN on an FC network) and maintains a resource table dynamically. One possible example is shown in Table 1. The storage manager can read and modify the grades through the management software supplied with the VICS.

Table 1. LUN zoning table

          VD1    VD2    VDn
ServerA   RO     DENY   RW
ServerB   RW     RO     RO
ServerX   DENY   RW     RO
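A small sketch of such a zoning table and its enforcement is given below. The grades (RW/RO/DENY) and the idea of keying hosts by a unique identifier come from the paper; the data structures, sizes and function names are assumptions for illustration.

```c
/* Sketch of the dynamic LUN zoning table of Table 1. */
#include <string.h>

enum access_grade { DENY = 0, RO, RW };

#define MAX_HOSTS 64
#define MAX_VDS   64

struct zone_entry {
    char              host_id[64];      /* e.g. FC WWN of the server  */
    enum access_grade grade[MAX_VDS];   /* one grade per virtual disk */
};

static struct zone_entry zone_table[MAX_HOSTS];
static int               nr_hosts;

/* Look up the grade a host has on a given VD; unknown hosts and
 * out-of-range VDs are denied. */
enum access_grade zone_lookup(const char *host_id, int vd)
{
    if (vd < 0 || vd >= MAX_VDS)
        return DENY;
    for (int i = 0; i < nr_hosts; i++)
        if (strcmp(zone_table[i].host_id, host_id) == 0)
            return zone_table[i].grade[vd];
    return DENY;
}

/* Check the grade before forwarding a SCSI command to a VD. */
int zone_permits(const char *host_id, int vd, int is_write)
{
    enum access_grade g = zone_lookup(host_id, vd);
    return is_write ? (g == RW) : (g == RW || g == RO);
}
```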
3.3 The virtualization layer

The virtualization layer implements the storage virtualization management function of the VICS. It connects several physical disks (PDs) to form a storage container (SC). All the storage space is split into segments of the same size, 32MB by default. An unassigned segment is called a free segment (FS); segments used by virtual disks (VDs) are called physical segments (PSs). All the storage resources are then organized as in Figure 4. All the metadata for the virtualization layer is stored in the first part of each physical disk, including the UUID of the physical disk. Generally speaking, the PD and VD information of an SC is less than 1MB for a 73GB disk, so the space consumed by the virtualization layer's metadata is very small.

Figure 4. The architecture of the space organization: the SC records its name, segment size and the numbers of PDs and VDs; each VD records its ID, size, mapping style and an ordered list of PSs; each PD records its segment counts and free segments; each PS/FS records its start position and size.

As shown in Figure 4, every VD is made up of PSs. Normally these PSs come from different PDs to improve performance. With this structure, the virtualization layer can offer functions such as online resizing and creation. For example, if VD_A needs to be extended from 50GB to 100GB, the virtualization layer simply links free segments onto the end of VD_A until it reaches 100GB. A VD can be extended as long as there is free space (FSs) in the SC.
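The following sketch expresses this space organization as C structures. The entities (SC, PD, VD, PS/FS) and the 32MB default segment size come from the paper; the field names are assumptions modelled on the metadata items listed in Figure 4.

```c
/* Sketch of the Figure 4 on-disk space organization. */
#include <stdint.h>

#define SEGMENT_SIZE (32u * 1024 * 1024)   /* default segment size */

struct pd {                     /* physical disk */
    uint8_t  uuid[16];          /* stored in the metadata area of the disk */
    uint64_t size;              /* capacity in bytes        */
    uint32_t nr_segments;       /* size / SEGMENT_SIZE      */
    uint32_t segments_used;     /* the rest are free (FSs)  */
};

struct ps {                     /* physical segment assigned to a VD */
    struct pd *disk;            /* which PD it lives on       */
    uint64_t   start;           /* byte offset within that PD */
};

struct vd {                     /* virtual disk */
    uint32_t   id;
    uint32_t   map_style;       /* linear or stripe mapping   */
    uint64_t   size;
    uint32_t   nr_ps;
    struct ps *ps;              /* ordered list of segments   */
};

/* Extending a VD, as in the 50GB -> 100GB example, amounts to linking
 * free segments from the SC onto the end of vd->ps and growing vd->size. */
```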
Address mapping is also very important in this layer. If a SCSI command carries logical address LA for logical volume LV, the address-mapping manager maps them to the proper PS and offset. The mapping may use a linear mode or a stripe mode, each simple in itself, though the VICS's combined mapping is more involved. In stripe mode, data of a fixed size is sent to the different PDs in turn, which improves the performance of the VD. The virtualization layer carries out the address mapping in three steps:

(1) Find the proper VD according to the LUN number in the SCSI command;
(2) Find the corresponding PS by comparing the LBA in the SCSI command with the PS size, and from it obtain the proper PD information;
(3) Convert the logical address to the actual address and read/write the data.

With these three steps the layer converts a (VD, LBA) pair to a (PD, offset) pair and reads or writes the data, as Figure 5 shows.

Figure 5. Address mapping from logical volume addresses to physical disk offsets.
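As an illustration, here is a minimal sketch of the three-step translation for the simple linear mode, reusing the structures sketched above; stripe mode would instead interleave fixed-size chunks across the PDs. The function and sector size are assumptions, not the VICS's actual code.

```c
/* Sketch of the (VD, LBA) -> (PD, offset) translation, linear mode. */
#include <stdint.h>

#define SECTOR_SIZE 512u

struct mapping { struct pd *disk; uint64_t offset; };

/* Hypothetical lookup from the LUN in the SCSI command to a VD. */
extern struct vd *vd_from_lun(unsigned int lun);

int map_address(unsigned int lun, uint64_t lba, struct mapping *out)
{
    /* Step 1: find the VD addressed by the command's LUN. */
    struct vd *v = vd_from_lun(lun);
    if (!v)
        return -1;

    /* Step 2: compare the byte address against the PS size to find
     * the physical segment, and from it the proper PD. */
    uint64_t byte_addr = lba * SECTOR_SIZE;
    uint32_t ps_index  = byte_addr / SEGMENT_SIZE;
    if (ps_index >= v->nr_ps)
        return -1;
    struct ps *seg = &v->ps[ps_index];

    /* Step 3: convert the logical address to the actual address. */
    out->disk   = seg->disk;
    out->offset = seg->start + byte_addr % SEGMENT_SIZE;
    return 0;
}
```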
Another important function of the virtualization layer is the snapshot. The snapshot function makes a static copy of a whole VD very quickly and without stopping the data service, so many online backup systems use this capability to back up data. The snapshot is implemented with copy-on-write (COW) technology. Kim and others improved the traditional snapshot technique, and we adopted this improved method to implement the function; details can be found in Ref. [15].

3.4 The storage resource management layer

The storage resource management layer deals with the various disks and presents a fixed interface to the virtualization layer. It receives SCSI commands from the virtualization layer and sends them to the various disks or tapes. The main functions of this layer include:

(1) Providing the virtualization layer with a uniform interface. This layer hides the differences between the disks from the virtualization layer. The RAID algorithms are also implemented in this layer.

(2) Implementing DISKIO and TAPEIO. With DISKIO the VICS controls the disks, and with TAPEIO it controls the tapes: DISKIO handles the SCSI Block Commands, while TAPEIO handles the SCSI Stream Commands. The virtualization layer uses the disks to construct virtual disks, while the backup system uses the tapes to back up data. The storage resource management layer distinguishes the SCSI commands and sends each to the proper I/O driver.

(3) Implementing the communication mechanism, through which multiple VIC nodes communicate with each other. For example, two VIC nodes can form a failover pair through this mechanism, keeping their metadata and configuration data synchronized.

(4) Implementing the cache mechanism for the VICS. The VICS can use RAM as a data buffer to improve the performance of the storage system. Many cache algorithms, such as read-ahead optimizations, should be implemented here, as they dramatically improve a block device's I/O performance; a sketch follows this list.

The VICS is constructed from the three layers described above. These layers have clear functions and clear interfaces, so the VICS can provide a uniform management interface for users while remaining compatible with multiple operating systems and various storage systems. The storage virtualization management system gives users a more intelligent and accessible storage management technique.
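As one illustration of the cache mechanism in function (4), the sketch below shows a hash-indexed RAM block cache with a simple read-ahead policy. The paper only names read-ahead as one of the algorithms; the block size, slot count (sized to the 512MB buffer used in the tests), hashing and synchronous prefetch are all assumptions for illustration.

```c
/* Sketch of a RAM block cache with read-ahead; parameters are assumed. */
#include <stdint.h>
#include <string.h>

#define CACHE_BLOCK (64u * 1024)      /* assumed cache granularity      */
#define CACHE_SLOTS 8192              /* 8192 * 64KB = 512MB of buffer  */
#define READ_AHEAD  4                 /* blocks prefetched on each miss */

struct cache_slot {
    int      valid;
    uint32_t vd;                      /* virtual disk the block is from */
    uint64_t block;                   /* block number within the VD     */
    uint8_t  data[CACHE_BLOCK];
};

static struct cache_slot cache[CACHE_SLOTS];

/* Hypothetical backing-store reader (goes through the mapping layer). */
extern void vd_read_block(uint32_t vd, uint64_t block, uint8_t *buf);

static struct cache_slot *slot_for(uint32_t vd, uint64_t block)
{
    return &cache[(vd * 2654435761u + block) % CACHE_SLOTS];
}

/* Serve one block; on a miss, fetch it plus READ_AHEAD successors so a
 * sequential stream is served from RAM instead of from the disks. */
void cache_read(uint32_t vd, uint64_t block, uint8_t *out)
{
    for (uint64_t b = block; b <= block + READ_AHEAD; b++) {
        struct cache_slot *s = slot_for(vd, b);
        if (!(s->valid && s->vd == vd && s->block == b)) {
            vd_read_block(vd, b, s->data);   /* miss: fill the slot */
            s->valid = 1;
            s->vd    = vd;
            s->block = b;
        }
        if (b == block)
            memcpy(out, s->data, CACHE_BLOCK);
    }
}
```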
4. Performance Evaluation

In order to demonstrate the benefits and compatibility of the VICS and to test its performance, we implemented a prototype of the VICS based on TH-MSNS [14]. This prototype supports disk arrays and tape devices, and the cache mechanism was also implemented.

In the test configuration, the servers' operating systems included RedHat 9, Windows 2000, FreeBSD, Windows XP, NetWare and Solaris SPARC, and the storage systems included an FC-disk JBOD, an XIOtech FC-disk system, a SCSI disk array, an IDE disk array and a tape device. These devices were connected to each other through an FC network with a bandwidth of 2Gbps. The different operating systems and various storage systems formed a complex SAN. Figure 6 shows the hardware architecture of the test environment.

Figure 6. Hardware architecture of the test environment: initiator servers running the different operating systems connect through the FC-SAN to two VIC nodes, behind which sit the JBOD, the tape device and the XIOtech system.

4.1 Testing configuration

Each initiator server machine had a Xeon 2.4GHz CPU, 1GB of RAM and a QLogic 2300 Fibre Channel HBA. The operating system differed from server to server in order to test the compatibility of the VICS. The VIC servers each had two Xeon 2.4GHz CPUs, 1GB of RAM and two QLogic 2300 Fibre Channel HBAs; one HBA worked in initiator mode and the other in target mode. The VICS modules ran on these servers. The FC switch was a Brocade Silkworm 3200, which provides 2Gbps of Fibre Channel bandwidth.

The storage for the SAN was somewhat complicated. One FC JBOD containing five Seagate FC disks was connected to the device SAN. An IDE array and a SCSI array were connected to the device SAN through an I/O control machine, as was a tape device.

4.2 Testing results

We evaluated the performance of the VICS with Iometer, a standard benchmark for measuring I/O performance. This benchmark was originally developed by Intel; it can measure read/write performance in sequential or random patterns and can test I/O latency.

Figure 7 shows the SAN's read throughput, and Figure 8 shows its write throughput; the cache curve represents the VICS's performance with the cache mechanism enabled. The results show that the VICS and the plain SAN had the same throughput. This demonstrates that the VICS has little influence on performance: the VICS only modifies and forwards the SCSI commands in RAM, and these operations are much faster than those on the disks. Given the functions the VICS adds to the SAN, the slight effect on latency (< 1%) is acceptable.

Figure 7. The read throughput.
Figure 8. The write throughput.

Figure 9 shows the average response time under the three conditions. The results show that the VICS has little influence on the latency of I/O operations, since the latency of the VICS and the FC network is much smaller than that of the disks. What is more, the cache mechanism greatly improves the VICS's performance. The cache system used 512MB of RAM as the data buffer in the test environment; with the cache enabled, the throughput of the VICS increased from 110MB/s to 130MB/s, and the average I/O response time was reduced greatly. This proves that the cache mechanism of the VICS can improve the SAN's performance.

Figure 9. The average response time.

In the test environment, all the servers with their different operating systems could access their LUNs correctly, and the VICS controlled the various storage systems well. This demonstrates that the VICS is compatible with multiple operating systems and with various storage systems.

5. Conclusion

This paper proposed a storage virtualization management system based on the storage area network. The new model includes three layers: the logical storage management layer, the virtualization layer and the storage resource management layer. Together these layers, each with its own interface and functions, make up the VICS, a storage management system with high scalability and compatibility. Compared with other virtual storage systems for SANs, the VICS has some obvious advantages:

(1) By splitting the SAN into a device SAN and a host SAN and introducing the network management layer, the VICS can control and manage the SAN more directly and easily.

(2) The VICS is compatible with multiple operating systems and needs no additional client, agent or driver running on the hosts. This conserves the hosts' resources and simplifies management.

(3) The VICS centralizes the storage resources and provides one uniform interface for the manager. It can manage the various storage resources and hide the differences between them from users.

(4) The VICS has only a slight influence on the performance of the SAN, and the cache mechanism in the VICS can greatly improve that performance.

Acknowledgement

The work described in this paper was supported by the National Key Basic Research and Development (973) Program of China (Grant No. 2004CB318205).

References

[1] B. Phillips, "Have storage area networks come of age?", IEEE Computer, vol. 31, no. 7, pp. 10-12, July 1998.
[2] R. Khattar, et al., Introduction to Storage Area Network, IBM Redbooks, 1999.
[3] XIOtech Corp., http://www.xiotech.com/, May 2004.
[4] IBM Corp., http://www.redbooks.ibm.com/pubs/pdfs/redbooks/sg245470.pdf, March 2003.
[5] EMC Corp., http://www.emc.com/products/storage_management/controlcenter/pdf/H1140_cntrlctr_srm_plan_ds_ldv.pdf, May 2004.
[6] D. C. Teigland and H. Mauelshagen, Volume Managers in Linux, Sistina Software Inc., http://www.sistina.com, 2001.
[7] S. Pratt, EVMS: A Common Framework for Volume Management, Linux Technology Center, IBM Corp., http://evms.sf.net.
[8] StoreAge Networking Technologies Ltd., High-Performance Storage Virtualization Architecture, http://www.store-age.com, 2001.
[9] Intel Corp., "Intel iSCSI project", http://sourceforge.net/projects/intel-iscsi, 2001.
[10] A. Palekar, "Design and Implementation of a Linux SCSI Target for Storage Area Networks", Proceedings of the 5th Annual Linux Showcase & Conference, 2001.
[11] Sistina Software, Inc., Global File System, http://www.sistina.com.
[12] C.-S. Kim, G.-B. Kim and B.-J. Shin, "Volume Management in SAN Environment", ICPADS 2001, pp. 500-508, 2001.
[13] D. Teigland, The Pool Driver: A Volume Driver for SANs, Master's thesis, Department of Electrical and Computer Engineering, University of Minnesota, 1999.
[14] J.-W. Shu, B. Li and W.-M. Zheng, "Design and Implementation of a SAN System Based on the Fiber Channel Protocol", IEEE Transactions on Computers, 54(4), pp. 439-448, 2005.
[15] C.-S. Kim, Y.-H. Bak, et al., "A Method for Enhancing the Snapshot Performance in SAN Volume Manager", 6th International Conference on Advanced Communication Technology, pp. 945-948, 2004.
