Hitachi Data Systems Presentation, Logicalis VT Buenos Aires
Presentation by HDS (Hitachi Data Systems) at the Logicalis Vertical Technologies Buenos Aires event on September 18.

  • This presentation is the second of three parts. See part 1 for overall business value proposition, product positioning and Application Optimized Storage solution messaging. Part 3 delves into software and solutions for the Hitachi TagmaStore Adaptable Modular Storage and Workgroup Modular Storage.
  • A quick overview of our company, Hitachi Data Systems Corporation. Hitachi Data Systems Corporation is a wholly owned subsidiary of Hitachi, Ltd., originally formed in 1989 as a joint venture between Hitachi, Ltd. and Electronic Data Systems (EDS). Hitachi, Ltd. owned 86% of the original entity, with EDS owning the remaining 14%. Fast-forward to April 1999: Hitachi Data Systems became a wholly owned subsidiary of Hitachi, Ltd. and has operated as such ever since. The key point I’d like to make here is that in North America, Europe, and other key geographies, Hitachi, in various forms -- whether as Hitachi Data Systems, its predecessor National Advanced Systems, or that company’s predecessor, Itel -- has been selling industry-leading storage and server solutions for the past three decades. That is an impressive track record that most storage companies we compete with today cannot claim. Our go-to-market strategy comprises direct and indirect sales in over 170 countries and regions. Hitachi Data Systems has a 3,200-strong global employee base, and it is expanding. Within Hitachi, Ltd., we are positioned as the strategic focal point for all storage infrastructure solutions, storage management software, and consultative services pertaining to storage. We have also been recognized for excellence in customer service, which is very important to us as a business: we have been praised by Bank of America and SBC, and we have won a “Supplier Excellence” award from Texas Instruments. There are many other awards Hitachi Data Systems has received which are not listed in this slide, including one from a large auction company that starts with an “e.”
  • A quick overview of our parent company, Hitachi, Ltd. Hitachi, Ltd. is a public company traded on the Tokyo Stock Exchange under ticker symbol “6501” and in the U.S. on the New York Stock Exchange under the ticker symbol “HIT.” It is one of the world’s largest integrated electronics companies. Many industry watchers essentially view Hitachi, Ltd. as a unique fusion of IBM and General Electric, in that it encompasses the broad spectrum of IT products and solutions and the semiconductor fabrication expertise you see at a company like IBM, while also spanning the heavy machinery, thermonuclear reactor engineering, and other heavy-machinery-oriented goods that General Electric produces. Hitachi, Ltd. manufactures over 20,000 products. We believe that gives us a competitive advantage relative to storage-only vendors, in that we can leverage the IP and research talent behind many thousands of products and bring much of it to bear on a central core business area: storage. Again, the main point to emphasize here is that cross-pollination across multiple product disciplines is a key differentiator that has contributed to Hitachi’s vast product portfolio. Currently, there are about 932 subsidiaries within Hitachi, with over 355,000 employees. Uniquely, Hitachi, Ltd. is home to over 2,000 Ph.D.s -- that is to say, there are more Ph.D.s within Hitachi than there are employees at some of our competitors’ companies. So Hitachi is very proud of having one of the largest groupings of Ph.D.s in the information technology and science space.
  • Hitachi’s fiscal year runs from April to March. Total FY07 revenue: for the fiscal year ending in March 2008, total sales were a little over $112.2 billion. Any investment made in information technology -- whether it’s networking, telecommunications, enterprise servers, supercomputers, storage systems, or other storage solutions -- Hitachi Data Systems utilizes cross-pollination to reap the benefits of that investment and leverages it for the development of other products. Taking a look now at the composition of Hitachi, Ltd.’s business and the vertical markets it competes in: Hitachi, Ltd. has 7 distinct business segments, which comprise the over 20,000-strong product portfolio. Starting at the bottom left, comprising about 22% of total sales for last fiscal year, is the Information Systems and Telecommunications Group. This is the most strategic business segment for Hitachi, and many times the most profitable as well. It comprises storage systems, storage consulting services, supercomputers, telecommunications equipment, gigabit Ethernet routers, SONET switches, and enterprise blade servers, which are now being sold in North America, Korea, Japan, and other geographies. Basically, all information systems, telecommunications, IT, and networking are unified in one group spanning servers, networking, and storage. Powerful unification amongst these three facilitates great cross-pollination efforts. Power & Industrial Systems comprised 28% of total revenues last fiscal year -- a very profitable business segment for Hitachi, Ltd. It comprises everything from Shinkansen bullet trains (the trains in Tokyo and other regions of the world that can go in excess of 150 to 160 miles per hour), to thermonuclear fusion reactors, heavy earth-moving equipment, various turbines being made in conjunction with General Electric, and so forth.
If your customer is interested in earth-moving equipment, Hitachi produces bulldozers, cranes, and other earth-moving equipment. (Note: Caterpillar competes with Hitachi.) There is also the Financial Services business segment, comprised of various capital and leasing corporations within Hitachi, Ltd., which constitutes about 3% of overall total sales. The Electronic Devices segment covers primarily semiconductor manufacturing equipment, digital media, and consumer products, and contributed 10% of overall revenues for FY07. If you look around here within our Executive Briefing Center, all of the projectors, the plasma screens, the LCD screens -- basically everything that comprises the home theater experience -- is produced by Hitachi, Ltd. Anything you can imagine, from DVD players to plasma screens to LCD screens, stereo equipment, high-definition video equipment: all digital media products are produced by Hitachi, Ltd. Another important point: the Electronic Devices division of Hitachi, Ltd. has its own semiconductor fabrication operation, which provides a distinct advantage over competitors. While many competitors rely upon third parties for semiconductor chip manufacturing, we have our own fabrication plants, which gives us a powerful story from a vertical integration perspective. Logistics services and other segments round out the portfolio. High Functional Materials & Components is a rather interesting group with tremendous industry expertise not many people are aware of. For one, Hitachi, Ltd. is a key supplier to automotive companies such as Honda, Toyota, Mazda, and General Motors. Case in point: Toyota recently turned to Hitachi, Ltd. for hybrid motors for its Lexus RX 400h hybrid. The turbochargers in the Mazda Miata and the hoses and rubber materials in many Nissan cars leverage manufacturing innovations from Hitachi, Ltd. As another example, Hitachi, Ltd. owns a subsidiary called Xanavi (spelled x-a-n-a-v-i), which is a leading provider of navigation systems for automobiles. In fact, if you go to your local Infiniti or Nissan dealer, all the navigation systems in those vehicles are from Xanavi, owned by Hitachi.
  • Storage Services Evolution: Infrastructure Road Map -- what’s happening with IT systems? Customers are moving from SAN islands to a consolidated storage infrastructure. The next step is network-accessible services, where storage is treated as a utility and applications access it according to their performance, availability, and other needs. Over time, as servers become more commoditized and the DATA becomes the critical factor in IT, the ability to reconfigure your server storage farm on the fly and reallocate storage resources will become THE critical enabling technology. Even today, many customers who are thinking of employing “grid”-based computing models are looking to Hitachi Data Systems to supply platform infrastructures that enable a grid model. As an example, with a grid, the ability to boot from multiple “LUN 0’s” becomes the ante required to play.
  • These are the IT challenges that customers almost always mention. Today’s presentation will demonstrate how HDS is addressing each one, with our suite of SRM software products and services.
  • Developer: No Changes are required to this slide. Presenter: This slide is the first of three that introduce your audience to S.O.S.S. Services Oriented Storage applies service-oriented architecture (SOA) concepts to storage to deliver a storage platform that can be readily reconfigured and optimized to changing business requirements; our solutions deliver a process-oriented service approach to storage rather than the traditional piecemeal, task-oriented approach, which leads to needless redundancies, over-subscription of storage, management complexity, and possible compliance exposure. Let’s talk briefly about some definitions: Simply stated, Service Oriented Architecture (SOA) is a business-centric IT architectural approach that supports integrating your business as linked, repeatable business tasks, or services. So, a service-oriented architecture is essentially a collection of services that can be shared and can communicate with each other. The communication can involve either simple data passing or it could involve two or more services coordinating some activity. Practically speaking, instead of running a bunch of discrete applications that are expensive, complex, and difficult to manage, the goal of SOA is a flexible IT infrastructure enabled by a common set of services that can be leveraged across all applications. The result for IT is greater flexibility and efficiency with reduced cost and complexity. Why is this Important? As we all know, connecting IT with business has been the mantra of IT organizations for many years. However, the reality often finds the data center mired in redundancies based upon the proliferation of monolithic storage architectures and infrastructure resulting in limited IT flexibility to adapt to business requirements while also incurring increased cost, complexity, and risk. Progressive IT organizations are adopting a services-oriented approach to managing core IT functions. 
Services are increasingly defined in users’ terminology, and the IT infrastructure needed to support those services is mapped and managed to service level agreements (SLAs). The ability to do this in a cost-effective manner is the trick. In the past, storage systems lagged behind servers and networks, whose management tools have adapted to these needs. Hitachi Data Systems has changed all of that. How is Hitachi Data Systems Different? Hitachi Data Systems has been developing its storage strategy with a services-oriented approach for many years. Some of the unique hallmarks of the Hitachi Data Systems strategy include: control unit virtualization with enhanced storage services that enable heterogeneous (HDS and other vendors) storage systems to interact and work in concert to optimize storage performance, data protection, and system availability; an integrated portfolio of storage management, tiered storage, business continuity, and data migration and mobility services that enables organizations to leverage a single set of tools for all of their storage and data management challenges; and, most recently, the addition of file (HNAS) and object (HCAP) services, enabling organizations to leverage a single platform for ALL their data storage requirements. How do customers benefit? Most importantly, organizations moving to a services approach to storage can now respond more quickly to business and technology change and: reduce cost and increase efficiency by reducing the complexity of their infrastructure and automating the process of storage management; boost utilization and reduce over-subscription of storage resources; cost-effectively address a growing array of structured and unstructured data types and applications; improve availability, reliability, and SLA consistency for midrange and small enterprise data applications; and provide metrics and enable policies to measure and automate the use of storage services. This is what we call Services Oriented Storage.
We’ll talk a lot more about our services-oriented approach to storage throughout this presentation.
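The service-oriented idea described above -- capabilities exposed as shared services that coordinate by simple data passing -- can be sketched in a few lines. This is a toy illustration only; every class, method, and field name here is a hypothetical assumption, not an actual Hitachi or SOA-standard API.

```python
# Toy sketch of the SOA concept: storage capabilities exposed as
# composable services behind one common interface (all names invented).
from abc import ABC, abstractmethod

class StorageService(ABC):
    @abstractmethod
    def handle(self, request: dict) -> dict:
        ...

class ProvisioningService(StorageService):
    def handle(self, request):
        return {"status": "provisioned", "size_gb": request["size_gb"]}

class ReplicationService(StorageService):
    def handle(self, request):
        return {"status": "replicated", "target": request["target"]}

def run(workflow, services):
    """Execute a sequence of (service_name, request) steps; the services
    coordinate through simple data passing, as in the SOA description."""
    return [services[name].handle(req) for name, req in workflow]

services = {
    "provision": ProvisioningService(),
    "replicate": ReplicationService(),
}
results = run(
    [("provision", {"size_gb": 100}), ("replicate", {"target": "dr-site"})],
    services,
)
```

The point of the sketch is that a new capability is added by registering one more service, not by deploying another discrete application.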
  • Key Objective: introduce the customer to Hitachi’s ‘One Platform All Data’ strategy and articulate the unique value of our strategy and how it is superior to the competition. The Storage Command Suite is the heart of SOSS. It enables the storage capabilities, mapping applications to the physical storage: provisioning storage systems; virtualization services that provide flexibility, such as transparent, non-disruptive data migration; reporting and forecasting to improve planning and minimize service disruptions; and performance monitoring to ensure SLAs are met and to optimize utilization. Key Points: 1.) Hitachi Data Systems’ focus is on storage. Our goal is to help customers closely align their storage infrastructure with their business requirements by delivering storage solutions that reduce complexity, cost, and risk, as well as TCO, while increasing IT efficiency. 2.) Our strategy is to deliver ‘One Platform For All Data’. To understand why this is important, let’s look at the customer challenge: - The amount of digital data being created and stored continues to grow unabated. - Regulatory and compliance requirements are driving organizations to store more data for longer periods of time. Furthermore, they need to be able to search for specific data if they’re ever asked to. - The explosive growth in semi-structured (e.g. email) and unstructured (file) data is forcing customers to look for new ways to deal with files, metadata, and content. - Every application has different storage requirements for performance, availability, retention, etc. - Vendors traditionally throw discrete solutions at each of these different problems. - It’s still cheaper for organizations to buy storage than to manage it, so customers typically throw more storage at the problem. - IT budgets remain relatively flat.
Because the traditional response to these challenges has been to throw more storage at the problem, customers end up managing multiple silos for their different application requirements. This is complex, costly, and inefficient. Hitachi addresses these challenges with a unique ‘One Platform For All Data’ strategy comprised of an integrated family of: 1.) Storage arrays for applications from mission-critical OLTP to long-term archiving. 2.) Intelligent storage controllers to virtualize and simplify heterogeneous storage environments. 3.) Storage management solutions to manage all your storage infrastructure. 4.) Tiered storage and data mobility solutions to simplify your infrastructure and reduce cost by aligning your data with the right tiers of storage. 5.) Business continuity solutions to support all backup, local, and remote replication requirements. 6.) Archiving solutions that provide enterprise-class archiving and search across all applications. 7.) NAS solutions for high-performance applications, SAN/NAS consolidation, and common file/print services. With Hitachi’s strategy, all of these capabilities work in unison, enabling customers to leverage ‘One Platform For All Data’. The benefits can be immense. Once the customer gets the general idea of our platform strategy, the next key is to understand the customer’s key pain points and how they measure success. Do they want to save money, reduce risk, meet a compliance requirement, ensure availability of mission-critical applications, etc.? If you understand that, you can translate it into what we can deliver. Bottom Line: Hitachi has a very unique strategy enabling customers to leverage a single platform for all their storage requirements. This is very different from what our competitors, in this particular case Sun, can offer. Customers should walk away from this part of the discussion with a clear understanding of our platform strategy and how it can benefit them.
For further education, here are some additional facts about data: 20% Structured Data (databases, transactional, data warehouses) 80% Unstructured (objects and files) and Semi-structured (e-mail) Data - <5% of unstructured data is managed through content management….and shrinking - Unstructured Data is growing at 10X the rate of Structured Data (Files, Email, Content) - 2,272 PB of Unstructured Data Today, 20,000PB in 2010…Most is dormant after 90 days. ESG. Value of the File….Content Is King - File Attributes help basic classification - Content Attributes (Metadata) enables extra classification, extra descriptions - Content inside the file enables text searching…informational value
  • Key Objective: illustrate to the customer how SOSS is built on an integrated platform of services and why that is important to them. Key points: 1.) SOSS provides a single platform for all block, file, and object services. This eliminates the traditional silo approach to storage we highlighted earlier in the presentation. 2.) Using SOSS, customers can align their storage with application requirements based upon metrics including QoS, SLA, I/O, RTO, etc. Some of these metrics are highlighted in the Sample Metrics portion of the graphic. 3.) Professional services are a key part of SOSS. Hitachi offers services for consulting, design, implementation, and health checks. Some of our business-centric consulting services are highlighted in the Storage Practices portion of the graphic. Presenter Commentary: As we have described throughout this presentation, the Services Oriented Storage Solutions platform is a business-centric concept enabling organizations to closely align their storage infrastructure with their business requirements. While many storage vendors may claim to have business-centric strategies, only Hitachi can deliver, because Services Oriented Storage Solutions are built upon a dynamic, flexible platform of integrated storage services enabling customers to optimize their storage infrastructure while reducing cost and complexity.
The platform is both powerful and simple. The architecture summary illustrates that the Services Oriented Storage Solutions are comprised of an integrated stack of services including: Block Services, which include volume virtualization, discovery, provisioning, partitioning, volume management, replication, migration, security, and metering; File Services, which include file virtualization, replication, migration, security, encryption, and archiving; and Object Services, which include content services such as index, search, classification, and security. These services, used individually or collectively, deliver Services Oriented Storage Solutions that meet the necessary application storage requirements based upon metrics (listed under sample metrics) including I/O, service-level agreements, Quality of Service, Recovery Time and Recovery Point Objectives (RTO and RPO), and retention. Most importantly, the unique value of Services Oriented Storage Solutions is the ability to leverage all of these services on a single, integrated storage platform, managed by a common management interface. Finally, SOSS also incorporates professional services for consulting, design, implementation, and health checks. The storage practices column highlights some of our key business-centric consulting services. Bottom Line: SOSS is unique in the industry. If customers want to break the logjam of complexity, cost, and inefficiency, they should go with SOSS.
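Mapping applications to storage by SLA metrics, as described above, amounts to comparing observed values against per-metric targets. The sketch below is illustrative only: the metric names, units, and thresholds are assumptions for the example, not values drawn from any Hitachi tool.

```python
# Illustrative SLA check: lower observed values are better for every
# metric here (seconds of replication lag for RPO, seconds to restore
# for RTO, milliseconds for I/O latency). All numbers are made up.
def meets_sla(observed: dict, sla: dict) -> dict:
    """Return a pass/fail flag for each metric named in the SLA."""
    return {metric: observed[metric] <= target for metric, target in sla.items()}

sla = {"rpo_s": 300, "rto_s": 3600, "io_latency_ms": 10}
observed = {"rpo_s": 120, "rto_s": 5400, "io_latency_ms": 8}
report = meets_sla(observed, sla)
# here rpo_s and io_latency_ms meet their targets; rto_s misses its 1-hour goal
```

A real SRM tool would gather the observed values continuously and trigger policy actions on misses; this only shows the comparison step.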
  • The HDS Platform -- enabled by the Services Oriented Architecture (NOTE: This slide is used in conjunction with the next slide -- notes are for both). As we move forward in our environment, the platform will be enabled by a Services Oriented Architecture. We look at things from four components: Data Producers, Data Consumers, Data Storage, and Data Protection. As we look through this, Data Producers are applications that users interact with, like SAP, Oracle, and Exchange, or applications that the system interacts with, like NetBackup and TSM. These are applications that produce data. Now, that data may be produced in a NAS format and be consumed in a data storage environment via a NAS interface, a virtual tape interface, a content archival interface, or a LUN interface. You may store that data in a modular environment, in an enterprise environment, or in a virtualized environment. It does not matter. You need to protect that data at the Data Protection level, with management through a consistent interface. [ note: Pop-ups will display ] We can get this through a virtual tape product or through content archival, our HCAP product. [ note: Pop-ups will display as we move into the storage environment; the modular product, the USP, and the ISP V will appear. ]
  • Please see notes from SLIDE above – this is a BUILD
  • 09/07/12 Our product line now consists of two families which have a common integrated management suite
  • Should you ever outgrow your Hitachi Simple Modular Storage 100, or you need very high performance and Fibre Channel connectivity, you can easily migrate to Hitachi’s Workgroup or Adaptable Modular Storage families, which scale to over 300TB with enough performance for any modular workload!
  • Hitachi Simple Modular Storage will be available beginning in October 2007 from Hitachi Data Systems and our many reseller partners throughout the world.
  • The AMS500 has two independent back-end 2-Gbit paths from each controller. There are two connections (two pairs of IN-OUT connections per controller). As there are two active back-end paths per controller, all disks can be seen by just one controller in the event of a failure of the alternate controller. Both FC and SATA enclosures may be installed on this system.
  • Presenter: Use this slide to briefly introduce the product. This slide should allow you to introduce the strategic nature of this product, introduce how it may be part of a larger family of products without creating confusion, and describe the basic functionality this product provides. More detail on what the product does, and how it does it, is provided on the subsequent slides. Our advanced midrange systems offer industry-leading features, including cache partitioning and modular volume migration, allowing storage administrators to quickly adapt the storage to meet changing application requirements. They also offer energy-efficient storage with a “power savings” feature that spins down and turns off storage when not required. Like their enterprise counterparts, Hitachi’s midrange Adaptable and Workgroup Modular Storage families support all major operating systems and file systems and come with Fibre Channel, iSCSI, and NAS attach options. The AMS1000 offers dual-protocol support.
  • Key value: 2 parity drives allow a customer to lose up to 2 HDDs in a RAID group without losing data. RAID groups configured for RAID-6 are many thousands of times less likely to lose data in the event of a failure, and RAID-6 performs nearly as well as RAID-5 (for similar usable capacity). RAID-6 also gives the customer options as to when to rebuild the RAID group. With RAID-5, when an HDD is damaged, the RAID group must be rebuilt immediately (since a second failure may result in lost data), and during a rebuild, applications using the volumes on the damaged RAID group can expect severely diminished performance. A customer using RAID-6 may elect to wait to rebuild until a more opportune time (night or weekend) when applications won’t require stringent performance. HDD roaming allows the spare to become a part of the RAID group; no copy-back is required, saving rebuild time.
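The reliability gap between single and dual parity can be illustrated with a toy binomial model of the rebuild window. The group size and per-drive failure probability below are made-up numbers for illustration, not Hitachi specifications, and the model ignores real-world effects like correlated failures and unrecoverable read errors.

```python
# Toy model: probability of data loss during a rebuild, comparing
# RAID-5 (tolerates 0 further failures) with RAID-6 (tolerates 1).
from math import comb

def loss_prob(drives_left: int, failures_tolerated: int, p: float) -> float:
    """Probability that more than `failures_tolerated` of the surviving
    drives fail during the rebuild window, assuming independent failures."""
    return sum(
        comb(drives_left, k) * p**k * (1 - p)**(drives_left - k)
        for k in range(failures_tolerated + 1, drives_left + 1)
    )

# 8-drive RAID group with one drive already failed; assume each of the
# 7 survivors has a 0.1% chance of failing before the rebuild finishes.
p = 0.001
raid5_loss = loss_prob(7, 0, p)  # any further failure loses data
raid6_loss = loss_prob(7, 1, p)  # one more failure is still survivable
improvement = raid5_loss / raid6_loss  # orders of magnitude safer
```

Even this crude model shows dual parity cutting the loss probability by a factor of hundreds; with realistic multi-day rebuild windows on large SATA drives the gap grows further.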
  • Cache Partitioning allows an AMS customer to apportion cache memory to suit the needs of any application. Cache segment size can be allotted in 4KB, 16KB, 32KB, 64KB, 128KB, and 512KB segments. These segments allow data to be moved into cache more efficiently from the RAID group (which is also flexible). This way, less cache is wasted: business-critical applications can be assured that cache is readily available, while less critical applications can be restricted to other segments. No other vendor offers this type of flexibility, and AMS outperforms its competitors in this market.
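The cache-partitioning idea -- a fixed cache budget carved into named partitions, each with a segment size matched to its workload -- can be sketched as below. The class, the configuration style, and the partition names are hypothetical illustrations, not the actual AMS management interface; only the list of segment sizes comes from the description above.

```python
# Hypothetical sketch of cache partitioning (names and API invented).
VALID_SEGMENT_SIZES_KB = {4, 16, 32, 64, 128, 512}  # sizes listed above

class CachePartitioner:
    def __init__(self, total_cache_mb: int):
        self.total_cache_mb = total_cache_mb
        self.partitions = {}  # name -> (size_mb, segment_kb)

    def allocated_mb(self) -> int:
        return sum(size for size, _ in self.partitions.values())

    def add_partition(self, name: str, size_mb: int, segment_kb: int) -> None:
        if segment_kb not in VALID_SEGMENT_SIZES_KB:
            raise ValueError(f"unsupported segment size: {segment_kb}KB")
        if self.allocated_mb() + size_mb > self.total_cache_mb:
            raise ValueError("not enough cache remaining")
        self.partitions[name] = (size_mb, segment_kb)

cache = CachePartitioner(total_cache_mb=4096)
cache.add_partition("oltp-db", 2048, segment_kb=4)   # small random I/O
cache.add_partition("backup", 1024, segment_kb=512)  # large sequential I/O
```

The design point is the isolation: the backup partition can never crowd the OLTP application out of its reserved cache, whatever its I/O pattern.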
  • Multi-protocol support provides AMS customers with the flexibility of using their storage for Fibre Channel SANs, iSCSI SANs, or both. Customers can use this capability to connect the same storage array to high-performance Fibre Channel-based servers as well as lower-cost iSCSI-based servers. Customers also have the flexibility to migrate their storage from iSCSI to Fibre Channel SANs. This flexibility provides excellent investment protection and is not available on many competitive modular storage systems.
  • With the introduction of the iSCSI interface for the WMS100, AMS200, and AMS500 systems, Hitachi Data Systems has further advanced the ability of its customers and Channel Partners to deploy storage that is optimized to their applications. The AMS1000 takes this one step beyond other vendors by offering customers the ability to choose multiple interfaces while still having only one scalable array to manage.
  • The WMS100, AMS200 and AMS500 systems can provide iSCSI and Fibre Channel multi-protocol support with an optional bridge connected to a fibre channel controller on the storage array. This option allows a single storage array to store data for heterogeneous SANs.
  • AMS systems also have a “power savings” feature which allows volumes to be powered off when there is no I/O. This feature is ideal for applications with scheduled but infrequent access, such as backup volumes, archives, or even unallocated storage. It saves on electric utility costs as well as data center cooling costs. Unlike dedicated “MAID” (massive array of idle disks) systems, which limit the number of drives that can be spinning at any one point in time, Hitachi allows the volumes to be spun up at the customer’s discretion. There is no limitation as to how many volumes must be off at one time. No other vendor offers this flexibility.
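The spin-down behavior described above -- volumes power off after an idle window, with no cap on how many may be active -- can be sketched as a simple policy. The threshold, class, and method names are assumptions for illustration, not the actual AMS firmware logic.

```python
# Illustrative spin-down policy sketch (threshold and API invented).
import time

IDLE_THRESHOLD_S = 30 * 60  # assumed: spin down after 30 idle minutes

class Volume:
    def __init__(self, name: str):
        self.name = name
        self.spinning = True
        self.last_io = time.monotonic()

    def record_io(self) -> None:
        self.last_io = time.monotonic()
        self.spinning = True  # spin up on demand; no cap on active volumes

    def maybe_spin_down(self, now=None) -> None:
        now = time.monotonic() if now is None else now
        if self.spinning and now - self.last_io >= IDLE_THRESHOLD_S:
            self.spinning = False  # power off, saving electricity and cooling

vol = Volume("backup-lun")
# Simulate the idle window elapsing without any I/O:
vol.maybe_spin_down(now=vol.last_io + IDLE_THRESHOLD_S + 1)
```

Contrast with the MAID approach the note mentions: there the policy would also refuse to spin a drive up whenever too many others are already active, a restriction this per-volume policy deliberately lacks.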
  • Hitachi TagmaStore™ Adaptable Modular Storage and Workgroup Modular Storage are the new product names for Hitachi’s midrange offerings. The TagmaStore brand name now refers to all Hitachi storage products. The Hitachi TagmaStore™ Universal Storage Platform family replaced the Hitachi Lightning 9900™ V Series enterprise storage systems in October 2004. Now the Adaptable Modular Storage line enhances the Hitachi Thunder 9500™ V Series modular storage systems, which will remain available well into 2006. The Adaptable Modular Storage and Workgroup Modular Storage models offer many unique features that the Thunder 9500 V Series does not. However, the Hitachi Thunder 9585V™ ultra high-end modular storage still offers very high performance and capacity and will continue to appeal to the market. The Hitachi TagmaStore™ Network Storage Controller model USP V/VM is a rack-mounted Universal Storage Platform device. We have priced and positioned the product for the high end of the midrange market, above the Thunder 9585V system but below the model USP100 in terms of scalability, performance, and price. The Workgroup Modular Storage line continues Hitachi Data Systems’ branding of the Workgroup Modular Storage descriptor for SMB products. This presentation touches briefly on all Hitachi Data Systems offerings and then covers the Adaptable Modular Storage and Workgroup Modular Storage products in greater detail. More information for the USP V/VM and Thunder 9585V system may be found in those product presentations. Note that the USP V/VM is under NDA. This presentation is ONLY for Hitachi Data Systems employees and authorized resellers who have signed the NDA form and for current and prospective customers under NDA. All information is subject to change.
  • The Hitachi approach, by virtue of our ability to separate the controller from the back-end media, is that customers can take their people, their processes, their resources, and their existing storage and continue to utilize them, because our intelligent virtual storage controllers can assimilate non-disruptively into existing IT environments. They complement customers' environments. Hitachi is not asking clients to rip and replace. Customers can reinvigorate existing assets, obtain the functionality of Hitachi's controllers, and enhance their existing investments. No other vendor can provide this level of business-enabling storage functionality to reinvigorate, improve the performance of, and extend the life of existing assets. We are talking about non-disruptive assimilation: Hitachi has managed to enter large accounts that were previously the domain of our arch competitors because we enabled those clients to put a USP or an NSC in front of their existing storage and complement it, give it new functionality, and provide a single replication engine and a single management interface across all of their storage assets. It is essentially a storage management solution that complements their assets. With Hitachi, you can attain this functionality in a non-disruptive fashion. That is our approach. The competition's approach, on the other hand, is to throw away your people, your processes, and your resources. It is all rip and replace: forget about your prior investments. Any new functionality the competition may or may not have added to their controllers is interlocked with the drive or disk array frames to the left and right of the central controller. If you want that functionality, assuming they have even put it in the new controller, you have to buy the entire thing.
Whereas with Hitachi, if you look at the new USP V, you can simply buy the controller and apply all of that functionality to your JBOD and to your existing storage capacity: your DMX systems, your CLARiiONs, your IBM DS systems, your DS6800s [?], your LSI arrays, white-box storage, whatever you may have.
  • Introducing a new dimension for storage virtualization: a 247-petabyte address space. If you look at the industry, at all these high-end, monolithic, aging storage systems, they just keep getting bigger and bigger, and high-end vendors keep stuffing more and more drives interlocked with their controllers without thinking about the management issues that brings. Take any high-end storage system: it has commodity media in the array cabinets directly to the left and right of the controller, and in the middle is an intelligent controller. That controller is where the majority of every storage vendor's R&D investment goes, and the majority of our own R&D investment goes there as well. That is where we have the new software, the new microprocessors, the new architectural innovations, the new services, the new intellectual property. All of the research and development dollars go into embedding more and more intelligence into that intelligent virtual storage controller. All the vendors spend big R&D budgets trying to innovate and put more and more functionality into their controllers. However, Hitachi is the only company that has completely separated that controller from the back-end disk media, giving customers the flexibility to invest in only the most valuable part of the storage system, the controller, the intelligence, thereby enabling them to get the latest functionality and apply it to their existing storage capacity without being forced to buy more and more capacity and larger and larger storage arrays.
The disaggregation of storage is key to the success of our industry going forward, and this is the direction Hitachi has been heading in. We have expanded it further with this product, which enables us to apply all of the key functionality that resides in our controller to externally attached storage devices, now across a 247-petabyte address space. Additionally, this enables Hitachi to compete and sell customers on the business value of this intelligent storage controller, which is now a storage management solution, not a box. A box is something that exists in and of itself, an isolated piece of equipment with an isolated management console. This goes beyond the confines of a box to provide common storage services to externally attached storage devices. It is not a box. It is not a system. It is a platform, a true storage services platform. Again, we are going beyond virtualization.
  • Our product line now consists of two families, which have a common integrated management suite.
  • With this announcement, Hitachi is changing the industry … again. We are delivering the industry’s first Universal Storage Platform — a custom-designed tight integration of hardware and software. The Universal Storage Platform is a new industry category featuring breakthrough technologies not available in any other storage systems today. The Universal Storage Platform will enable a new paradigm for managing and deploying the storage infrastructure. The Universal Storage Platform includes an embedded virtualization layer capable of managing up to 32 petabytes of internal and external storage, with up to 332TB of internal storage. This breakthrough solution can logically partition the physical storage cache, capacity, and ports and attached storage into secure, independently managed virtual private storage machines. It brings a new combination of technologies, such as disk-based journaling and “pull” copying, that support storage-agnostic data replication. All of this is impossible without a hardware platform powerful enough and reliable enough to drive the software functionality. The Universal Storage Platform delivers with the third-generation Hitachi crossbar switch architecture – pushing 2 million IOPS, 68GB/sec cached bandwidth, and 256 concurrent memory operations – all at least 5 times more than other storage systems available today. All combine to deliver an unparalleled value proposition by reducing TCO as much as 40% over three years. Let’s take a closer look at each of these valuable innovations.
  • With this announcement, Hitachi is changing the industry … again. We are delivering the industry’s first Universal Storage Platform VM — a custom-designed tight integration of hardware and software. The Universal Storage Platform VM is designed to bring high-end Enterprise class virtualization features and reliability to the Small Enterprise and growing mid-market customers. The Universal Storage Platform VM includes an embedded virtualization layer capable of managing up to 96 petabytes of internal and external storage, with up to 72TB of internal storage. This USP VM can logically partition the physical storage cache, capacity, and ports and attached storage into secure, independently managed virtual private storage machines.
  • This slide should help customers understand technically how the connection of externally attached storage is achieved. The external storage looks as if it were part of the Network Storage Controller platform, with no distinction. The user will be able to see where the volume is physically created and can manage it accordingly (assign the volume to an application, use it as secondary SI volume, etc.), across heterogeneous storage platforms from the same device-management screen.
  • Hitachi ShadowImage™ In-System Replication software can also be used to mirror data volumes. [CLICK] For example, you might use it to mirror a copy as a hot backup on an internal array, so that in the event of a failure you could swap over to that system. And, as is true with data on the Hitachi Lightning 9900 V Series systems, ShadowImage can maintain as many as nine additional copies of a volume of data. On the Thunder 9500V Series systems, for example, ShadowImage software can create only one mirror of a volume. Using the Universal Virtualization Layer, you can use the enterprise-class version of ShadowImage on other storage systems as well. [CLICK] Now you can mirror as many as nine copies of a volume on any storage system.
  • Using the virtualization capabilities, in addition to a mirror for hot backup on an internal system, you can at the same time mirror another copy off to an external storage system that might be used for offline backup, and move a third copy perhaps to yet another system that might be used for development or testing. All with one replication product.
  • This slide shows the partitioning capabilities of the USP. We’ve created 3 logical partitions (yellow, green, and gray), assigning a few volumes to each partition. In this case, each partition has some internal volume (orange) and a mix of volumes representing different storage tiers. Also, note that we’ve partitioned cache as well. In this case, we’ve split it in thirds, more or less. (It doesn’t need to be that way… we’ll change that shortly.) We’ve assigned ports for each partition too.
  • Once created, Private Virtual Storage Machines allow the storage administrator to reallocate storage resources as needed:
[CLICK] Allocate additional storage to Partition #1 (one internal USP volume).
[CLICK] Then allocate more storage to Partition #3 (two volumes, from different tiers).
[CLICK] Increase the cache for Partition #2 (at the expense of both Partitions #1 and #3, in this case).
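As a rough mental model only (the names, units, and operations below are invented for illustration and are not Hitachi's actual management interface), the reallocation steps above amount to moving volume and cache resources between partition records:

```python
# Hypothetical model of virtual private storage machines; all field names,
# sizes, and volume labels are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Partition:
    name: str
    cache_gb: int
    volumes: list = field(default_factory=list)

p1 = Partition("Partition-1", cache_gb=32, volumes=["int-usp-01"])
p2 = Partition("Partition-2", cache_gb=32, volumes=["ext-tier2-01"])
p3 = Partition("Partition-3", cache_gb=32, volumes=["ext-tier3-01"])

# Allocate additional storage to Partition 1 (one internal USP volume)
p1.volumes.append("int-usp-02")
# Allocate two more volumes, from different tiers, to Partition 3
p3.volumes += ["ext-tier2-02", "ext-tier3-02"]
# Grow Partition 2's cache at the expense of Partitions 1 and 3
p1.cache_gb -= 8
p3.cache_gb -= 8
p2.cache_gb += 16

print(p2.cache_gb, len(p1.volumes), len(p3.volumes))   # 48 2 3
```

The point of the model is that partitions are logical records over shared physical resources, so growing one partition's cache necessarily shrinks another's.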
  • Hitachi Volume Migration software now becomes an extremely powerful tool for lifecycle management and optimizing applications. After the end of the fiscal year, for example, you might take some of the accounting data from the prior year, which is not going to be accessed quite as frequently, and move it off to an external storage system—say a Hitachi Thunder 9500™ V Series system with SATA drives. Similarly, if you had an application that perhaps was a key project and your CEO called the CIO to complain about its performance, you could use Volume Migration software to quickly and easily move that data from an external storage system onto the highest-performing storage on internal volumes.
  • It's about reducing CAPEX and OPEX. It's about aligning applications to the right storage tier while improving operational efficiencies. Storage tiers can be designed around many characteristics, including availability, performance, cost, and protection. Tiering around:
Availability: RAID types, controller architecture, etc.
Performance: 15K RPM internal storage, typically assigned to the most demanding and important transactional applications; probably only one RAID type used here, and probably remote replication as well. 10K RPM 300GB internal storage for daily business/Web applications, possibly with multiple RAID types deployed (which could be considered 'sub-tiers') and, depending on how business-critical the application is, remote replication and/or ShadowImage software. 10K RPM FC external storage for daily or less-than-24-hour applications that are not as demanding. Slower external storage (down to SATA caliber) for saved snapshots, read-only historical data, data warehousing, etc.
Cost: Use of FC drives versus SATA.
Protection: Tiering around protection to include VTL and Active Archive solutions.
  • This is the Tiered Storage Maturity Model, which you should be well familiar with (details below). It maps to the products in the following way: Virtualization (UVM), Data Mobility (HTSM); Automation is a services integration solution at this stage. Hitachi has developed a comprehensive maturity model to help customers realize the vision of tiered storage through a stepped approach.
Level 0: Heterogeneous Storage Environment. Most customers today have a heterogeneous storage environment. It is characterised by multiple storage arrays from different vendors, multiple management interfaces, underutilized storage capacity, VTL, archive, and NAS. This disparate storage strategy results in underutilization of storage assets with very high storage management costs. The final symptom of this level is that both CAPEX and OPEX are out of control.
Level 1: Virtualization. Virtualizing heterogeneous storage assets behind a USP or NSC simplifies the storage infrastructure, enabling improvements in storage utilization. It provides a common platform for storage management, business continuity, and other storage services like NAS, content management, and virtual tape. Virtualization also enables customers to align storage tiers with business needs. Since all data does not have the same business value, treating it all equally is an expensive proposition. Virtualization lets customers create storage tiers with different provisioning and management processes and align the right data to the appropriate storage tier based on business value. This dramatically reduces capital expenditure and operational costs. HDS customer examples: Alberta Justice, Fidelity National, University of Utah.
Level 2: Data Mobility. Customers who have realized the benefits of virtualization (Level 1) can further improve IT efficiency by incorporating data mobility tools in their virtualized storage environment. Typically, data migrations are time consuming, require application downtime, and are prone to failure. With data mobility tools like Hitachi Tiered Storage Manager, technology refreshes can become seamless. Data on assets reaching the end of their lease or life cycle can easily be migrated from one platform to another. Changing application and data life cycle needs also require ongoing alignment of storage tiers with business needs; a good example is a payroll application that requires more computing resources only during certain days of the month. Data mobility tools from Hitachi Data Systems make migration across storage tiers seamless. Seamless data migration during technology refresh and data life cycle management reduces risk, lowers operational cost, and improves application uptime. At maturity Level 2, we recommend customers integrate VTL and archive as storage tiers behind the USP or NSC. HDS customer examples: HDFC Bank, HUK-Coburg.
Level 3: Policy-Based Automation. The next level of the Tiered Storage Maturity Model automates the alignment of storage tiers to business needs. Most end customers or the businesses being served demand SLAs at the application level, e.g., 10ms response time on an Oracle application 95% of the time. Also, based on the life cycle of the application, its performance and availability demands can change. Policy-based automation dynamically moves data across storage tiers based on preset policies; for example, if the Oracle application requires a higher level of performance, the policy engine automatically migrates the data to a higher tier to ensure the SLAs are met. Customers adopting this level of automation benefit from optimized performance, reduced infrastructure and management costs, and assured SLAs. HDS customer example: EDB.
Level 4: Content-Aware Automation. This is the highest level of automation, where, based on metadata, the application is automatically provisioned and tiered to meet SLAs. This is totally self-healing on an intelligent tiered storage platform, and it is the next step toward complete realization of our tiered storage vision.
Common Management. HDS offers a single, common, integrated platform across all levels, with common management for structured and unstructured data. Common management includes monitoring, measurement, and security.
  • At a CAGR of 50%, data grows to roughly 337% of its original volume over three years. To address the challenge of high-growth data in a fixed-budget world, it becomes critical to determine information value and match that value to the right infrastructure cost structure. Basic tiered solutions often create islands of disparate, inefficient storage with limited service capability.
Explain the model:
Reactive: server-internalized or direct-attached storage (highly decentralized, expensive).
Tiered Storage Islands: limited consolidation, limited service levels, decentralized management, limited leverage (utilization, aligning a box to a tier).
Virtualized Storage Tiers: unify the disparate islands into one highly leveraged pool; truly consolidate, increasing utilization; apply service levels without barriers, giving true standard classes of service; uniform manageability allowing consistent processes; manage seamlessly across the enterprise (SOS: seamless and policy-based tier mobility, dynamic policy-based service adjustments).
What level of maturity does the client have now?
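The growth figure above is plain compound-growth arithmetic: a 50% CAGR compounds to 1.5³ ≈ 3.375, i.e. about 337% of the starting volume after three years. A quick check:

```python
# Compound annual growth: final size = start * (1 + rate) ** years
def grown_size(start_tb: float, cagr: float, years: int) -> float:
    return start_tb * (1 + cagr) ** years

# 100 TB growing at a 50% CAGR for 3 years
final = grown_size(100, 0.50, 3)
print(final)            # 337.5 -> ~337% of the original volume
print(final / 100)      # growth factor of 3.375
```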
  • Hitachi Data Systems' remote replication software provides a similar common tool for disaster recovery, from any storage tier to any storage tier. Both Universal Replicator software, our new asynchronous replication software, and Hitachi TrueCopy™ Remote Replication software, our time-tested synchronous and asynchronous replication, provide reliable solutions for replication. [CLICK – show internal-to-internal replication] Replicate between volumes on two USP V/VM systems, using TrueCopy software synchronously for immediate failover with guaranteed data integrity, or Universal Replicator software for remote replication over any distance with guaranteed data integrity. [CLICK – external-to-external replication] Both Universal Replicator software and TrueCopy software allow replication to/from any internal storage volume or to/from any external storage system, providing full heterogeneous remote replication between virtually any two storage systems.
  • CYQ407 – same results (42% market share for Hitachi) ====== This is a rather interesting chart because it details what has transpired in the high-end storage market over the last seven years. In Q1 of calendar 2000, EMC, with its Symmetrix product, had approximately 75% market share, essentially owning the entire high-end storage market. Hitachi, Ltd. had approximately 16%, and IBM had 10%. Now let's take a look at what happened. In June 2000, by virtue of its unique research and development capabilities, Hitachi, Ltd. introduced its massively parallel crossbar switch architecture in the form of the Lightning 9900 series. Look at what happens with Hitachi, Ltd.'s market share from June 2000: in Q2 2000 Hitachi moved up, and it has essentially been gaining share ever since. The company introduced its Lightning 9900 V Series in May 2002, and market share continued to climb. In September 2004 it introduced the Universal Storage Platform, and its market share climbed to record levels at that time. The key point here is that, as a result of its unique and industry-leading R&D capabilities, Hitachi, Ltd. was able to introduce a storage system that effectively enabled customers to do more with less and broke the proprietary business model of one of our key competitors, who in the early 2000s was asking customers to put a maximum of one terabyte per high-end Symmetrix subsystem, each with its own SRDF software license. Hitachi, Ltd. was able to break the bottleneck of that shared-bus architecture: you can scale to 25-plus terabytes per individual subsystem and license your software by the number of terabytes under management. So you are looking at a 25-plus-to-1 ratio that fundamentally changes the scalability and performance characteristics. Our virtualization and business continuity solutions enabled us, as the slide indicates, to slice EMC's market share in half. Hitachi doubled its share, and even IBM managed to grow.
In summary, Hitachi, according to the latest financial analyst rankings, is essentially tied with EMC for high-end market share. There are fluctuations each quarter; what might be a strong quarter for EMC might be the start of our fiscal year (April for Hitachi). But the point is not so much who is precisely X% higher than the other at this juncture; the point is that EMC went from 75% of the market to the mid-30s, while Hitachi has gone from 16% into the 40s. IBM has made progress as well. So this just shows how innovation can have an impact on a very large, influential, and profitable market space.
  • So, now transition to talking about the Suite. The Storage Command Suite provides capabilities across the entire HDS storage line. With 6.0 that also includes the new SMS100 (though only with Device Manager). Most competitors (e.g., EMC) provide different tools on different platforms.
  • Let's review the key benefits of NAS. First of all, NAS is optimized for file sharing, so customers can use one NAS device to displace multiple file servers. This eliminates file server proliferation and reduces capital expenditure. NAS also offers high performance and support across multiple file sharing protocols, be it Windows, Unix, or Linux. By consolidating multiple file servers into one NAS device, customers can reduce management cost and improve operational efficiency: they have fewer servers to manage and fewer software licenses to buy, and they will complete their file sharing and backup tasks faster. NAS is easy to install and manage for file applications; it leverages the existing IP network, and the ease of management will lower OPEX. NAS is also a convenient way to back up data to meet compliance requirements, which is especially critical for remote and branch offices.
  • On March 4, 2008, we announced GA of the Hitachi Essential NAS Platform, which replaces the NAS Blade for the USP V Family and the AMS/WMS NAS Option. We also announced the next generation of High-performance NAS, the 3000 Series; GA is in calendar Q2 2008. The 3100 and 3200 will replace the 2100 and 2200, respectively. The 2000 models and the 2000 Nearline models will remain unchanged in the portfolio.
  • Presenter: Use this slide with the following four to briefly introduce the product. This slide should allow you to introduce the strategic nature of this product, explain how it fits into a larger family of products without creating confusion, and describe the basic functionality this product provides. More detail on what the product does, and how it does it, is provided on the subsequent slides. The Hitachi Essential NAS Platform is an easy-to-use NAS solution, ideal for medium-sized businesses, remote or branch offices, and enterprises needing file serving, backup, or file server consolidation. It replaces the NAS Blade for the Universal Storage Platform family and the Adaptable Modular Storage/Workgroup Modular Storage NAS Option, and it complements the Hitachi High-performance NAS Platform, powered by BlueArc®. This NAS solution consolidates and manages up to 512 terabytes (TB) of data in a two-node cluster, with access to data over the Common Internet File System (CIFS) and Network File System (NFS) protocols. The Essential NAS Platform delivers best-in-class availability and scalability at a low price, and it provides complete, cost-effective data protection with superior Hitachi hardware-based RAID technology and various data protection software such as TrueCopy, HUR, and SyncImage.
  • The Hitachi Essential NAS Platform family is comprised of three models. Field upgrade is available, allowing an easy upgrade path from the entry models to higher-end models:
Upgrade from 1100c → 1300c → 1500c
Offline upgrade of memory
License upgrade to desired model
An optional second power supply can be installed in the field.
Depending on the model, an optional dual-ported 1/2/4Gbps autosensing HBA can be installed; a second HBA is required for NDMP-over-SAN backup.
Depending on the model, additional network card options can be added to each system, either 8 x copper and/or 8 x optical, offering up to 16 additional ports.
  • The key features of the Essential NAS Platform include:
An easy-to-use management interface designed for NAS management based on customer feedback. It has the same design as HiCommand and is fully integrated with Device Manager, Tiered Storage Manager, and Tuning Manager. We offer two options: one for advanced users, the other for inexperienced users.
Best-in-class scalability and availability.
Advanced data protection capabilities.
  • In November 2007, Hitachi introduced the Hitachi High-performance NAS Platform 2000. This platform is designed for medium-sized businesses, just like the Essential NAS Platform. When should you position each of these two products? The key difference between the two NAS platforms is that the Essential NAS Platform does NOT support some of the High-performance NAS Platform's advanced enterprise-class features.
  • Hitachi High-performance NAS Platform offers the highest performance, highest scalability and most advanced virtualization framework today. These compelling capabilities make it the ideal solution for Consolidation and High-performance applications.
  • Data protection has been an IT challenge for decades. Some analysts estimate that backup accounts for over 50% of IT’s time. Data protection technologies are increasingly perceived as being slow, costly, labor-intensive, and unreliable. As a result, many enterprises are enhancing their tape backup strategies with new disk-based options. By tapping the capabilities of disk, such as concurrent read/write and random access, enterprises can complement their tape backup strategies to achieve faster and more reliable backup and recovery while continuing to use tape for what it does best such as off-site storage and long-term archiving.
  • Here you see a media server with a Fibre Channel connection through an optional switch to the Linux server I mentioned previously, and on to an FC-connected disk array. PAUSE When we developed VTFO we designed into it the ability to eliminate bottlenecks. For example, CLICK if the bottleneck is in getting data to our server, you can have multiple front-end connections attached to multiple media servers; CLICK if the bottleneck is getting data to disk, then we provide the ability for multiple connections to the disk arrays. Before I continue demonstrating the scalability that VTFO provides, let me take a minute to share a customer case study with you: PAUSE We have a client who implemented a VTFO system because of tremendous issues they were experiencing with backup and recovery. Prior to implementing VTFO, they had occasion to execute a standard tape recovery of a server that contained 1 million files. As all of the tapes were on-site, it took them 17 hours to do a complete recovery. After implementing VTFO they found a dramatic improvement in their backup and recovery. As an example, they had occasion to recover another server, this time with 1.6 million files (60% more files than before), and they were able to do the recovery in 1 hour and 38 minutes. This led them to keep more data accessible on disk, so they wanted to add more disk CLICK to the 29TB they already had. PAUSE Contrast this with the competition's pre-built appliances, where that type of scalability isn't possible. What would they do? They'd have to add another appliance, since it's bundled, to include another software license, another processor, another server, and more disk, which means you wind up paying for more than you need. And you need to manage a separate appliance. PAUSE Now, we don't dictate what you must invest in when your environment changes and you need to expand. For example, should you want to address a larger library, you can cluster up to 4 servers. CLICK Now, as you know, servers have a lot of different capabilities.
Our competition will recommend a server for you that will potentially have fixed capabilities; for example, a Dell 1750 with one internal 32-bit I/O bus would perform significantly differently than a server with three 64-bit I/O buses (CLICK to show larger servers). Again, if the server is the bottleneck, we allow you to select servers of your choice to eliminate it. Now I'd like to take a few minutes to speak with you about how VTF Open functions in de-staging data from virtual tape to real tape…. Next Slide
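The case study's own numbers imply the size of the throughput gain. This back-of-the-envelope check uses only the file counts and recovery times quoted above; it is illustrative arithmetic, not a benchmark:

```python
# Recovery throughput before and after VTFO, from the case-study figures.
files_before, hours_before = 1_000_000, 17.0          # standard tape recovery
files_after, hours_after = 1_600_000, 1 + 38 / 60     # 1 hour 38 minutes

rate_before = files_before / hours_before   # ~58,800 files/hour
rate_after = files_after / hours_after      # ~980,000 files/hour

speedup = rate_after / rate_before
print(f"~{speedup:.1f}x faster per file restored")   # ~16.7x
```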
  • When we first began this presentation, we agreed that there are a number of challenges facing data centers today. Adding a VTL into a backup environment is an easy way to relieve some, if not all, of those issues. PAUSE We also agreed that there is an ever-increasing challenge today, and that is finding a way to reduce the amount of data that you have to manage and protect. And only those vendors that can effectively give you the capability to reduce the amount of data will be able to provide an economical solution. PAUSE “In order to dramatically improve the management and protection of data … a ‘game changing’ technology is required…
  • Diligent has changed the data protection game. PAUSE/CLICK With breakthrough technology that will reduce required disk backup capacity, on average, by a factor of 25 times or more… PAUSE/CLICK …thereby enabling you to protect more by storing less… PAUSE/CLICK …at an acquisition cost below that of tape. A lot of vendors talk about total cost of ownership (TCO) and the fact that if you invest in a disk-based VTF today, over time you will realize a return that is greater than the available tape alternatives. However, many companies are restrained by their current budgets in making that investment. PAUSE But Diligent has changed all that by making the initial investment, or acquisition cost, of a ProtecTIER™ system less than the comparable tape-based alternative available today.
  • ProtecTIER™ software runs on a Linux based server. PAUSE/CLICK ProtecTIER™ looks at storage systems as one large storage repository. This is unlike the backup application D2D, where each system is attached to a media server and only the media server that created the backup on the system has access to that system. PAUSE/CLICK A critical component of ProtecTIER™ is a patent pending factoring algorithm called HyperFactor. PAUSE HyperFactor has a memory resident index, like a table of contents, that can map the contents of a 1PB repository in 4GB of memory. That 250,000:1 ratio between the repository and the index is a significant differentiator for Diligent and has orders of magnitude greater granularity than anything in the market place. The HyperFactor index looks at a backup stream and finds data that already exists in the repository without doing any I/O. This feature functions even when the repository is up to a petabyte in size. PAUSE/CLICK To show how this works, we’ve depicted different data patterns in the repository with these multi-colored icons. PAUSE/CLICK Here you see a new backup data stream coming in from a backup app. This stream contains some data that already exists (as represented by the multi colored icons) and some data that’s new (as represented by the tan icons). PAUSE/CLICK Now the backup data stream passes through the HyperFactor “filter” which looks at all of the data patterns in the stream and uses the index to filter out the similar items and only store the delta while pointing to the existing data it needs. As a by product of this, one PB of disk can represent, on avg., 25PB of tape data. PAUSE I keep using the word similar because it’s not identical, and that’s because part of the algorithm’s power is that it uses similarities instead of identicals to achieve unmatched performance. 
The most similar pattern in the repository is found with NO I/O; that data is then brought to the server for a computational compare, and only the delta is stored. This is performed without impacting the search time, regardless of repository size. PAUSE Because there is no I/O (we are actually performing a memory search on an index), the search time will not differ noticeably whether the repository holds 10TB or a petabyte. The location and similarity of the data are not affected by naming conventions, shifts, or offsets in position, because we look at the data at the byte level. PAUSE A couple of key points to remember: What happens if the index disappears? Remember, the index is used to locate similarity in the repository; it is not used in the restore process at all. If a backup application's data stream needs to be restored, the data in the repository is self-describing, which means a restore can be done without the index, since the data itself describes what is required to restore the stream. As we said, the index is important for finding similarity; it lives not only in server memory but is also duplicated in two places on RAID-protected disk and kept synchronized. Let's look at the HyperFactor algorithm in a little more detail.
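HyperFactor itself is proprietary and works on similarity rather than exact matches, but the general shape described above — a memory-resident index over a shared repository, with restores that never touch the index — can be sketched in a few lines. For brevity this sketch indexes exact chunk fingerprints instead of similarity signatures; all names are hypothetical:

```python
# Minimal sketch of index-based deduplication in the spirit of the
# description above. Real ProtecTIER matches *similar* data and stores
# byte-level deltas; this simplified version dedupes identical chunks.
import hashlib

class DedupRepository:
    def __init__(self):
        self.index = {}   # fingerprint -> chunk number (memory-resident, like HyperFactor's index)
        self.store = []   # unique chunks, i.e. the on-disk repository

    def ingest(self, stream: bytes, chunk_size: int = 4) -> list:
        """Split a backup stream into chunks; store only new chunks and
        return references into the repository (no disk I/O to find matches)."""
        refs = []
        for i in range(0, len(stream), chunk_size):
            chunk = stream[i:i + chunk_size]
            fp = hashlib.sha256(chunk).hexdigest()
            if fp not in self.index:          # new data: store it
                self.index[fp] = len(self.store)
                self.store.append(chunk)
            refs.append(self.index[fp])       # existing data: just point to it
        return refs

    def restore(self, refs: list) -> bytes:
        # The restore path reads only the repository; the index is not needed,
        # matching the point above about the stored data being self-describing.
        return b"".join(self.store[r] for r in refs)

repo = DedupRepository()
refs = repo.ingest(b"AAAABBBBAAAACCCC")       # the "AAAA" chunk repeats
assert repo.restore(refs) == b"AAAABBBBAAAACCCC"
print(len(repo.store))                        # 3 unique chunks for 4 logical chunks
```

Note how the repository-to-index size ratio works in the real system's favor: mapping 1PB with 4GB of index is 10^15 / (4 x 10^9), i.e. the 250,000:1 ratio quoted above.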
  • CLICK Because of the tremendous reduction in required disk capacity, a much smaller pipe is needed to transfer the data to a remote site. You have now accomplished disaster recovery in addition to backup. If you lose the primary site, the data can be fully accessed at the secondary site. CLICK 2 At the secondary site you may destage to tape. Note that the backup server at the secondary site is part of the same domain as the master server at the primary site. This allows ProtecTIER™ to make the cartridges available to it through a different virtual library. Once at the remote site, the images on the virtual tapes can be vaulted to physical tapes. DTC will have a section in the best-practices guide describing how to implement this.
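The "much smaller pipe" point can be made concrete with simple arithmetic: only the deduplicated (unique) portion of each backup crosses the WAN. The figures and function name below are illustrative assumptions, not vendor sizing guidance:

```python
# Hypothetical WAN sizing: average link speed needed to replicate one
# night's *unique* backup data within the backup window.

def wan_mbps_needed(nightly_backup_gb: float, dedup_factor: float, window_hours: float) -> float:
    """Mbit/s required, assuming only post-dedup data crosses the link."""
    unique_gb = nightly_backup_gb / dedup_factor
    return unique_gb * 8 * 1000 / (window_hours * 3600)   # GB -> Mbit, hours -> seconds

# 10 TB nightly at 25:1 over an 8-hour window needs ~111 Mbit/s,
# versus ~2.8 Gbit/s if the full stream had to be replicated.
print(round(wan_mbps_needed(10_000, 25, 8)))
```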
  • Slide 5: Retention times are getting longer. Regulatory compliance has become a major burden for almost every organization. There are over 10,000 compliance laws to which an enterprise may be subject. This slide summarizes how certain vertical market segments face ever-longer periods during which they MUST retain data and make it available on demand.
  • Slide 6: A typical enterprise archive environment. Digital archiving is not a new phenomenon; various departments and applications have been backing up their data since they first implemented computers. The problem is that this creates silos of information: this type of arrangement does not scale well, and trying to search across silos is almost impossible, or at least very, very expensive.
  • Slide 14: HCAP: How it works. HCAP receives information from data-creating applications such as email, document management, home-grown applications, etc. When that information is ingested into the archive, we first authenticate it and assign a unique fingerprint. HCAP lets a customer select from a variety of authentication algorithms, such as MD5, SHA-1, SHA-256, and more. That information and its metadata are then reliably stored in the archive. The customer can choose the level of data protection, and HCAP will automatically maintain that selection. HCAP uses highly distributed techniques to ingest and store data so that the archive performs to the customer's needs. In addition, data can be indexed on separate, parallel processors so that ingestion and storage performance are not impacted. Once the information is stored and indexed, it can be easily and readily searched; we will explain the search features later. HCAP has been tested to operate using 80 processor nodes on over 2.5PB of storage, with over 2 billion user objects. No one has come close to these numbers, and we have not come close to our top end of scalability; our limits have been based only on the amount of storage and processing equipment used in our labs. Third-party validation has also been secured.
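The fingerprint step described above is standard cryptographic hashing with a selectable algorithm. The sketch below illustrates the idea using Python's hashlib; the function and record fields are hypothetical and are not the HCAP API:

```python
# Illustrative content fingerprinting on ingest: hash the object with a
# customer-selected algorithm and keep the digest alongside its metadata.
import hashlib

def fingerprint(content: bytes, algorithm: str = "sha256") -> dict:
    """Return an archive-style record: algorithm, digest, and object size."""
    h = hashlib.new(algorithm)        # accepts "md5", "sha1", "sha256", ...
    h.update(content)
    return {"algorithm": algorithm, "digest": h.hexdigest(), "size": len(content)}

rec = fingerprint(b"quarterly-report.pdf contents", "sha256")
# Later, an integrity or compliance check can simply re-hash the stored
# object and compare digests to prove it has not been altered.
assert fingerprint(b"quarterly-report.pdf contents")["digest"] == rec["digest"]
```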
  • Non-disruptive service. HCAP has been designed to never lose data. In addition, high-availability features are built in to ensure the user has continuous access. Policies enforce data preservation and retention, and the clustering software handles failures without impact (self-healing) and recovers without effort (self-configuration). For continuous scaling, the cluster also provides automatic load balancing: the software watches for low-water-mark thresholds and then starts distributing data and work to other processors and storage. As the customer adds more processing and storage, the clustering software automatically continues to take advantage of the additional resources. Because the cluster is self-healing, service can be provided at a “relaxed” pace: if a disk or processor fails, the system adjusts, and when the failed resources are replaced, the system reconfigures and rebalances. Remote serviceability tools* enable both Hitachi and the user to investigate problems and schedule routine maintenance activities. Customers should think of HCAP as a “set and forget” solution. * Requires remote connectivity
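The watermark-driven rebalancing described above can be pictured as a simple greedy redistribution. This is a hypothetical sketch of the general technique, not HCAP's actual clustering logic; the threshold and names are illustrative:

```python
# Hypothetical watermark-based rebalancing: when a node's free space falls
# below a low-water mark, move fixed-size units to the least-loaded node.

LOW_WATER_MARK = 0.20   # illustrative: rebalance below 20% free space

def needs_rebalance(node: dict) -> bool:
    return (node["capacity"] - node["used"]) / node["capacity"] < LOW_WATER_MARK

def rebalance(nodes: list, move_unit: int = 10) -> None:
    """Greedily move data units off overloaded nodes onto the least-used node."""
    for node in nodes:
        while needs_rebalance(node):
            target = min(nodes, key=lambda n: n["used"] / n["capacity"])
            if target is node:
                break                 # every node is equally loaded; nothing to do
            node["used"] -= move_unit
            target["used"] += move_unit

nodes = [{"capacity": 100, "used": 90}, {"capacity": 100, "used": 30}]
rebalance(nodes)
print(nodes[0]["used"], nodes[1]["used"])   # load shifts toward the emptier node
```

Adding a node to the list simply gives the loop a new, lightly loaded target, which is the "automatically continues to take advantage of additional resources" behavior described above.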
  • HCAP fully-integrated appliance includes: Hitachi Content Archiver V2.0 software; 1U server nodes (8GB memory), starting with two and scaling up in pairs; two Ethernet switches; two FC switches (16-port, expandable); WMS array (controllers + disk; RAID-6); 42U rack; redundant connectivity, pre-cabled.
  • With our customers' needs in mind, Hitachi introduced its Application Optimized Storage strategy in May 2004. Our objective is to align IT resources with our customers' business objectives to obtain the maximum benefit. This alignment of IT and business objectives involves much more than simply managing data through its lifecycle. It requires understanding the needs of the business in order to develop, manage, and implement a storage infrastructure that optimizes the availability of information in support of business applications at all times. To address this complex problem, HDS Application Optimized Storage solutions are based on an integrated framework of hardware, software, and services that includes application, content, data, and storage services, as this graphic represents. Our vision changes the way customers execute their storage strategies, with tangible benefits. Detailed explanation of the graphic: Each component in the framework plays a critical role in an overall solution, so let's talk about each layer in more detail: Application Services Application services provide the application-centric infrastructure management that is critical to enterprises today and are comprised of the application modules of the HiCommand® Storage Area Management Suite. Application services correlate the availability of business-critical applications with storage network capacity and performance, provide logical-to-physical application path management, and enable application optimization by aligning storage resources with business needs. 
Application Services are delivered through a product set that includes the following application management modules: - HiCommand QoS Modules - HiCommand QoS for Oracle® - HiCommand QoS for Sybase® - HiCommand QoS for Microsoft® Exchange - HiCommand QoS for File Servers - HiCommand Chargeback Module - HiCommand Tuning Manager These Application Services tools are all integrated and allow robust management of the enterprise’s storage infrastructure from application to disk. Content Services Companies run a wide variety of applications in support of their business processes and understanding the lifecycle requirements of the data generated by these applications is a critical component of Application Optimized Storage. Therefore, content services represent any applications that provide the ability to index, store, search, and retrieve information. These applications, including databases, messaging, file systems, ERP, and CRM, are all considered content services and provide critical information about the lifecycle requirements of application data. Application Optimized Storage solutions use this application awareness to appropriately optimize storage infrastructure to meet application requirements. Unlike other storage vendors who have chosen to deliver their own proprietary application solutions, Hitachi Data Systems is committed to an open, collaborative approach. We partner with leading application vendors including IBM, Microsoft, OpenText (IXOS), Oracle, and Sybase, providing customers with the flexibility to choose the applications they need to support their business. Two examples of Content Services offered by Hitachi Data Systems include: Message Archive for E-mail Message Archive for E-mail, powered by IXOS software, provides users with a limitless mailbox by seamlessly offloading messages and attachments to archival storage. 
This lowers e-mail server loads and greatly reduces the number of servers and software licenses required to support a given e-mail user population, thereby improving their efficiency and performance and lowering total cost of ownership. Message Archive for E-mail improves productivity as it reduces the time users and IT administrators alike spend managing e-mail, minimizes costs, and expedites retrieval of e-mails required for legal discovery or auditing purposes. Message Archive for Compliance The Message Archive for Compliance solution helps customers optimize their e-mail systems while providing message indexing, search and retrieval capabilities, audit trails, and policy management to preserve messages for mandatory retention periods. Message Archive for Compliance combines Hitachi storage with Hitachi Data Retention Utility software for WORM protection, IXOS archive software including Compliance Package, and Hitachi Data Systems implementation services. It enables companies to retain an unalterable archive of e-mail and instant messages for the fixed period of time mandated by SEC Rule 17a-4, Sarbanes-Oxley, Basel II, and other regulatory requirements. With these archiving solutions as starting points, Hitachi Data Systems will roll out additional Content Services for applications in areas such as rich media and health care. Data Services A common set of data management tools is a key component of Application Optimized Storage. Based upon an understanding of application storage requirements, storage cost, performance, functionality, and availability can be optimized using comprehensive data management tools for backup, migration, replication, and security. 
Data Services products from Hitachi Data Systems include: - Hitachi HiCopy Cross-System Copy software - Hitachi CopyCentral z/OS® Business Continuity Manager software - Hitachi QuickShadow™ Copy-on-Write Snapshot software - Hitachi ShadowImage™ In-System Replication software - Hitachi TrueCopy™ Remote Replication software - Hitachi Data Retention Utility software Hitachi Data Systems is recognized as a leading provider of copy and data protection products as well as associated business continuity and data migration design and implementation services in both open systems and mainframe environments. These products and skills are essential for tiered storage deployments which match data value to appropriate classes of storage systems. Storage Services Storage services provide the foundation for all Application Optimized Storage solutions by providing a heterogeneous, multi-tier storage infrastructure supported by common storage management tools. This architecture allows the exact matching of application priority policies and storage infrastructure across an unmatched range of high-end enterprise and midrange storage products that provide a broad selection of performance, availability, functionality, and price/performance attributes. The components of storage services are heterogeneous, multi-tier infrastructure, connectivity, and common management: Infrastructure Hitachi Lightning 9900™ V Series enterprise storage systems Hitachi Lightning 9900™ V Series enterprise storage systems provide seamless scalability with nondisruptive expansion to over 140TB to simplify your storage infrastructure through massive consolidation. When combined with Hitachi storage software and the HiCommand® Storage Area Management Suite, these systems support Application Optimized Storage™ solutions, enable “set and forget” management, protect data assets, and optimize resources. Lightning 9900 V Series systems are powered by the Hi-Star™ crossbar switch architecture. 
This assures you of no single point of failure and instant, 24/7 data access. We even back it up with a 100% data availability guarantee. The Lightning 9900 V Series systems support not only open systems but also mainframe environments, through FICON and ESCON as well as copy software compatibility. The Enterprise Storage Group recently reported that the Lightning 9980V storage system is unsurpassed for the kind of high-end, multidimensional scalability required for serious storage consolidation. Hitachi Thunder 9500™ V Series modular storage systems The Hitachi Thunder 9500 V Series modular storage systems provide industry-leading (up to 64TB) capacity, performance, and connectivity in a small footprint. These systems can grow with your business, addressing applications such as data replication, message archiving, and regulatory compliance. For economical information lifecycle management, match the cost of storage to the value of your data by tiering storage down from Lightning 9900™ V Series enterprise systems to lower-cost Thunder 9500™ V Series models. SATA Intermix Option New global regulatory requirements are driving demand for automated storage solutions that simplify the management and migration of data throughout the entire data lifecycle. The Serial ATA (SATA) Intermix Option for the Thunder 9500 V Series of modular storage systems can be added to existing Thunder 9585V™, Thunder 9580V™ and Thunder 9570V™ high-end modular storage systems, enabling customers to create the world’s first “DLM in a box”—high-speed Fibre Channel and lower cost native SATA tiered within one storage system. Connectivity Storage Area Networks SANs are an essential part of Hitachi Data Systems’ delivery of Application Optimized Storage solutions. SANs make large storage pools shareable across the enterprise, centralize storage management, and dramatically improve storage utilization, resulting in lower costs. 
Yet they can simultaneously provide better performance for the applications that drive business. Our SAN solutions encompass storage systems, switches, servers, management software, multi-protocol support, services, and other storage network components developed by Hitachi, our alliance partners, and third-party providers. Working with the storage networking industry leaders, such as Brocade, Cisco, CNT, and McDATA, Hitachi Data Systems provides extensive connectivity options, including IP (iSCSI, FCIP, iFCP) and Fibre Channel configurations. In addition, the Lightning family of storage systems supports both ESCON and FICON protocols for mainframe connectivity concurrently with open systems protocols. This makes the Lightning family the platform of choice for massive consolidation projects. Virtual Storage Ports/Host Storage Domains Virtual storage ports, available in both the Lightning 9900 V Series and Thunder 9500 V Series storage systems, enable each Fibre Channel physical port to support 128 heterogeneous open systems servers. Each server has its own secure storage partition and bootable LUN 0 through Host Storage Domains. This capability simplifies the storage network infrastructure, eases management, and enables large-scale consolidation, resulting in lower TCO. Network Attached Storage For many applications, especially Web, design, and medical, the concern is not bandwidth but file access response time. Hitachi solutions for NAS, the HDS-NetApp® Enterprise NAS Gateways and the Lightning NAS Blade, help deliver cost-efficient storage utilization across the enterprise. Common Storage Management Common Storage Management is achieved through standards-based, rich management tools that provide IT executives with a single point of control for both application and infrastructure requirements. 
To fully benefit from Application Optimized Storage, all of these elements, including business continuity characteristics, array performance, and network fabric, need to be defined, managed, and mapped to what the business requires from the applications in order to optimize delivery of value to the business. Common Storage Management is perhaps the most important component of Application Optimized Storage. Rather than provide end users with disparate interfaces for disparate platforms, essentially resulting in multiple islands of storage and inaccessible information, Hitachi Data Systems provides customers with the same software, the same management interfaces, and the same tool sets to manage all heterogeneous storage systems from a single console. The final key components of Application Optimized Storage are Services and Best Practices. To ensure organizations maximize their investment in Application Optimized Storage solutions, Hitachi Data Systems offers a comprehensive suite of technology, storage, education, and professional services. Global Solution Services consultants can help you plan, design, implement, integrate, manage, and optimize storage infrastructure solutions that meet your needs. Areas in which our consultants assist customers include: - Industry Solutions—Enterprise content archival solutions that incorporate hardware, software, and professional services to address your business and regulatory compliance requirements. - Application Optimized Solutions—Bridge the gap between business applications and IT’s ability to precisely deliver service levels with GSS strategic consulting, design integration, and robust deployment capabilities. - Storage Services—Services that apply proven best practices along with appropriate tools and training to help you plan, design, implement, integrate, manage, optimize, and maintain your storage infrastructure. 
  • - Product-Based Services—Implementation, simplification, and optimized ROI and TCO for Hitachi Data Systems and select third-party products. - Education Services—Help you to improve your staff efficacy and efficiency in implementing and supporting multi-vendor storage solutions.
  • Our calendar 2008 company outlook – The big message here is that Hitachi is leading the industry in storage virtualization! We are the leaders in storage virtualization, and there are several significant proof points to support this. Starting with… 1) Hitachi is really the only company with storage virtualization technology in its flagship products. Hitachi’s USP and NSC, our flagship enterprise storage virtualization offerings, have virtualization technology embedded in them. By contrast, if you look at competitors such as EMC, virtualization is not embedded in their flagship offering, the DMX: you would have to buy the DMX and also buy Invista, their virtualization offering, which is a peripheral switch-hybrid device. If you want to buy virtualization from IBM, it is not available in their flagship DS8000 product; it is available only in the form of an appliance that sits in the network, the SVC (SAN Volume Controller). So, again, Hitachi is so dedicated to storage virtualization that our flagship products are embedded with virtualization technology. That is a true differentiator in the market. 2) Additionally, Hitachi pioneered a revolutionary storage architecture. With its intelligent virtual controllers, we have separated the brain from the body of storage: the innovation and the intelligence from the commodity, the body being the disks. That has enabled us to disrupt the market once again, much like we did when we introduced the Hi-Star architecture in 2000 (we’ll touch on that as well). 3) This last bullet covers our overall outlook for the year. We believe we exhibit the highest levels of hardware and software sophistication, demonstrated by our platform direction and our portfolio of common storage services. 
Hitachi is truly the only company that can provide customers with a single replication engine and a single management interface across all storage assets, regardless of manufacturer, type, or price band. These are the most advanced common storage services across all platforms available in the market today.
  • Some interesting facts: – 20% of data is structured (databases, transactional systems, data warehouses); 80% is unstructured (objects and files) or semi-structured (e-mail) – Less than 5% of unstructured data is managed through content management, and that share is shrinking – Unstructured data (files, e-mail, content) is growing at 10x the rate of structured data – 2,272PB of unstructured data today, 20,000PB by 2010; most is dormant after 90 days (ESG). The value of the file: content is king – file attributes enable basic classification – content attributes (metadata) enable richer classification and descriptions – content inside the file enables text search, which is where the informational value lies.

Presentación Hitachi Data Systems Logicalis VT Buenos Aires Presentation Transcript

  • 1. Trends in Storage Virtualization and Critical Information Technology: added value for your business. Sandra.Ryan@hds.com © 2006 Hitachi Data Systems
  • 2. Agenda • Introduction to Hitachi Data Systems • Storage services for the midrange and the enterprise • Archiving solutions • Value-added solutions for exploiting information – Content – Search – Protection
  • 3. Introduction
  • 4. Hitachi Data Systems, a subsidiary of Hitachi, Ltd. (NYSE:HIT) • Founded in 1989 • Direct and indirect sales in more than 170 countries and regions • 3,400 employees and growing • Hitachi, Ltd.'s primary focus for storage infrastructure solutions, management software, and consulting services • Award-winning excellence in customer service
  • 5. Hitachi, Ltd. (NYSE:HIT / TSE:6501), one of the world's largest electronics companies • Founded in 1910 • Produces more than 20,000 products – 910 subsidiaries – 390,000 employees – More than 700 Ph.D.s • Total FY07 sales of $112.2B • FY06 R&D investment: $4.5B, approximately 40% of it in IT • Approximately $5.6B in cash • No. 48 in the 2007 FORTUNE Global 500® ranking
  • 6. Hitachi, Ltd. FY2007 revenue by segment: FY2007 revenues of $112.2B. [Pie chart across segments: Power and Industrial Systems; Information Systems and Telecommunications; Digital Media and Consumer Products; High Functional Materials and Components; Electronic Devices; Logistics, Services and Other; Financial Services]
  • 7. Evolution of IT environments. IT islands: separate ERP SAN, backup SAN, engineering SAN, and midrange DAS. Point solutions: host SRM, network/SAN management.
  • 8. Evolution of IT environments. From IT islands to consolidation: consolidated storage accessed over Fibre Channel, FICON, and iSCSI, with security and ShadowImage replication. Network solutions add reporting and discovery; topology, capacity, and utilization; and availability.
  • 9. Evolution of IT environments. IT islands, then consolidation, then value-added services: Dynamic Provisioning, storage virtualization, logical partitioning, security, storage utility, DLM, storage QoS, remote replication, and continuous data protection. Utility solutions add chargeback, SLA, asset and host utilization; forecasting and capacity planning; and provisioning by QoS with policy-based automation.
  • 10. Business challenges and IT challenges • Reduce costs – improve staff efficiency – scalability • Simplify the complex – consolidate – automate – interoperability • Improve SLAs/SLOs – adapt to dynamic change and growth – predict consumption – meet service metrics • Minimize risk – protect data – guarantee availability. Do more with less.
  • 11. Hitachi Services Oriented Storage Solutions. Lets storage services be delivered according to business needs, not technology characteristics: data replication, nondisruptive data migration, volume management, I/O load balancing, dynamic provisioning, file management, data de-duplication, data classification, business continuity, and content management services.
  • 12. Solutions: an integrated platform for your information. Structured data (RDB, applications): high-end enterprise application/DB-level awareness with tiered storage and virtualization via the Universal Storage Platform family; midrange application/DB enablement with Hitachi Adaptable Modular Storage and Workgroup Modular Storage. Unstructured data (files, metadata, content): archiving/object/content foundation for open, scalable, integrated content solutions; NAS in two key segments: the Hitachi High-performance NAS Platform for high-throughput applications, and the Hitachi Essential NAS Platform™ for SAN/NAS consolidation and file and print services. An integrated strategy throughout: common protection solutions, common storage management, common storage and data management, tiered storage, data protection, security, and common search.
  • 13. Hitachi Services Oriented Storage Solutions architecture. Applications (e-mail, CRM, file/print, database, ERP, ECM) sit on object services (index, search, classification, security), file services (virtualization, replication, migration, de-duplication, security, encryption, archiving), and block services (virtualization, discovery, partitioning, provisioning, volume management, replication, migration, security, archiving), all over tiered physical storage (Fibre Channel, SATA, archive, tape). Sample storage metrics and practices: QoS, storage economics, SLA, data classification, I/O, RPO, RTO, risk analysis, compliance, metering and chargeback, consolidation and utilization.
  • 14. The HDS platform. IT applications as data producers: SAP, SFA, Exchange, NetBackup, Oracle, Notes, TSM. Data consumers: virtual NAS, content management, LUN, tape archival. Data storage: modular, enterprise, and virtualized tiers managed with Device Manager, Tuning Manager, and Tiered Storage Manager. Data protection and DR options: snapshot, sync mirror, async mirror.
  • 15. The HDS platform. The same applications (SAP, SFA, Exchange, NetBackup, Oracle, Notes, TSM) are served through HNAS, VTL, HCAP, and LUNs on the SAN, with HDS USP, USP V, and AMS arrays virtualizing third-party storage (IBM Shark, EMC DMX, HDS AMS). Management via Device Manager, Tuning Manager, and Tiered Storage Manager; data protection via snapshot, sync mirror, and async mirror.
  • 16. An integrated portfolio of scalable solutions. Intelligent virtual storage controllers: USP V, USP, USP VM. Advanced modular storage systems: SMS, WMS100, AMS200, AMS500, AMS1000. The entire family is managed from one common suite, and every model is ready for tiered storage. (Axes: functionality demanded vs. size of organization served, from small business or department to large business.)
  • 17. Hitachi Data Systems midrange solutions
  • 18. Agenda • Hitachi Data Systems: product portfolio • Hitachi Adaptable Modular Storage and Workgroup Modular Storage • Key features – RAID-6 – cache partitioning – multiprotocol: FC SAN/iSCSI – volume migration • Modular software – local copies: Hitachi ShadowImage – remote replication: Hitachi TrueCopy™
  • 19. An integrated portfolio of scalable solutions. Intelligent virtual storage controllers: USP V, USP, USP VM. Advanced modular storage systems: SMS, WMS100, AMS200, AMS500, AMS1000. The entire family is managed from one common suite, and every model is ready for tiered storage. (Axes: functionality demanded vs. size of organization served.)
  • 20. Hitachi products for SMB and the midrange (throughput vs. scalability, with upgrade paths between models): – SMS: 2GB cache; 6, 8, or 12 SAS or SATA II HDDs; up to 9TB; iSCSI attach – WMS100: 2GB cache; up to 105 SATA II disks; up to 78.5TB; up to 512 LUNs; up to 512 hosts – AMS200: 4GB cache; mix of up to 105 SATA II and Fibre Channel disks; up to 72TB; up to 512 LUNs; up to 512 hosts – AMS500: 8GB cache; mix of up to 225 SATA II and Fibre Channel disks; up to 162TB; up to 2,048 LUNs; up to 512 hosts – AMS1000: 16GB cache; mix of up to 450 SATA II and Fibre Channel disks; up to 324TB; up to 4,096 LUNs; up to 1,024 hosts. All WMS/AMS models attach via Fibre Channel or iSCSI.
  • 21. Departmental solution: Hitachi Simple Modular Storage 100 • Simple to install and use – installs in minutes – requires no storage expertise – GUI interface • Flexible – high-performance SAS or high-capacity SATA II disks – 6, 8, or 12 HDDs – iSCSI connectivity (Fibre Channel models available in mid-2008) – dual-controller models • Data protection – dual parity (RAID-6) – automatic snap and full copies – remote replication • Self-healing and auto-migration – failed drives are rebuilt automatically without pulling the failed drive – automatic migration to larger models
  • 22. Departmental solution: Hitachi Simple Modular Storage 100. For a wide variety of environments: Microsoft Windows 2003, Windows 2008* (64-bit), Windows XP, Windows Vista, Sun Solaris, HP-UX, IBM® AIX®, Red Hat Linux, SuSE Linux, VMware, Novell NetWare
  • 23. Hitachi midrange line (throughput vs. scalability, with upgrade paths): – WMS100: 2GB cache; up to 105 SATA disks; up to 52.5TB; up to 512 LUNs; up to 512 hosts – AMS200: 4GB cache; mix of up to 105 SATA and Fibre Channel disks; up to 49.5TB; up to 512 LUNs; up to 512 hosts – AMS500: 8GB cache; mix of up to 225 SATA and Fibre Channel disks; up to 109.5TB; up to 2,048 LUNs; up to 512 hosts – AMS1000: 16GB cache; mix of up to 450 SATA and Fibre Channel disks; up to 219TB; up to 4,096 LUNs; up to 1,024 hosts. All models attach via Fibre Channel or iSCSI.
  • 24. AMS500 internal architecture: dual controller
  • 25. Introducing Adaptable Modular Storage and Workgroup Modular Storage • Adaptable midrange storage for every business • Reliable, cost-effective option for tiering data in large enterprises • Core functionality: – leading technology • cache partitioning • nondisruptive volume migration • "power down" feature for cost savings • RAID-6 – capacity scales from 285GB to 424TB – no single point of failure, for maximum uptime – choice of Fibre Channel or iSCSI server connections – choice of Fibre Channel or SATA II hard drives
  • 26. Adaptable Modular Storage / Workgroup Modular Storage: functionality, capability, value • Functionality: RAID-6 (dual parity) • Capability: parity is stored on two drives per RAID group, which allows two drives to fail simultaneously without losing data or access; a hot spare takes over while the group rebuilds • Business value: – improved availability – RAID group rebuilds can be scheduled outside normal business hours
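Slide 26 states that dual parity lets a RAID-6 group survive any two simultaneous drive failures. The sketch below shows why two independent parity syndromes (the usual P = XOR and Q = a weighted sum over GF(2^8)) make that algebraically possible. All names are illustrative; this is a textbook toy, not Hitachi's actual controller code.

```python
def gf_mul(a, b):
    """Multiply two GF(2^8) elements (polynomial 0x11d)."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)        # a^254 = a^-1 in GF(2^8)

def parity(data):
    """P = XOR of all data bytes; Q = sum of g^i * d_i with g = 2."""
    P = Q = 0
    for i, d in enumerate(data):
        P ^= d
        Q ^= gf_mul(gf_pow(2, i), d)
    return P, Q

def recover_two(data, x, y, P, Q):
    """Rebuild lost data disks x and y (x != y) from the survivors
    plus P and Q. Only entries other than x and y are read."""
    Pxy = Qxy = 0
    for i, d in enumerate(data):
        if i in (x, y):
            continue              # these disks are "failed"
        Pxy ^= d
        Qxy ^= gf_mul(gf_pow(2, i), d)
    Pd, Qd = P ^ Pxy, Q ^ Qxy     # contribution of the two lost disks
    gx, gy = gf_pow(2, x), gf_pow(2, y)
    # Solve dx ^ dy = Pd and gx*dx ^ gy*dy = Qd:
    dx = gf_mul(Qd ^ gf_mul(gy, Pd), gf_inv(gx ^ gy))
    dy = Pd ^ dx
    return dx, dy
```

With a single XOR parity (RAID-5) the second equation is missing, which is why only one failure can be repaired there.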
  • 27. Adaptable Modular Storage / Workgroup Modular Storage: functionality, capability, value • Functionality: Modular Volume Migration • Capability: volumes can migrate – from one disk type to another (e.g., SATA to FC) – from one RAID type to another (RAID-5 to RAID-6) – or between RAID group sizes (5+1 to 7+1), all without the server having to remount • Business value: – nondisruptive data movement – better throughput and availability at lower cost – the foundation of tiered storage
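The nondisruptive migration described on slide 27 moves a volume while the host keeps writing. A minimal copy-and-catch-up sketch of that idea, with hypothetical names (the real controller logic is far more involved): writes that land during the bulk copy are tracked in a dirty set and re-copied before an atomic switch of the mapping.

```python
class MigratingVolume:
    """Toy model of non-disruptive block migration (assumed mechanics)."""

    def __init__(self, source, target):
        self.source = source      # list of blocks on the old disks
        self.target = target      # same-sized list on the new disks
        self.active = source      # where host I/O currently lands
        self.dirty = set()        # blocks written during the copy

    def write(self, idx, block):
        self.active[idx] = block
        if self.active is self.source:
            self.dirty.add(idx)   # must be re-copied before the switch

    def migrate(self):
        for i, blk in enumerate(self.source):
            self.target[i] = blk          # bulk background copy
        while self.dirty:                 # catch up on concurrent writes
            i = self.dirty.pop()
            self.target[i] = self.source[i]
        self.active = self.target         # atomic switch; no remount
```

After `migrate()` returns, host writes transparently land on the new disks, which is the behavior the slide calls "sin que el servidor tenga que hacer remount."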
  • 28. Adaptable Modular Storage / Workgroup Modular Storage: functionality, capability, value • Functionality: cache partitioning • Capability: cache can be partitioned into different page sizes to match application requirements, for example 4KB blocks for databases, 16KB blocks for file systems, and 64KB blocks for media files • Business value: – better I/O throughput for mixed workloads – less wasted cache – higher cache hit rates. Typical case: 4KB database blocks in 16KB cache pages waste 75% of the cache!
  • 29. Adaptable Modular Storage / Workgroup Modular Storage: functionality, capability, value • Functionality: cache partitioning (continued) • With partitioning, each workload gets matching page sizes: 4KB database blocks in 4KB cache pages, 16KB file system blocks in 16KB cache pages, 512KB video data blocks in 64KB cache pages
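The "75% wasted" figure on slides 28 and 29 is simple arithmetic: a 4KB block parked in a 16KB page leaves 12KB of the page idle. A one-line helper (illustrative only, not a product API) makes the numbers concrete:

```python
def wasted_fraction(io_block, cache_page):
    """Fraction of each cache page left unused when one I/O block of
    io_block bytes occupies a cache_page-byte page."""
    return 1 - io_block / cache_page

# The slide's scenarios:
db_in_big_pages = wasted_fraction(4096, 16384)    # mismatched: 0.75
db_in_matched = wasted_fraction(4096, 4096)       # partitioned: 0.0
```

Matching page size to I/O size is exactly why the slide claims higher hit rates: the same cache holds four times as many 4KB database blocks.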
  • 30. Adaptable Modular Storage / Workgroup Modular Storage: functionality, capability, value • Functionality: multiprotocol support • Capability: a single storage subsystem can be accessed via Fibre Channel SAN, iSCSI SAN, or BOTH • Business value: – storage consolidation for Fibre Channel servers and for lower-cost iSCSI-based servers – you can move from an iSCSI storage network to Fibre Channel without replacing the storage
  • 31. Protocol options: Fibre Channel or iSCSI. WMS100/AMS200/AMS500: dedicated FC or dedicated iSCSI arrays. AMS1000: multi-protocol FC/iSCSI in a single array. Note: a maximum of 4 (four) iSCSI ports are supported on the AMS1000 (even as single protocol).
  • 32. Multiprotocol for the WMS100, AMS200, and AMS500: web, backup, Exchange, and SQL application servers reach the same AMS200 over a Fibre Channel SAN and over iSCSI on the LAN.
  • 33. Adaptable Modular Storage / Workgroup Modular Storage: functionality, capability, value • Functionality: power savings • Capability: spin-down of drives in RAID groups that are accessed infrequently • Business value: – lowers datacenter power and cooling consumption – extends the service life of the disk drives – the customer chooses which disks spin down and can integrate the feature with applications such as VTL, backup, and archiving
  • 34. Data protection. Local copies, many uses • Copies of production data for development • New application versions and testing • Decision support based on real information • Trials of new third-party software or hardware • Backup to tape – CommVault, VERITAS Software Backup Exec, NetBackup. Production database processing continues unaffected while S-Vols hold test data versions 1 through 3 and feed the tape backup.
  • 35. Advanced protection: TrueCopy and ShadowImage software • Local copies – nondisruptive backups – fast restores • Remote disaster recovery – remote tape archive and vault • Nondisruptive testing of the disaster recovery plan: TrueCopy remote copy links P-Vols in the local data center to S-Vols at the remote site, and ShadowImage copies at each end support application testing, backup to tape, and testing the DR plan with current data.
  • 36. High-end solutions
  • 37. The monolithic approach: replacing investments already made
  • 38. The Hitachi approach: assimilating existing investments, consolidation, and functionality. Universal virtualization up to 247 petabytes.
  • 39. An integrated portfolio of scalable solutions. Intelligent virtual storage controllers: USP V, USP, USP VM. Advanced modular storage systems: SMS, WMS100, AMS200, AMS500, AMS1000. The entire family is managed from one common suite, and every model is ready for tiered storage. (Axes: functionality demanded vs. size of organization served.)
  • 40. In May 2007 Hitachi announced the Universal Storage Platform V (USP V/VM) • All the functionality of the high-end HDS Universal Storage Platform (USP) • 100% data availability • Controller-based virtualization of external storage • Copy, migration, and replication of heterogeneous data • Dynamic tiered storage • Standard 19" rack with 200V single-phase power
  • 41. Increased performance and scalability: Hitachi Universal Storage Platform V • Outperforms other high-end storage with 3.5 million IOPS of maximum performance • 4Gb/s Fibre Channel switched backplane • Up to 247PB and 512GB of cache • Internal 1TB SATA II disks
  • 42. In September 2007 Hitachi announced the Universal Storage Platform VM • All the functionality of the Hitachi Universal Storage Platform V • Same microcode, software, interoperability, and external storage support as the Universal Storage Platform V • Differs only in: – capacity – performance – scalability – licensing • Available in single-rack and dual-rack configurations
  • 43. Fibre Channel Arbitrated Loop backplane: FC ports connect through crossbar switches (CSW) to control memory and data cache, with back-end FC-AL loops 0 through 3 to the disks.
  • 44. Hitachi USP V: Fibre Channel switched. The same front end, but the back-end FC-AL loops are replaced by a 4Gb/s Fibre Channel switch (FSW) serving paths 0 through 3.
  • 45. Hitachi Universal Star Network™: massively parallel crossbar switch, 3.5 million IOPS of maximum performance
  • 46. Beyond virtualization: Hitachi Dynamic Provisioning on the USP V. Thin provisioning is a powerful form of virtualization. With traditional provisioning, creating a 2TB volume allocates the full 2TB even if only 300GB are used. With Hitachi Dynamic Provisioning, a 2TB volume is created but only the 300GB actually consumed are allocated; the rest (1.7TB) remains available for other applications.
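Slide 46's behavior, a 2TB virtual volume that consumes only the 300GB actually written, can be sketched as allocate-on-write pages drawn from a shared pool. The class names and the 32MB page size below are assumptions for illustration, not Hitachi Dynamic Provisioning internals.

```python
class StoragePool:
    """Shared pool of physical pages (hypothetical sketch)."""

    def __init__(self, physical_bytes, page_size):
        self.page_size = page_size
        self.free_pages = physical_bytes // page_size

    def allocate(self):
        assert self.free_pages > 0, "pool exhausted"
        self.free_pages -= 1
        return object()              # stands in for a real physical page


class ThinVolume:
    PAGE = 32 * 2**20                # 32MB allocation page (assumed size)

    def __init__(self, virtual_size, pool):
        self.virtual_size = virtual_size   # what the host sees, e.g. 2TB
        self.pool = pool
        self.pages = {}                    # virtual page index -> pool page

    def write(self, offset, length):
        first = offset // self.PAGE
        last = (offset + length - 1) // self.PAGE
        for idx in range(first, last + 1):
            if idx not in self.pages:      # allocate only on first write
                self.pages[idx] = self.pool.allocate()

    def allocated(self):
        return len(self.pages) * self.PAGE
```

The host addresses the full virtual size from day one; physical pages are consumed only as data lands, which is the 300GB-of-2TB scenario on the slide.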
  • 47. Virtualization: intelligent consolidation • Consolidating infrastructure: – simplifies complexity – reduces diversity – eases scalability – cuts administration costs – reduces environmental requirements – enables advanced functions • Definition (source: www.wikipedia.org): in IT, virtualization refers to an abstraction layer over IT resources that isolates their physical characteristics from the ways we interact with them – for example, making a single physical resource appear to operate as multiple logical resources – or making multiple physical resources appear as a single logical resource • How?
  • 48. USP V/VM externally attached storage: virtualization implementation. An ELUN is a LUN that is mapped to a LUN in an external storage device. Mainframe hosts (ESCON/FICON, IBM® z/OS®) and open hosts on the Fibre Channel SAN see LUNs on target ports; each host-view LUN maps through a VDEV either to internal LDEVs on the USP V/VM's internal disks, or via an external port to LUNs in the array group of an externally attached storage system, mapped 1:1 or N:1.
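Stripped to its essence, the ELUN scheme on slide 48 is a lookup: a host-visible LUN resolves through a virtual device either to internal storage or to a LUN on an external array. A hypothetical sketch (the identifiers below are invented for illustration, not real device names):

```python
# Host-visible LUN -> virtual device -> backing store.
# "internal" entries live on the controller's own disks;
# "external" entries are ELUNs on a virtualized third-party array.
virtual_map = {
    "LUN-00": {"vdev": "VDEV-0", "backing": ("internal", "LDEV-10")},
    "LUN-01": {"vdev": "VDEV-1", "backing": ("external", "ARRAY-A:LUN-3")},
}

def resolve(lun):
    """Return (kind, device) for a host-visible LUN."""
    kind, device = virtual_map[lun]["backing"]
    return kind, device
```

Because the host only ever sees the left-hand keys, the backing store can be remapped (for example, during a tier migration) without the host noticing, which is the point of controller-based virtualization.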
  • 49. The same local-copy solution for heterogeneous storage. Simplifies heterogeneous administration • Within the USP V/VM • On other Hitachi storage • On third-party storage. Hot backups flow from primary volumes on the USP V/VM to a backup pool spanning Thunder 9585V, WMS100, IBM DS4000 series, and EMC CLARiiON. Benefits: • reduces licensing costs • simplifies administration across different platforms
  • 50. The same copy solution between heterogeneous storage tiers. Data mirroring between tiers with ShadowImage software: primary volumes on the USP V/VM, AMS500, and AMS200 mirror to secondary volumes on the IBM DS4000 series and EMC CLARiiON for backup copies and copies for testing.
  • 51. Private virtual storage machines. Windows, UNIX, and test hosts running Exchange, SAP, and testing workloads each get their own partition of the USP V/VM (disk, cache, and ports), with volumes spread across AMS500, Lightning 9980V with SATA, Thunder 9585V, IBM ESS, and EMC DMX.
  • 52. Adjusting PVM partitioning as application requirements change. Cache and other USP V/VM resources are reallocated between the Exchange, SAP, and test partitions as needed, while the underlying volumes stay on AMS500, Lightning 9980V with SATA, Thunder 9585V, IBM ESS, and EMC DMX.
  • 53. Nondisruptive data migration between storage tiers. HiCommand® Tiered Storage Manager software; efficient use of resources. This week's data stays on the USP V/VM while last week's, last month's, and older data migrate down across Thunder 9585V, WMS100, IBM DS4000 series, and EMC CLARiiON.
  • 54. Considerations for storage tiers • Tier attributes: – availability – performance – cost – protection and compliance • Tier types: – high-performance tiers – lower-performance tiers – virtual tape library – archive
  • 55. A mature model for data tiering • Align tiers to business needs (manual): simplify management, increase utilization, consolidate assets, reduce management overhead • Optimize tiered storage (manual): integrate NAS, VTL, and archive tiers; enhance performance; optimize SLAs; reduce risk; lower operational cost • Intelligent SLAs: complete business continuity, self-optimizing/self-healing automation, reduced complexity, lowest TCO
  • 56. Storage tiers: categorization maturity. From storage islands (highly decentralized administration, expensive to maintain, reactive) to consolidated storage tiers (tier definitions, a common administration infrastructure, base service SLAs, proactive) to services-oriented virtualized storage solutions (on-demand service, standard service classes, fully dynamic and virtualized, cost-awareness, policies and metrics, storage aligned with IT services and business focus). Infrastructure maturity runs from a functional base through utility to services-based value.
  • 57. Remote replication of data between heterogeneous storage. Remote replication with Universal Replicator software: at the primary site (Buenos Aires), Windows and UNIX hosts running Exchange and SAP write to primary volumes on a USP V/VM that virtualizes Lightning 9900 V, AMS500, and EMC Symmetrix; the remote copy lands on a matching USP V/VM configuration at the secondary site (Va Angostura), where secondary volumes hold the backups.
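Universal Replicator, as the slide describes it, copies writes to the remote site asynchronously. A toy journal-based model of the general idea (assumed mechanics, not the actual product protocol): writes are acknowledged locally and drained to the remote volume in order, so the remote copy is always write-order consistent but lags by the journal depth, which is effectively the RPO.

```python
from collections import deque

class AsyncReplica:
    """Toy journal-based asynchronous remote copy."""

    def __init__(self, size):
        self.primary = [None] * size
        self.remote = [None] * size
        self.journal = deque()            # ordered, not-yet-shipped writes

    def write(self, idx, block):
        self.primary[idx] = block         # host is acknowledged here
        self.journal.append((idx, block)) # shipped to the remote site later

    def drain(self, n=None):
        """Apply up to n journal entries remotely, in write order."""
        count = len(self.journal) if n is None else n
        for _ in range(count):
            idx, block = self.journal.popleft()
            self.remote[idx] = block
```

Draining in strict order is what keeps the remote image crash-consistent even when the link falls behind.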
  • 58. 2000-2007: Hitachi leads the high-end storage market. [Chart: high-end enterprise disk storage share (end users), 0-75%, by quarter from Q2 2000 through Q3 2007. Source: Wachovia Capital Markets (October 2007)]
  • 59. Hitachi Storage Command Suite: one common administration software suite across the whole line. Hitachi is the first storage company to provide common software management across its entire product line! Configuration, provisioning, performance monitoring, replication, reporting, and data migration across SMS100, WMS100, AMS200, AMS500, AMS1000, USP VM, and USP V.
  • 60. Trends in exploiting information
  • 61. Trends in the use of information. User types: legal (attorney), HR (employment), CXO, retail, online services, consumer lending, financial services, social networking, energy, and miscellaneous. Trends: ease of use; federation of IT administration; information integrity; certifiable data removal/deletion; verifiable chain of custody (auditing); relates more to file storage platforms; frequently has to search for information; limiting risk and protecting brand identity.
  • 62. Exploiting the added value of your data • NAS services – Essential NAS – High-performance NAS • Content services – Content Archive Platform • Virtual tape libraries
  • 63. Hitachi NAS portfolio: Essential NAS and High-performance NAS
  • 64. Why NAS? • Enables file sharing across multi-OS client environments – reduces file server sprawl and its costs (TCO) – interoperability and high performance across multiple OSs • Lowers cost and improves operational efficiency – less hardware and fewer software licenses • Easy installation and administration of file-based applications – reuses the IP network – simplifies data management and operating expenses • Data protection: backup/replication – centralized backup of remote offices – geographic protection
  • 65. Hitachi NAS portfolio (price vs. performance). High-performance NAS Platform models 3200, 3100, and 2000 for high-performance applications and consolidation; Essential NAS Platform models 1500c, 1300c, and 1100c as the standard solution for file server consolidation, file sharing, and backup; nearline below.
  • 66. Hitachi Essential NAS Platform • Easy-to-use management interface: – friendly GUI – integrated with HiCommand® Device Manager and Tiered Storage Manager • High performance, scalability, and availability: – 512TB in a 2-node active-active cluster – Hitachi hardware-based RAID • Integrated with Hitachi software • Advanced data protection: – synchronous and asynchronous replication – NDMP backup
  • 67. Hitachi Essential NAS Platform models • Easy upgrade from the entry model to the high-end models • Supports upgrades from NAS Blade and/or the AMS NAS option • Filer or gateway (price vs. performance): – 1100c: 8GB RAM; 5,000 concurrent CIFS sessions; 35K IOPS (with AMS); pairs with AMS200-AMS500-AMS1000 – 1300c: 16GB RAM; 12,000 concurrent CIFS sessions; 54K IOPS (with AMS), 52K IOPS (with USP V); pairs with AMS500-AMS1000 – 1500c: 32GB RAM; 24,000 concurrent CIFS sessions; 91K IOPS (with AMS), 80K IOPS (with USP V); pairs with AMS1000
  • 68. Essential NAS: highlights • Friendly administration interface: – GUI for NAS management – integrated with HiCommand • Device Manager • Tiered Storage Manager • Tuning Manager • Best-in-class scalability and availability – scales to 512TB of capacity in a 2-node cluster – guarantees high availability with active-active cluster failover • Data protection: replication – synchronous with Hitachi TrueCopy* – asynchronous with • Universal Replicator software • TrueCopy Extended Distance • the IP-based NAS Replication Utility for SyncImage (RUS)*
  • 69. Essential NAS at a glance. Data access: multiprotocol access (NFS, CIFS, FTP), single-file data sharing, VLAN tagging, link aggregation and trunking. Data scalability: scalable capacity and throughput, high performance. Data availability: multipath I/O, active/active cluster support. Data protection: Sync Image snapshots, file and file system backup and recovery, NDMP backup, FC tape ready, Hitachi ShadowImage™ In-System Replication, antivirus agent. Simplified management: Hitachi HiCommand® Suite, HiCommand Tuning Manager, remote monitoring using Hi-Track®, Hitachi Storage Navigator Utility Pack. Works with midrange and high-end storage systems; Modular Volume Migration provides online, nondisruptive data migration and tiered storage management; thin provisioning via Hitachi Dynamic Provisioning.
  • 70. Essential NAS Platform compared (High-performance NAS Platform 2000 single-node model vs. Essential NAS Platform 2-node cluster): – single node: Yes / NA – N-node clustering: 2-node / 2-node – capacity: 128TB / 512TB – file system size: 64 / 16 – snapshots per file system: 1,024 / 124 – iSCSI: Yes / NA – 10GbE: Yes / NA – read cache: Yes / NA – hierarchical storage management: Yes / NA – virtual server: Yes / NA – thin provisioning: Yes / with USP family – MetroCluster: Yes / NA – cluster name space: Yes / NA
  • 71. Essential NAS Platform and High-performance NAS (High-performance NAS Platform 2000 single-node model vs. Essential NAS Platform 2-node cluster): – single node: Yes / NA – N-node clustering: 2-node / 2-node – capacity: 128TB / 512TB – file system size: 64 / 16 – snapshots per file system: 1,024 / 124 – iSCSI: Yes / NA – 10GbE: Yes / NA – read cache: Yes / NA – hierarchical storage management: Yes / NA – virtual server: Yes / NA – thin provisioning: Yes / with USP family – MetroCluster: Yes / NA – cluster name space: Yes / NA
  • 72. HNAS: the leader in performance and functionality (2000-3000 line) • 200K SPEC SFS IOPS and 1,600MB/sec throughput per node • 4PB in a single storage pool • 256TB file system vs. 16TB for the nearest competitor • 4 million files per directory vs. 64K for the nearest competitor • Thin provisioning and threshold-based auto-grow • CIFS, NFS, iSCSI, and NDMP protocols • Ideal for verticals: – life sciences – entertainment – Internet service – oil and gas – government and education – electronic engineering – electronic discovery
  • 73. Combining NAS and SAN virtualization
    - Hierarchical Storage Management with content awareness: moves files and leaves behind a pointer (stub); client files of any type (MP3, PPT, DOC, XLS, MDB, MOV, PST) live on the HNAS cluster
    - Tiered storage: support for external/internal multi-tiered storage (for example, a USP with internal disks in front of virtualized storage, with file systems FS1–FS4 spread across FC and SATA tiers)
    - Policies based on: file type (PPT, MP3, JPG, etc.), file size, last access time, file location, capacity threshold
    - Data classification
    Examples: move all files bigger than 10 MB to SATA; move all files older than 90 days to FC Tier 2; move all XLS files to Tier 1. After classification, tiered-storage LUN migration is done with Tiered Storage Manager from the USP/NSC.
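The policy rules listed on this slide (file type, size, and age driving tier placement) can be sketched as a small rule function. This is only an illustration of the policy logic; the class, field names, and tier labels are hypothetical, not an HNAS API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical file record; field names are illustrative, not an HNAS API.
@dataclass
class ArchivedFile:
    path: str
    size_bytes: int
    last_access: datetime

def pick_tier(f: ArchivedFile, now: datetime) -> str:
    """Apply the slide's example tiering rules in order; first match wins."""
    ext = f.path.rsplit(".", 1)[-1].lower()
    if ext == "xls":                                  # "move all XLS to Tier 1"
        return "tier1-fc"
    if f.size_bytes > 10 * 1024 * 1024:               # "files bigger than 10 MB to SATA"
        return "tier3-sata"
    if now - f.last_access > timedelta(days=90):      # "older than 90 days to FC Tier 2"
        return "tier2-fc"
    return "tier1-fc"                                 # default: keep on the fast tier
```

In a real HSM engine, the matched rule would trigger the file move plus stub creation described above; here the function only decides the target tier.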
  • 74. Data Protection: Virtual Tape Libraries
  • 75. Current protection methods are insufficient
    What are the main problems with your current backup and restore solution?
    - Backups take too long: 66.0%
    - Recoveries take too long: 49.0%
    - Requires manual effort: 40.0%
    - Backup/restore success is hard to measure: 37.0%
    - Media management is laborious: 33.0%
    Source: ESG, 2005
  • 76. How can backup solutions be improved? Virtual Tape Library (VTL)
    - Performance: tape backup and restore are slow; a VTL enables fast recovery of data from disk
    - Reliability: tapes are more susceptible to physical failures; a VTL adds the reliability of disk to the backup service
    - Security: handling physical tapes adds risk; a VTL reduces the potential loss of physical media
    - Integrates with existing backup applications and procedures
    (Diagram: a backup server connects over FC to a VTF Open server that presents a "standard tape library" to the backup application, backed by a disk storage system.)
  • 77. VTF: scalability
    (Diagram: multiple backup servers connect through an FC switch to VTF Open servers and disk arrays, scaling both throughput and capacity.)
  • 78. But the fundamental problem is data volume
    VTL technology improves backup and recovery operations, but it does nothing about the growth of the disk space occupied by backup data.
  • 79. A paradigm shift in data protection
    ProtecTier's innovative technology reduces the storage capacity required for backup by a factor of 25:1, which makes it possible to protect more while storing less, at an acquisition cost lower than that of a comparable tape solution.
  • 80. ProtecTIER™: how it works
    - Real-time data deduplication at 400 MB/sec per server
    - HyperFactor™ memory-resident index: 4 GB of memory is enough to map 1 PB of physical disk
    - New data streams from the backup servers pass over Fibre Channel through the ProtecTIER™ server; only "filtered" (new) data is written to the disk-array repository, while existing data is referenced
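The idea behind the slide, keeping a memory-resident index so that repeated data is referenced rather than stored again, can be sketched as a toy chunk-level deduplicating store. This is a minimal illustration of deduplication in general, not ProtecTIER's actual HyperFactor algorithm; the chunk size and data structures are assumptions.

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed chunk size

class DedupStore:
    """Toy deduplicating store: a memory-resident index of chunk digests
    points at a single physical copy of each unique chunk."""

    def __init__(self) -> None:
        self.index: dict[bytes, int] = {}   # digest -> chunk id (in memory)
        self.chunks: list[bytes] = []       # physical chunk store (stands in for disk)

    def write(self, data: bytes) -> list[int]:
        """Ingest a stream; previously seen chunks are referenced, not re-stored."""
        refs = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            digest = hashlib.sha256(chunk).digest()
            if digest not in self.index:              # only "filtered" (new) data lands on disk
                self.index[digest] = len(self.chunks)
                self.chunks.append(chunk)
            refs.append(self.index[digest])
        return refs

    def read(self, refs: list[int]) -> bytes:
        """Reassemble the original stream from chunk references."""
        return b"".join(self.chunks[r] for r in refs)
```

Backup streams are highly repetitive, which is why reference lists like this can shrink stored capacity by large ratios such as the 25:1 figure cited on the previous slide.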
  • 81. ProtecTIER™: remote replication
    A master server at the primary site replicates to the secondary site; because only deduplicated data is sent, it requires less bandwidth.
  • 82. Content: Open Archiving
  • 83. A growing problem
    - Unstructured content grows much faster than traditional structured data
    - 75%–90% of all data is unstructured, and it requires its own content-management facilities
    - Growth in volume, criticality, diversity, and useful life
  • 84. Retention periods are growing
    Retention timeframes by industry:
    - Life Science/Pharmaceutical: processing food, 2 years after commercial release; manufacturing drugs, 3 years after distribution; manufacturing biologics, 5 years after manufacturing of the product
    - Health care (HIPAA): records in original form, 5-year minimum for all records; medical records for patients under 18, from birth to 21 years; full-life patient care, length of the patient's life + 2 years
    - Financial services (17a-4): financial statements, 3 years; member registration, end of life of the enterprise; trading account records, end of account + 6 years
    - OSHA: records, 30 years from end of audit
    - Sarbanes-Oxley: original correspondence, 4 years after financial audit
    Source: ESG
  • 85. Traditional view of an archive
    - NAS: no retention policies; limited protection for users against file deletion; WORM support questionable
    - Tape library: no protection against file deletion; recovery time questionable
    - RAID array: no protection against file deletion; limited WORM technology; limited policies for deduplication, search, etc.
    - Optical jukebox: limited WORM technology; obsolete technology
  • 86. The key to archiving
    - External pressures: customers MUST archive (regulations)
    - Internal/external pressures: customers WANT to archive (preservation)
    - Internal pressures: customers NEED to archive (tiering)
  • 87. The challenges of archiving fixed content
    - Management challenges: archived content must be easy to search and accessible for all data types and applications; archived content must be immutable; archived content needs increasingly frequent access at lower cost
    - IT challenges: archives must survive across generations of technology; disk-based archives are required for today's business needs; the archive must be highly scalable; policies are needed to manage retention
  • 88. Typical archiving environment: independent silos
    Applications that create data (e-mail server, document management, general accounting, web applications) each sit behind their own search engine (Search #1–#4) and protocol (SMTP, CIFS, NFS, HTTP), in front of their own storage (tape library, optical jukebox, NAS, RAID array).
    The result is not scalable, and there is no cross-storage search.
  • 89. Hitachi Content Archive Platform: how it works
    - Supports multiple applications and content types (e-mail archive, document management, file system, imaging, medical, home-grown software)
    - Full-text indexing and search via the Discovery module
    - High-performance, highly scalable storage
  • 90. Access Protocols
    - HTTP: fastest gateway interface, primarily batch mode; many good client libraries; supports GET, PUT, EXISTS, DELETE operations (on data or metadata); metadata can be specified in the URL; HTTPS is also supported for secure connections to the archive
    - NFS: compatibility gateway, primarily for UNIX; mount the cluster file system path
    - CIFS/SMB: compatibility gateway, primarily for Windows; map a network drive to the cluster file system path
    - WebDAV: performance close to the HTTP gateway; supports RFC 2518-compliant clients; the "MountPoint" is just part of the URL
    - SMTP: gateway using standard SMTP mail clients; high protocol overhead; ingestion only, no read
    - NDMP: standard backup/restore interface for the archive; data and metadata are packaged into transportable objects
  • 91. Hitachi Content Archive Platform: integration with software partners
    File, ECM, Email, Healthcare, Database, Mainframe, Compliance
  • 92. Uninterrupted archiving service
    - Self-protection: retention policies, authentication, and replication
    - Self-balancing: load adjustment by monitoring the activity and capacity of all nodes
    - Self-healing: resilient to drive and node failures, with no impact on data integrity
    - Remotely serviceable: diagnostics, patches, and upgrades via IP, modem, or VPN
  • 94. HCAP: search modes
    By keyword, or through a visual query builder.
  • 95. HCAP: search modes
    Through a query-expression language; queries can be saved and re-used.
  • 96. Search results
    Navigators provide drill-down by key terms, file type, and retention; results show file-system and additional archive metadata; from the results you can set or release a retention hold, or delete files.
  • 97. Other features
    - STANDARD protocols, STANDARD access
    - Information can be compressed and encrypted
    - Single-instance store
    - Backup via NDMP
    - Events via SNMP to SNMP or SYSLOG servers, HP OpenView, CA Unicenter; notifications via e-mail, SMS, pager, etc.
    - Security: integration with LDAP and RADIUS authentication
    - WORM immutability (for example, a file retained until May 21, 2036)
    - Scalable in volume and in performance
    - Shredding
    - Remote replication per object
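The WORM immutability and retention-hold behaviour listed above can be sketched as a small object model: deletes are refused until the retention date passes, and a hold blocks deletion regardless of the date. This models the behaviour described on the slides, not the actual HCAP API; the class and method names are illustrative.

```python
from datetime import date

class WormObject:
    """Toy WORM-retained object: immutable data plus a retention date and an
    optional hold, as described on the search-results and features slides."""

    def __init__(self, data: bytes, retain_until: date) -> None:
        self._data: bytes | None = data
        self.retain_until = retain_until
        self.on_hold = False          # a retention hold set from search results

    def delete(self, today: date) -> bool:
        """Attempt deletion; return True on success, False if retention
        or a hold still protects the object."""
        if self.on_hold or today < self.retain_until:
            return False
        self._data = None             # a real system would also shred the blocks
        return True
```

Releasing a hold (`obj.on_hold = False`) re-enables deletion once the retention date has passed, matching the "set or release a retention hold, or delete files" workflow on slide 96.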
  • 98. The Hitachi Content Archive Platform
    - Fully integrated appliance: the Hitachi Content Archive Platform as a utility device with integrated WMS100 storage, providing the highest level of security and the functionality demanded
    - Or HCAP-DL: a diskless Content Archive Platform integrated with existing HDS storage (USP-V, NSC55, AMS1000, AMS500, AMS200, WMS100)
  • 99. Solutions with added business value
    - Business operation: services and best practices
    - Application: performance, chargeback, provisioning, and problem-management services
    - Content: archiving, indexing, search, and extraction services
    - Data: backup, migration, replication, recovery, and security services
    - Storage: capacity at every level, universal connectivity, heterogeneous management
  • 100. 2008 Green Storage: Hitachi, leader in storage virtualization
    - Hitachi is the only company with storage virtualization technology embedded in its flagship solutions; competitors have virtualization in peripheral products only
    - With its intelligent virtual controllers, Hitachi has separated the "brain" from the "body" of storage, the innovation from the commodity, disrupting the markets once again
    - Hitachi exhibits the highest levels of hardware and software sophistication, as seen in its platform direction and its portfolio of common storage services
  • 101. Hitachi Data Systems: integrated solutions for your information
    Structured data (RDB, applications):
    - High-end enterprise: tiered storage/virtualization via the USP and NSC hardware platforms; common protection solutions; common storage management
    - Midrange: NSC and AMS/WMS hardware platforms; common storage management; common protection solutions
    Unstructured data (files, metadata, content):
    - Archiving/object/content: application/DB-level awareness; foundation for open, scalable, and integrated content solutions
    - NAS, two key segments: High-Performance NAS, focused on high-throughput environments; Standard NAS, focused on file-and-print environments
    COMMON across all: storage management, integrated security, dynamic tiered storage, data protection, application/DB enablement, discovery and search
  • 102. Thank you for your attention! SAN technologies: advantages and benefits