Good morning, and welcome to Oracle Days 2013. My name is _________________________, and today I will be covering some of the ways that Oracle Application Engineered Storage can help you optimize software performance and reduce data center TCO.
It seems like every day you are inundated with a new vendor-generated buzzword that promises to solve all of your IT challenges and raises questions about your IT direction. I'm sure you've seen these. Are they trial balloons, trends, or the real thing? Should you change direction and implement any of them? Is one of them the solution to your business problem?

Software defined networks, software defined storage, even software defined data centers, but do you end up doing the work that vendors normally would do? What's next, software defined software? The cloud is offered as the solution regardless of the problem you're facing, and there's an endless variety: public, private, hybrid, OpenStack, CloudStack, and everything-as-a-service. Do you build a private cloud, go public or hybrid, or outsource every aspect of IT to a service provider? Everything will run better on solid state disks, but they're expensive for now, so do you choose all-flash or hybrid storage? Big data is promising, but do you need to hire a team of data scientists and get Hadoop installed, or is there first a business need or objective to meet with analytics? Facebook, Twitter, LinkedIn: are they getting you closer to your customers? Everyone has two or three mobile devices that could be used at work; should they be? Virtualization is well along with VMware, Hyper-V, and OVM, leaving potentially thousands of VMs to manage. Consumers are in many ways leading the IT revolution; do you follow?

Meanwhile, you have the real challenge of meeting business objectives with a mostly flat budget, and that is what you're being measured on. All of these buzzwords create noise in the industry, lead to confusion about which actions will actually address your business requirements, and defocus your attention as you spend time listening to vendors and investigating which of these technologies are "real" and which can make a positive impact in your particular environment.

Meanwhile…
Data is growing exponentially. An IDC report from December 2012 estimates that the total amount of digital data generated will reach 40 zettabytes by 2020, doubling every two years. One zettabyte equals 1 billion terabytes; by any scale that's a lot of data. A majority of the information in the digital universe, 68% in 2012, is created and consumed by consumers: watching digital TV, interacting with social media, sending camera-phone images and videos between devices and around the Internet, and so on. Yet enterprises have liability or responsibility for nearly 80% of the information in the digital universe. In addition, as much as 80% of this data is unstructured, in the form of photos, video and audio files, documents and PPT files, emails, and so on. The proportion of data in the digital universe that requires protection is growing faster than the digital universe itself, from less than a third in 2010 to more than 40% in 2020, and a significant proportion needs to be preserved for ever longer periods of time, including healthcare and financial records. There's also the potential for further increases in data accumulation as the Internet of Things, estimated at some 50 billion devices by 2020, and machine-to-machine communications accelerate.

[Click]

In total, that's 50x growth in managed data over this time span. Clearly, the IT professional population cannot grow at the same rate; in fact, it's growing 1.5x over the same period. The same is true for storage budgets, which for most organizations are flat or down. We realize that your data is most likely not going to grow 50x. For most companies data is growing 50%-70% per year, essentially doubling every two years. This includes data that your OLTP applications are generating, data you're collecting from your website, data that you're amassing for an analytics project, and so on. Then there is the data that you have to retain "forever" to comply with regulations. However, your head count and budget are probably in line with the 1.5x or no growth. It all adds up, and you have big storage challenges. So storage vendors and customers alike have to come up with new strategies for coping with the new normal.

[Note: for more information on the digital universe in 2020, download the IDC report (sponsored by EMC): http://www.emc.com/collateral/analyst-reports/idc-the-digital-universe-in-2020.pdf]
Storage systems have not kept up with the increasing demands of the data explosion. Not only has disk capacity growth slowed down, so that many more disks are needed to cope with the data explosion, but storage systems overall lack application awareness. Storage systems are essentially blind to diverging application requirements.

Applications do not generally see or directly control data storage. That is usually accomplished via the operating system, hypervisor, or file system, although there are exceptions such as relational databases. In all cases the relationship is fixed or predetermined, meaning that the storage does what it's told to do within a very narrow set of pre-configured parameters. It serves up capacity in the amount that has been allocated. It provides RAID-based data protection on pre-arranged parameters. It delivers performance based on what was set up. In other words, it is an inflexible relationship that can only be altered with admin intervention. Consider that application performance has peaks and valleys, and yet data storage cannot, for the most part, read or anticipate those peaks and valleys. It knows the IO and/or throughput demands at any given moment and responds to them based on the performance presets and the other application demands being placed on the storage system at that moment. There is no integration of application and storage, no cooperative processing or communication, no dynamic adaptation to unexpected application needs, and no flexibility.

Storage is not customarily designed to allocate capacity and performance resources on demand. Resources are typically allocated manually in advance. As those storage resources are consumed and more are required, the storage admin manually allocates more. It is not a dynamic, automated process for the vast majority of storage systems. Yet VMware vSphere and virtual data center technologies, Oracle databases and the business applications that run on Oracle servers and storage, backup and replication software, and Microsoft products such as Hyper-V, Exchange, SharePoint, and SQL Server are all demanding a lot more from their storage systems than raw performance, capacity, and data protection. They are demanding that their attached external shared storage rise up to be a peer with the application and not simply a resource. They are demanding that each have intimate knowledge of the other.

At the same time, there is a lack of skills to keep storage tuned to applications. Application and hypervisor administrators have become more specialized and narrower in their scope and depth. Storage knowledge is viewed through the lens of the application or hypervisor and is nominal at best. These admins commonly lack both the basic storage knowledge and the experience to set up, configure, manage, and operate data storage optimally for their applications, servers, or virtual machines. Storage admins, on the other hand, are generalists with a dearth of application and/or hypervisor knowledge. They know how to tweak their storage to get the best performance or utilization out of it, but not how to optimize it for every application, server, and VM that connects to that storage. They commonly lack specific application tuning knowledge, skills, and experience, and even when they do have them for a specific application, their cycles are far too limited to constantly tune the storage for optimum application performance. How are you going to address these issues?
With the increasing prevalence of confusing buzzwords in the industry today, many companies are debating whether it makes sense to buy or build their IT infrastructure. When it comes to deploying mission-critical business apps, the clear choice is leveraging storage systems that were specifically architected in conjunction with those apps. By attempting to build your own system, the DIY model, the degrees of separation between the application and the storage significantly increase, requiring companies to spend more time on manual storage management. So the trade-off is this: you can spend your time building storage, or you can spend your time generating business value from storage pre-engineered with critical business apps to obtain maximum performance and efficiency from the application.
As noted, with the DIY model the degrees of separation between the application and the storage significantly increase, requiring companies to spend more time on manual storage management. In fact, approximately two-thirds of all enterprise storage costs are related to management. So when it comes to upgrading, maintaining, tuning, or troubleshooting, for example, storage management costs could triple if customers actually took storage vendor responsibilities upon themselves. Enterprise storage quality is driven by rigorous system test and continuous component verification, so if customers are going to start validating and testing drives, controllers, enclosures, and application integration themselves, their time to focus on the elements that bring value to the business becomes greatly diminished.
DIY storage is not practical for most companies. As we've seen across many customers, skyrocketing data growth has led to inefficient data silos, storage sprawl, and the resulting complex, costly, hard-to-manage storage infrastructures. Getting ahead of issues like these requires rethinking the storage platforms and storage strategies that your business runs on. With hardware and software engineered to work together from the storage layer up through the database and application stack, Oracle can help you optimize performance, improve efficiencies, reduce risk, and lower costs. Let's see how we do this.
Oracle cuts through the buzzword fog and brings clarity to the data center with our unique advantage…
Customers clearly benefit from the Oracle stack of applications, middleware, databases, virtualization software, operating systems, and hardware. With Oracle you can rest assured that everything is engineered, tested, certified, deployed, upgraded, managed, and supported together. With its roots as a software company, Oracle designs storage systems to consume less storage, whereas competitors intentionally design systems to consume more, resulting in data center issues such as massive filer sprawl. Other vendors build boxes intended to be consumed very quickly so they can sell you more storage, with complete disregard for efficiency. At Oracle, hardware and software are developed together to maximize your value. Contrast that with buying servers from one company, operating systems from another, storage from a third, and applications from a fourth, and then, as the customer, having to integrate everything yourself, test and validate it, and deal with multiple support and service organizations, not to mention the finger pointing that results when something fails.

With Application Engineered Storage, Oracle offers databases and applications that are storage-aware and storage systems that are database- and application-aware. Oracle Database is storage-aware and ensures the storage system knows the application requirements in advance of the request actually reaching the storage controller. Through co-engineering between Oracle applications and Oracle storage systems, unique capabilities such as Hybrid Columnar Compression, which compresses data up to 50x and accelerates query times, and the Oracle Intelligent Storage Protocol provide customers with significant efficiency and performance gains. The key is that customers can spend their time focusing on strategic business initiatives, knowing that their storage and applications are pre-tested and co-engineered by Oracle.

Oracle's Application Engineered Storage solutions are the best platforms for Oracle software environments; unique points of integration between the storage and the software help you optimize performance and efficiency in OLTP, data-intensive, or virtualized Oracle software environments. The new normal also requires storage admins to ramp up quickly and become more knowledgeable and skilled in their jobs; Oracle solutions can help address the challenges of application centricity.
As we saw, on the trajectory to 2020 data under management will grow 50x while the IT professional population will grow only 1.5x. Clearly, we've got to do more with our current resources to bridge this constantly widening gap. The solution to doing more with less is dynamic automation: the ability to automate previously manual processes so that less manual intervention is required and fewer manual errors can occur. As part of Oracle's Application Engineered Storage strategy, we ensure that the ZS3 Series is synergistically engineered with Oracle's databases to solve pressing business and technology problems at their root cause. It's important to note that Oracle views database storage differently from other vendors, in that we see it through the eyes and tasks of a DBA. With that in mind, today we are introducing unique capabilities, available only on Oracle storage, that are designed to help customers leverage the dynamic automation of database tuning, to automatically compress or decompress data based on heat maps and usage patterns, and to consume less storage than ever before. In fact, with today's innovations, only Oracle storage enables you to use the full potential of Oracle Database 12c with the industry's first fully automated database-to-storage tuning and compression capability.
We believe that dynamic automation is the solution for the shortage of IT professionals to manage data growth. Let’s see how the co-engineering of Oracle Database and Oracle storage provides unique resource-saving capabilities.
Oracle Intelligent Storage Protocol, unavailable from other storage vendors, opens a direct line of communication between Oracle Database 12c and ZS3 storage. Critical metadata about the incoming database workload is passed to the storage so that the storage can automatically and dynamically set itself up and tune itself, on the fly, optimizing performance for the precise incoming data. OISP represents a unique form of dynamic database-to-storage automation.
OISP eliminates over 65% of the manual tuning and re-tuning required to maintain superior Oracle Database 12c performance, freeing up administrative resources by roughly 3x so they can spend their time on more strategic, revenue-generating activities.
And, leveraging additional Oracle Database and Oracle storage co-engineering, you can further save IT resources by working smarter, not harder.
Automatic Data Optimization (ADO) combines real-time heat-map tiering with Hybrid Columnar Compression (HCC). You can use real-time heat maps to know exactly which data is ideally suited for HCC compression, set a policy that turns on at the precise moment your data becomes best suited for deeper compression with HCC query or HCC archive, and let the optimization happen dynamically and automatically throughout the life cycle of your data. For example, you wouldn't compress your current quarter's data because it has high read/write access. But as it ages and becomes read-mostly, you can have a policy automatically apply HCC query compression (an average of 10x), and as that data becomes even more inactive and is ready for HCC archive compression (up to 50x), you can have a policy ready for that too. Now you've automatically leveraged 10-50x compression the moment the data was well suited for it, you've achieved a performance improvement of 5x on average, and you've perpetuated all of those savings throughout your backup, test, development, and other environments where this data is used. ADO and HCC automatically determine which data sets in Oracle Database 12c should be compressed for deep archival and which should be left uncompressed for frequent access. Like OISP, HCC is only available with Oracle storage.
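To make this concrete, here is a minimal sketch of what such ADO policies can look like in Oracle Database 12c, shown through the python-oracledb driver. The connection details, table name, and age thresholds are hypothetical placeholders; the exact policy clauses for your release should be taken from the Oracle Database 12c documentation.

```python
# Minimal illustrative sketch, not production code. Connection details,
# schema, table name, and thresholds are hypothetical placeholders.
import oracledb  # python-oracledb driver (cx_Oracle works similarly)

conn = oracledb.connect(user="admin", password="secret", dsn="dbhost/pdb1")
cur = conn.cursor()

# ADO decides when to act based on Heat Map access statistics,
# so heat map tracking must be enabled first.
cur.execute("ALTER SYSTEM SET HEAT_MAP = ON")

# Warm data: after 90 days with no modifications, recompress the segment
# with HCC "query high" (the roughly 10x tier described above).
cur.execute("""
    ALTER TABLE sales ILM ADD POLICY
      COMPRESS FOR QUERY HIGH SEGMENT
      AFTER 90 DAYS OF NO MODIFICATION""")

# Cold data: after 12 months with no access, move to HCC "archive high"
# (the deeper, up-to-50x tier).
cur.execute("""
    ALTER TABLE sales ILM ADD POLICY
      COMPRESS FOR ARCHIVE HIGH SEGMENT
      AFTER 12 MONTHS OF NO ACCESS""")

conn.close()
```

The heat map tracking that ADO relies on is enabled with the HEAT_MAP initialization parameter, and the two policies mirror the query-then-archive life cycle described above.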
Another way you can conserve IT resources is by consuming less storage. With less storage, you have fewer systems to manage and operate. Not only do you save upfront costs, but you also reduce your data center costs, including administration, lowering TCO. As we said, Oracle has its roots as a software company, so we design software that consumes less storage. Storage hardware-focused competitors intentionally design systems to consume more, resulting in data center issues such as massive filer sprawl. Other vendors build boxes intended to be consumed very quickly so they can sell you more storage, with complete disregard for efficiency.
Given these unique co-engineering advantages of Oracle Database and Oracle storage, why would you run your Oracle Database on any other storage? You wouldn’t run a high performance car on regular fuel, would you?
Or, to paraphrase Jerome Wendt, an analyst at DCIG: running your Oracle Database without HCC, one of our co-engineered advantages that can increase performance, is like operating a car in degraded mode.
You may be familiar with Oracle Hybrid Columnar Compression and its ability to compress Oracle Database data 10-50x, a unique business-enabling capability that is ONLY available when using Oracle Database 11gR2 and later with Oracle storage. HCC sets the bar for efficiency, reducing CAPEX and OPEX and improving data management. Oracle storage with HCC can generate up to 40% storage savings through the capacity that is reclaimed, saving you 3-5x on storage versus competitive offerings and perpetuating that savings throughout your secondary processing: less data to store in your backups, development/test environments, and more. You have fewer systems, less management, and lower power and space costs. As industry analyst firm DCIG stated, the ZS3 Series with HCC could radically change the dynamics in enterprise data centers and whose back-end storage they use to host Oracle databases.
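To illustrate how those savings compound across copies of the data, here is a small back-of-the-envelope calculation. All figures are hypothetical examples, not measured results; actual ratios depend on your data.

```python
# Back-of-the-envelope sketch; all figures are hypothetical examples.
primary_tb = 100.0            # uncompressed database size, in TB
downstream_copies = 3         # e.g. backup + test + dev copies of the same data
hcc_query_ratio = 10.0        # typical "query" compression ratio cited above
hcc_archive_ratio = 50.0      # deeper "archive" compression ratio

def total_footprint_tb(ratio: float) -> float:
    """Primary copy plus downstream copies, all holding the compressed blocks."""
    compressed_tb = primary_tb / ratio
    return compressed_tb * (1 + downstream_copies)

print(f"Uncompressed footprint: {primary_tb * (1 + downstream_copies):.0f} TB")
print(f"HCC query   (~{hcc_query_ratio:.0f}x): {total_footprint_tb(hcc_query_ratio):.0f} TB")
print(f"HCC archive (~{hcc_archive_ratio:.0f}x): {total_footprint_tb(hcc_archive_ratio):.0f} TB")
```

The point is simply that, because HCC compresses the data inside the database, every downstream backup, test, or development copy inherits the same reduction.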
Hybrid Columnar Compression is not just a technology for saving IT expenditures. Think about it: with even 10x compression, where you used to read ten units of data you now only have to read one, so the performance benefit, on the order of 5x faster query speed, is extremely tangible as well. These two charts show actual results from a number of companies. Customers get not only high levels of compression with HCC but also better performance, with significant increases in query speed. HCC is bringing real-world change to companies worldwide today.
Now let's look at the Oracle ZS3 storage systems themselves and how their hardware/software architecture speeds up queries and shortens time to insight.
As you probably know, Oracle recently introduced the next generation of Application Engineered Storage: the ZS3 Series…
Oracle ZS3 storage systems delivered world-record performance in throughput and response time in the SPC-2 and SPECsfs benchmarks. For you this means accelerated business analytics and time to insight, plus faster database queries. In these benchmarks, the ZS3 Series also showed leading price/performance, with much lower costs than competitive storage systems from NetApp and IBM. And, as we have described, the ZS3 storage systems are co-engineered with Oracle software, with resulting unique advantages, such as HCC and OISP, that are not available on storage systems from other vendors.
The ZS3 Series is an enterprise-class unified storage system, supporting all major connectivity protocols, including InfiniBand. It is available in two models, the ZS3-2 and the ZS3-4, and delivers:
- A massive amount of cache, up to 25TB, including large DRAM pools, while other vendors are in the gigabyte or single-digit terabyte range
- High levels of sustained bandwidth for write-intensive, high-throughput environments
- Capacity that grows with you, up to 3.5PB in the ZS3-4, expandable by adding multiple disk trays
- Extensive PCIe slots for expandability and multi-network support
On the software side, the ZS3 has:
- An advanced multithreaded SMP OS that takes full advantage of multi-core CPUs
- Hybrid Storage Pools, an intelligent cache architecture that automates dynamic data tiering
This hardware/software combination enabled the ZS3 Series to achieve world-record throughput and latency results. Our architecture automatically adapts to workload changes for optimal performance at all times, for Oracle software environments, highly virtualized environments, and heterogeneous workloads. Oracle's new ZS3 Series brings massive capacity and superior performance for your database and key applications.
The ZS3 architecture enables customers to get better answers by analyzing more data, faster. Let's see how.
The new Oracle ZS3 storage systems are designed for the time-sensitive parts of your company's business cycle. What new market opportunities and revenue-generating breakthroughs would be attainable if you could accelerate your business? For example, what if you could:
- Run database queries faster, say in seconds instead of minutes?
- Analyze data warehouses faster, looking through 2, 4, even 10 times more data than you do in your current timeframe today?
- Check your order, sales, and shipping status and optimize your supply chain in real time?
- Close your quarter and year-end books in hours instead of days?
- Maximize your ROI by evaluating a greater number of investment scenarios?
- Get to profitability and business value faster?
- And, on the operational side, what if your storage could respond dynamically and automatically to changing application workloads without hiring specialized talent?
These and other activities are highly dependent on the speed and responsiveness of your storage, and have a direct impact on both the top line and the bottom line.
The new Oracle ZFS Storage ZS3-4 set new world records in performance and price/performance for the SPC-2 and SPECsfs industry-standard benchmarks. These are the results for SPC-2.

The Oracle ZFS Storage ZS3-4 outperformed IBM and HP with world-record industry-standard SPC-2 benchmark results. The ZS3-4 delivered greater throughput and better price/performance for data mining and business intelligence than high-performance systems from IBM and HP, specifically beating the price/performance ratio delivered by the IBM System Storage DS8870 and HP P9500 XP Disk Array by over 3x. These world-record results demonstrate the performance and lower TCO that organizations using high-throughput business analytics and reporting can achieve with the new Oracle ZFS Storage ZS3 Series to realize real-time business advantage.

[Details if needed]
The Oracle ZFS Storage ZS3-4 set a new world record with aggregate throughput of 17,244.22 SPC-2 MBPS™ and SPC-2 Price-Performance™ of $22.53. The SPC-2 benchmark consists of three application-oriented sequential I/O performance tests: large file processing, large database queries, and video on demand.
The Oracle ZFS Storage ZS3-4 has 11 percent higher throughput and 5.8x better price/performance than the IBM DS8870, which is IBM's best SPC-2 result.
The Oracle ZFS Storage ZS3-4 has 31 percent higher throughput and 3.9x better price/performance than the HP P9500 XP Disk Array, which is HP's best SPC-2 result.
And these are the results for the SPECsfs2008 NFS benchmark.

The Oracle ZFS Storage ZS3-4 storage system set a new world record with a SPECsfs2008_nfs.v3 ORT (overall response time) of 0.70 milliseconds, beating by 40% the best NAS storage system from NetApp, the 2-node FAS6240, which has a SPECsfs2008_nfs.v3 ORT of 1.17 milliseconds, and shortening critical IO wait times to improve database responsiveness and business value. The Oracle ZFS Storage ZS3-4 and ZS3-2 storage systems deliver 2x better throughput and 3.5x better value than comparable high-end and mid-range NetApp systems, along with new world-record SPECsfs2008_nfs.v3 overall response times. The combination of extreme throughput and exceptional value enables customers to run database systems faster, for more users, and at less cost using Oracle ZFS Storage systems than competitive NAS systems from NetApp. The Oracle ZS3-4's record latency also beat EMC's 56-node Isilon and an all-flash version of EMC's newest 8-node VNX8000 at a fraction of the cost: both EMC systems cost over US$3 million, while the ZS3-4 cost US$490K.

The SPECsfs2008 NFSv3 benchmark measures the throughput and response time of NFS servers based on workloads that represent the activity of large-scale servers in real customer environments. SPECsfs2008 results summarize the storage server's capabilities with respect to the number of operations that can be handled per second, as well as the overall latency of the operations.

[Note: Use any of the following statements, as needed]
The Oracle ZFS Storage ZS3-4 storage system set a new world record for dual-node NAS system throughput with 450,702 SPECsfs2008_nfs.v3 ops/sec, delivering 2.3x the performance of the best dual-node NetApp NAS system, the FAS6240, which offers 190,675 SPECsfs2008_nfs.v3 ops/sec, and supporting more users and transactions per storage system to increase operational value.
The Oracle ZFS Storage ZS3-4 storage system also enables customers to save money by delivering its higher performance at 40% less cost than a two-node NetApp FAS6240, resulting in 3.9x better value than the NetApp FAS6240.
The Oracle ZFS Storage ZS3-4 storage system also enables NetApp customers using 4-node FAS6240 clusters to reduce complexity by 50%, obtain 1.7x their current performance when compared to the NetApp result of 260,388 SPECsfs2008_nfs.v3 ops/sec, and obtain 3.5x better value than the NetApp solution.
The mid-range Oracle ZFS Storage ZS3-2 storage system delivers 210,535 SPECsfs2008_nfs.v3 ops/sec with a SPECsfs2008_nfs.v3 ORT of 1.12 milliseconds, resulting in 2x the performance and 36% lower latency than NetApp's mid-range FAS3250, which delivers 100,922 SPECsfs2008_nfs.v3 ops/sec with a SPECsfs2008_nfs.v3 ORT of 1.76 milliseconds.
The Oracle ZFS Storage ZS3-2 storage system delivers its higher performance at 70% less cost than the NetApp FAS3250, resulting in 7.7x better value than the NetApp FAS3250.
Based on these results, Dave Vellante, Co-Founder of Wikibon stated: “ZS3 means tier 1 performance at tier 3 economics.”
Beyond application and database performance, the architecture of the ZS3 storage systems is also superb for highly virtualized environments.
As we discussed before, a key area of differentiation for the ZS3 Series is its highly multithreaded symmetric multiprocessing (SMP) architecture and operating system. This means that memory is shared across processors and the ZS3 can take full advantage of advancements in multi-core, multi-processor CPU architectures, enabling multiple workloads to run at the same time. This architecture is exceptionally well suited for VMware environments, since VMware is an SMP workload. As shown here, NetApp filers, with a non-SMP architecture, easily get saturated and are able to serve only a few hundred VMs before reaching full CPU utilization, which forces users to purchase additional filers, leading to filer sprawl and increased management complexity in the data center. In contrast, the ZS3 serves up many times more VMs in a single system with low CPU utilization.
For example, a global provider of mobile web services has 30+ petabytes of data and 40,000 VMs running on 20 ZFS storage systems.
These Oracle innovations and unique advantages add up to a better-value storage system for our customers. An independent third-party economic business value analysis from industry analysts at Wikibon showed just that. Due to its massive filer sprawl issues, a typical NAS filer costs 266% more to own and operate than ZFS storage in a 400TB environment where 4M IOPS is required with 50% write IO. Filer sprawl translates to 252% more in hardware and software costs and 136% more for operations and maintenance.
Beyond data growth and compliance, new use cases in particular industries and applications are driving digital archiving needs:
- In media & entertainment, proposals have been made to store, preserve, and make available motion pictures for 100 years.
- Cloud service providers are looking to monetize deep archive or "cold" storage of data that will rarely, if ever, be accessed.
- Research institutes (CERN, Lawrence Livermore National Laboratory) need to keep multi-petabyte datasets accessible for years.
- In the healthcare field, medical records need to be kept at least for the patient's lifetime.
Front Porch Digital and T3Media are good examples of these new uses and applications for tape. Oracle has an OEM agreement with Front Porch Digital, a leading provider of digital asset management solutions for the media & entertainment industry. Specifically, Front Porch Digital is now offering Oracle's latest StorageTek T10000D tape drive with Front Porch Digital's award-winning DIVA digital asset management software suite and LYNX cloud offering. As Front Porch Digital is a prominent provider to the media & entertainment sector, this partnership will provide customers with an end-to-end solution for cost-effectively digitizing, accessing, and preserving digital media assets. This relationship will also enable Oracle StorageTek tape solutions to reach a broader audience of customers in need of cost-effective storage for their digital film content and unstructured file archives.

T3Media is a global leader in providing cloud-based storage for the media & entertainment sector, especially for licensing and accessing enterprise-scale video libraries.
T3Media challenges (more detail):
- Digitize, store, and provide fast access for more than 100,000 new media assets annually
- Ensure long-term data integrity of clients' media assets
- Offer a highly available, "always-on" environment for clients
Oracle StorageTek solution (more detail):
- Petabyte-scale storage via StorageTek SL8500 tape libraries
- Fast access and retrieval, and long-term data integrity assurance for digital content, with StorageTek T10000 tape drives
- Automated file management via StorageTek Storage Archive Manager (SAM), which manages files across multiple tiers of mixed storage media to ensure that T3Media SLAs are achieved
So essentially, here is a prime example of tape storage in action in a large-scale cloud services environment.
As we'll see, Oracle continues to innovate in tape storage to deliver performance, usability, and manageability while lowering costs. These innovations enable tape to be used not only in traditional backup, data protection, and disaster recovery applications, but also in new use cases and emerging markets with requirements for long-term retention of data: healthcare, financial services, digital media masters, and deep cloud archives.
For many years, the industry has been saying that tape is dead, and some companies are dedicated to eradicating it; case in point, EMC's latest "Tape Sucks" marketing campaign. But with the trifecta of flat-to-shrinking IT budgets, exponential data growth, and the trend toward extended retention periods challenging IT organizations as never before, tape storage, with its unrivaled economics and capacity advantages, has reemerged as a clear and essential choice for long-term data retention and disaster recovery in today's modern data centers. In many cases, it is the only viable option for tempering the skyrocketing cost of storing long-term data and for addressing many of today's toughest storage challenges. Not only is tape storage being tapped for traditional uses in backup, disaster recovery, and compliance, it is experiencing accelerating growth in active file archive, low-cost NAS storage, and cloud computing deployments. These trends, along with several unique industry use cases in media & entertainment, healthcare, life sciences, and the cloud, are driving growth in the amount of data being stored on tape and leading to a resurgence. Or as Dave Vellante, co-founder of Wikibon, put it: "Oracle 10KD represents the rise of the machines—the rebirth of tape."
Oracle extended its leadership in archive and data protection with new StorageTek hardware and software solutions that drive down the cost of long-term data retention for compliance, preservation, and business continuance. Oracle is driving storage efficiency to new heights with the world's highest-capacity tape drive and is reinventing file management with new LTFS software for tape libraries. Announced at the International Broadcasting Convention (IBC), these innovations in tape storage from Oracle StorageTek ease the pain of managing and archiving high-definition digital assets for media, broadcast, and entertainment enterprises. Storage becomes a business advantage: store more, manage less, retain longer, improve access, increase efficiency, achieve compliance objectives, and reduce risk with Oracle StorageTek.

Here are the highlights:
- World's largest and fastest tape drive, enabling 68 exabytes of capacity under a single point of control: the new StorageTek T10000D tape drive for the SL8500 and SL3000 libraries
- New Linear Tape File System, Library Edition (LTFS LE): Oracle is making tape as easy to use and manage as flash and disk, with exabyte-scale archival storage and a drag-and-drop user interface
- Both announced at the recent IBC, focused on media & entertainment, enterprise archives, and cloud, in both public and private instantiations
BOTTOM LINE: NEW TAPE NAS. NEW ARCHIVING BREAKTHROUGHS. NEW USE CASES FOR A NEW ERA.
Linear Tape File System (LTFS) is an open format that enables customers to easily move data between disk and tape storage without a proprietary archive or backup application. In the graphic displayed, we have some files on the left stored on disk, and on the right we have file repositories on tape. To move the data from expensive disk to low-cost tape, all we have to do is drag and drop the file from one folder to the other. LTFS Library Edition extends LTFS to an entire enterprise tape library. StorageTek LTFS, Library Edition software manages the file index of every file, every tape drive, and every tape cartridge in an enterprise tape library, so that you don't have to. You get all the low-cost and performance benefits of tape without the management overhead. StorageTek LTFS, Library Edition is simplifying tape storage like never before. LTFS LE further broadens the appeal of tape storage in industries with large file assets, such as media and entertainment, enabling customers to take advantage of tape's low cost per terabyte for backup, large-scale data retention, archive, and preservation projects, while saving 40 percent in acquisition cost over IBM's TS3500 20PB tape library solution. In addition to simplifying data movement between sites, LTFS also simplifies how users interact with tape storage.
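Because an LTFS volume presents itself as an ordinary file system once mounted, the "drag and drop" shown in the graphic can just as easily be scripted. Here is a minimal sketch; the mount point, directory names, and file pattern are hypothetical placeholders and assume the tape or library share has already been mounted via LTFS.

```python
# Minimal illustrative sketch; paths and file pattern are hypothetical and
# assume an LTFS volume is already mounted at /mnt/ltfs.
import shutil
from pathlib import Path

disk_dir = Path("/data/finished-masters")    # expensive disk tier
tape_dir = Path("/mnt/ltfs/archive-2013")    # folder on the LTFS-mounted tape tier
tape_dir.mkdir(parents=True, exist_ok=True)

for source in disk_dir.glob("*.mxf"):        # e.g. digital media master files
    target = tape_dir / source.name
    shutil.copy2(source, target)             # ordinary file copy, metadata preserved
    print(f"archived {source.name} -> {target}")
```

No proprietary backup or archive application is involved; the copy is a standard file operation, which is the point of the open LTFS format.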
As Forbes says, Oracle is keeping unique database and storage awareness features to drive the greatest business value for its customers.
Our vision for Application Engineered Storage is to:
- Fully automate database-to-storage tuning: with Oracle Database 12c, we do this using the Oracle Intelligent Storage Protocol
- Eliminate the rigidity of fixed storage systems so you can dynamically redeploy resources to meet changing business needs
- Free you from having to hire specialists schooled in the art of NetApp management, so you can use more general-purpose staff to manage your storage
By doing so, Oracle's Application Engineered Storage can reduce the TCO of your storage compared to NetApp, and if you take Hybrid Columnar Compression into consideration, Oracle can reduce your storage TCO by 80%, 85%, or even 90% compared to NetApp.
As we've seen throughout this presentation, co-development and deep integration with Oracle software deliver significant benefits not available from other storage vendors, enabling you to achieve:
- SUPERIOR APPLICATION, DATABASE, AND STORAGE PERFORMANCE: run your software faster.
- UNRIVALED levels of operational efficiency that exceed what is possible with other vendors, such as NetApp filers, by 3x, or in some cases potentially as much as 10x, and you buy less storage.
- LOWER RISK through fewer integration points with Oracle software, streamlined management, and the fact that the Oracle stack is engineered, tested, certified, deployed, upgraded, managed, and supported TOGETHER by one vendor.
- LOWER COSTS, not just for storage but for your software infrastructure as well, thanks to increased storage efficiency, fewer storage systems to purchase, and streamlined and in many cases automated management. You get lower TCO and higher software ROI.
"Oracle's Application Engineered Storage: Your Application Advantage" - Zbigniew Swoczyna, Senior Manager HW Sales Consulting, Oracle Polska