There are many ways Oracle is optimizing and transforming existing data centers, including any one or a mixture of the following:

First of all, all of those best-in-class products are engineered to provide the best results in their respective categories. In addition, they are engineered to work together, and that alone simplifies a great deal for IT organizations. These become the building blocks on which customers can build their solutions, and they have already been engineered to work together.

Or we can go all the way to the massively simplified, purpose-built Engineered Systems, such as Exadata, Exalogic and the new Exalytics, and the general-purpose engineered system, the SPARC SuperCluster.

But since customers have a wide variety of needs, we take this even further with our Optimized Solutions, which provide flexible, predictable, guided deployments that reduce risk, lower TCO and improve productivity, and that customers build on top of best-in-class products or the SPARC SuperCluster.

With Oracle Optimized Solutions, you are able to select the best solution for a particular problem and then integrate the parts. In this case, you have a guided deployment, you know that all of the parts are designed to work together in an optimal fashion, and you have the flexibility to change or substitute various components, but you know it's going to work in the end, and you purchased it all from one company, so service is easier. And that's a big win.

Additional Information: In fact, this illustrates Oracle's overall strategy for transforming the datacenter. Today's IT infrastructure is massively customized, with mix-and-match technologies that have led to enormous IT complexity and rising costs. Duplicate systems; proliferating and inconsistent data about customers, employees, products and services; and makeshift integrations lead to longer times to market, poor customer service, inefficient processes and lost opportunities for achieving economies of scale.
The complex web of systems and processes results in long lead times for projects and increasing pressure on datacenter power and cooling. Systems are relatively fragile and risky to modify. Today's model is focused on components (perhaps pools of compute or storage components) and is services/labor intensive (internal or external resources) across implementation, integration and ongoing maintenance. Each application or service deployment is unique, which drives up cost, time and fragility and slows down agility and the speed of change.

Most vendors and customers are focused on infrastructure building blocks as the future direction: incremental improvements via low-level building blocks (e.g., server virtualization). Most companies and vendors are able to achieve incremental improvements and efficiencies through server virtualization, moving toward virtualized pools of compute, storage and network resources. These are typically low-level hardware and hypervisor/server building blocks, not application oriented, and they are limited to incremental improvements. Oracle offers this too, because there is real value in standardizing even at the IaaS level.

Massive simplification is game changing. We see an inflection point in the industry, an opportunity to move past the typical incremental improvements and fundamentally change the IT operating model. Oracle is focused on the game-changing opportunities for how IT buys, deploys and runs services. Two key areas are elastic cloud services and pre-integrated, workload-optimized systems. The goal is to take out massive amounts of labor, not just up front but throughout the lifecycle of the services you deliver to the business. Oracle is uniquely positioned to deliver vertically integrated systems, transforming how technology is delivered for the datacenter.
Integration happens at multiple layers of the technology stack, starting with “best in class” component technology, database, middleware, and applications, and ultimately through tightly integrated, highly optimized, engineered systems for specific and general purpose workloads.
What we find is that there are storage-related challenges that are common across customers.

First: how do I make sure that my storage systems meet user and application performance requirements under the strain of all of this data?

Second: I need to make sure that my infrastructure can adapt to dynamic workloads. Can I afford long application development cycles, or dedicated islands of storage for independent applications? Increasingly, the answer is no.

Third: how do I maximize application availability and meet my SLAs? Meet backup windows? And if something does happen, can I recover quickly?

Fourth: how do I avoid storage sprawl and the accompanying data silos? These add directly to complexity and cost, as well as integration headaches, as the number of storage systems increases, leading to storage management headaches and having to hire more people. And can I find the right people to hire, making sure that they have those rare application-to-storage skills so they can manage performance and availability across the whole path?

What should be my archiving and long-term data retention strategy? Do I keep using disk for all storage tiers, since disk keeps getting cheaper? A lot of my data doesn't get accessed much after ninety days, so do I move it to tape? And if I need to access something from tape, will it be fast enough?

Fifth: how do I maximize software ROI? Software is the most expensive part of my data center, so maximizing my return on investment means making sure that the licenses I buy run as fast as possible.
This requires storage with high throughput and balanced performance across varied workloads.

Sixth: how do you make sure that the storage you buy will integrate seamlessly with the SPARC T5 servers and will be optimized for Oracle Database and applications? How many vendors will you have to call to get support? How much time will you spend dealing with vendor finger-pointing when issues come up, and you know they will? SPARC T5 servers come with numerous hooks to speed installation, integration and management with ZFS Storage Appliances.

And finally, what's the total cost of ownership? If I keep on with storage sprawl, how much is it going to add to power, cooling and floor space beyond the hardware costs? Will there be hidden software costs? We need storage solutions that reduce the total amount of storage, require fewer discrete systems that need less management, and carry fewer extra licensing fees.
But every day you are inundated with vendor-generated buzzwords that can be confusing and distract you from meeting the objectives of your organization. I'm sure you've seen these. Are they trial balloons? Trends? Real? Should you change direction and implement any of them? Is one of them a solution to your business problem? Software-defined networks, software-defined data centers, software-defined storage... what's next, software-defined software? The cloud is the solution regardless of the problem you're facing, and there's an endless variety: public, private, hybrid, OpenStack, CloudStack. And you have everything-as-a-service. Should you outsource every aspect of IT to a service provider? Do you go all-flash storage? All-SSD? A mixture of flash and disk? Of course, you have to get into big data, hire some data scientists, get Hadoop installed. Wait, is there a business need or objective to meet with analytics? Should you virtualize your whole IT environment, databases and applications included? Should you support all devices that employees may bring to work? You have the real challenge of meeting business objectives with a mostly flat budget, but there are all of these vendor-generated buzzwords coming at you every day, distracting you from the main mission that you're getting measured on. Where do you go from here?
One problem you are immediately faced with is data that’s growing exponentially. Of course, you probably don’t have 8,000 exabytes on your data center floor. This is data being accumulated by every person and company globally. However, for most companies data is growing 50%-70% per year – essentially doubling every two years. This includes data that your OLTP applications are generating, data you’re collecting from your website, data that you’re amassing for an analytics project, and so on. Then there is the data that you have to retain “forever” to comply with regulations. It all adds up and you have big storage challenges.
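As a sanity check on the "doubling every two years" figure, the doubling time at a compound annual growth rate follows directly from the rates cited above; this is just the arithmetic, not an Oracle tool:

```python
import math

def doubling_time(annual_growth):
    """Years for data to double at a compound annual growth rate."""
    return math.log(2) / math.log(1 + annual_growth)

# At the 50%-70% annual growth cited above:
print(round(doubling_time(0.50), 2))  # 1.71 years
print(round(doubling_time(0.70), 2))  # 1.31 years
```

So "essentially doubling every two years" is actually slightly conservative at these growth rates.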
Oracle has a strong position in the storage market. IDC recently published a report stating that Oracle's storage portfolio generates revenue in excess of a billion dollars. This includes disk storage systems, tape, and storage in Engineered Systems. The report also states that "More storage innovation is on the horizon with Oracle. Product refreshes are coming; even tighter integration with Oracle is coming. More synergy with big data is coming. And Oracle is working to get the go-to-market model right to better compete with incumbents such as EMC and NetApp." Going along with these strong statements from IDC, Steve Duplessie, Founder and Senior Analyst at the Enterprise Strategy Group, stated that Oracle is the number three NAS vendor.
Applications and compute engines continue to evolve. Running multiple applications per server image is a relic of the past. Today the vast majority of application providers will only support their application if it is the only one running on the physical or virtual server; they won't support it if other applications are running alongside. And there is a lot of innovation around this compute model, including:

1U servers
Blade servers
Micro servers
Virtual servers
And now virtual applications

The evolution continues.
Storage systems have not evolved much. They're still primarily architected as general-purpose storage, designed and built to work with as many applications as they can. This makes some degree of sense: from the IT department's perspective, they can share the load and the cost across applications and business units. And, of course, storage vendors want to sell into as many environments as they can, so general-purpose works for them. However, the result is that general-purpose storage systems don't have much application awareness. They provide the same services to every application. In fact, applications have diverging requirements. Some need high performance, some require a higher level of protection, and some may have dynamically changing performance requirements depending on the time of day, month of the year, etc. For example, retailers need maximum performance for the fall shopping season, when 80-90% of revenues are generated, but not so much in, say, February. So, how can you address this issue?
When you attempt to build your own system, the DIY model, the degrees of separation between the application and the storage significantly increase, requiring companies to spend more time on manual storage management. In fact, approximately two-thirds of all enterprise storage costs are related to management. So when it comes to upgrading, maintaining, tuning, or troubleshooting, for example, storage management costs could triple if customers actually took storage-vendor responsibilities upon themselves. Enterprise storage quality is driven by rigorous system test and continuous component verification, so if customers are going to start validating and testing drives, controllers, enclosures and application integration themselves, the time they have to focus on the elements that bring value to the business becomes greatly diminished.
Oracle cuts through the buzzword fog and brings clarity to the data center with our unique advantage….
When we look at trends affecting Oracle Database storage, we turn both to traditional analysts like IDC and Gartner and to customers like yourselves, in the form of the Oracle User Group community. And we hear the same thing from all sources: there are several key drivers working together to drive storage. These forces and their impacts are not new, but a smaller and smaller number of the drivers are playing pivotal roles in defining your environments and our solutions to meet your needs.

Specifically, we see the growth of automated data sources, everything from RFID tags to health monitors to the solar panels on your roof, generating continuous streams of structured data that are used in multiple ways. The effect of this is that the requirements for database storage are increasing by between 40% and 45% per year. That's not quite as high as unstructured data, but it means that on average our database size doubles every two years.

Secondly, we see more consumers of data, much of it driven by the mobile computing and access revolutions, but also by the fact that we have figured out how to do "mash-ups" of data from different sources to better understand our environments. Some may call this "Big Data," but whatever you call it, it is increasing the number of times that "current" and "historical" data are accessed, which increases the need for high-performance databases, and for the storage supporting them, at rates much higher than Moore's law would imply database servers can keep up with. This entails increasingly complex database installations with more servers, more storage devices, more software integration points, and typically more integration costs associated with the storage, so we need some new thinking here as well.

Virtualization is our fourth driving factor, for several reasons. And while people may think I'm talking about storage virtualization, I'm actually referring to the traditional server virtualization that is
underlying most efforts at data center modernization and the cloud. Server virtualization makes it possible to support the growing number of applications we just mentioned, but it is also making it more difficult for storage to keep up with your needs, since it fairly effectively defeats 30 years of optimizations put together by the storage industry. And finally, regulation and compliance are creating very long retention requirements, some of which have to be met with on-line resources, and these requirements drive up both the amount of data in any individual database and the challenges in managing it.
And what we find is that if you look at a typical data center, each individual application has its own Oracle Database instance, running on different servers with dedicated NAS or SAN storage silos. While volume is increasing, it is really the need to access this data faster and faster that leads to inefficiencies: traditional NAS and SAN storage can't keep up with performance demands, so utilization, as measured in terms of TBytes of data per TBytes of disk, keeps going down, while the total storage costs required by continuing operations keep going up and up <HIT RETURN>, shrinking the amount of funds that can be dedicated to supporting business innovation.
However, if you turn to Oracle storage for your database and non-database storage needs <HIT RETURN>, what you will find is that our ability to permanently shrink your storage, access your data faster, and reduce overall storage complexity and integration costs will bring back the 20% or 30% of your IT budget that you would like to dedicate to initiatives that directly support new businesses, better business processes, and overall business innovation.
While we are talking about storage for Oracle Database today, we know that you have a need to store all kinds of data in your environment. As a result, we have developed an optimized storage portfolio that offers best-of-breed stand-alone capabilities plus specific Oracle-on-Oracle optimizations that allow Oracle software to run faster on Oracle storage.

Our portfolio consists of best-of-breed database, NAS, SAN and tape storage solutions that support multi-purpose computing environments. For each of these areas, and for the broader Oracle Optimized Data Center, our storage solutions provide three basic benefits. With Oracle Optimized Storage, we enable you to permanently shrink the amount of storage you need, typically by 40% but in some cases by 80% or more. We allow you to run your databases and applications faster than with storage from other vendors, typically around 50% faster, but as much as 2x or 3x or more when you look at the additional impact of low cost. And we reduce your risk profile from both implementation and operational perspectives with streamlined management that has enabled Oracle IT and other organizations to reduce management overhead by up to 60%.

So, no matter what mix of database storage, SAN storage, NAS storage and tape is appropriate for your environment, we will allow you to do more while spending less and simplifying your environment.
And there are benefits to using Hybrid Columnar Compression that extend beyond storage efficiency. It turns out that HCC is not just an "on-disk" format; it is used directly by the Oracle Database to increase the efficiency of caches, networks and even memory. This accelerates many databases, which now don't have to go to disk to read more data, and it reduces the amount of memory needed for remote replication for disaster recovery purposes.

One retail database that we worked with was compressed using "query high" compression and not only achieved a 19x improvement in storage efficiency, but also ran faster because less data had to be moved to the system. Four of the queries we ran against it showed between a 4.7x and an 8.4x improvement in overall query performance. This setup was over a gigabit Ethernet link; similar tests over dedicated 8 Gbit/second Fibre Channel showed smaller but still significant improvements, in the 2x to 4x range. There is a whole session in this event on Hybrid Columnar Compression, so I won't go into any more details here.
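The gap between the 19x compression ratio and the observed 4.7x-8.4x query speedups is consistent with a simple Amdahl-style model in which only the I/O portion of query time shrinks; the I/O fractions below are back-solved assumptions for illustration, not measured values:

```python
def query_speedup(compression_ratio, io_fraction):
    """Estimated speedup when only the I/O-bound share of query time
    shrinks by the compression ratio; CPU time is unchanged."""
    return 1 / ((1 - io_fraction) + io_fraction / compression_ratio)

# With the 19x "query high" ratio above, the reported 4.7x and 8.4x
# speedups correspond to queries that were roughly 83% and 93% I/O-bound:
print(round(query_speedup(19, 0.83), 1))  # 4.7
print(round(query_speedup(19, 0.93), 1))  # 8.4
```

The model also explains the Fibre Channel result: a faster link means queries are less I/O-bound to begin with, so the same compression ratio yields a smaller (2x-4x) speedup.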
With our new Automatic Data Optimization software, or ADO, customers can now leverage automatic tiering and compression based on heat maps and usage patterns. This provides fully automated data lifecycle management, once again lowering the cost of performing such processes manually and making your environment more efficient.
Let's take a look at four tangible differentiators that we now offer as part of our Application Engineered Storage portfolio.

First, you may have heard of our Hybrid Columnar Compression software, which compresses Oracle Database data in both rows and columns, leading to up to a 50x reduction in the storage capacity required and an 8x improvement in database query speeds. (Click)

We also have a new capability known as Automatic Data Optimization that provides fully automated tiering and compression based on heat maps and usage patterns. This software automatically determines which data should be compressed at the maximum level for deep archives and which should be left uncompressed for frequent access. The days of doing this manually are over. (Click)

And there's the unique speed-dial capability I mentioned earlier: Oracle Intelligent Storage Protocol, or OISP. This enables us to dynamically tune up to 65% or more of the critical parameters for database tuning, particularly logbias and record size. This is dynamic automation, which is the solution to the data-growth deluge. (Click)

And then there's our storage systems' unique integration with our Engineered Systems lineup, Exadata among other offerings. By being co-engineered with Exadata, we achieve levels of efficiency and performance, particularly with a direct InfiniBand link, that surpass anything else available in the market from a backup perspective.

Now let's take a look at what pains, what problems, each one of these solves:

Hybrid Columnar Compression (Click): without it, customers have to acquire more and more expensive storage. At Oracle we see the world from a software point of view, a software vantage point, so we are always looking at finding new forms of efficiency through software. This is in stark contrast to pure hardware vendors, whose vantage point is limited to the confines of a physical box. Their mentality is to fill up one box as fast as possible so they can sell you another.
You won't hear another storage vendor focused on making your Oracle Database environment more efficient, requiring you to actually use less storage in your data center.

Automatic Data Optimization (Click): without it, customers have to spend more on manual tiering, manual compression and third-party software licenses.

OISP (Click): without it, customers have to spend more on manual tuning and cope with the challenges of more human errors.

And with Engineered Systems integration (Click): what other storage vendors won't tell you is that the RMAN backup block format is opaque to all third-party dedupe devices, such as DataDomain, which achieve very low deduplication ratios on Oracle Database. Basically, any form of deduplication is ineffective in reducing RMAN backup data. Whereas with the Oracle ZFS Backup Appliance and our compression capability, you can use up to 72% less storage than DataDomain.
(Notes: this slide has builds. There are detailed speaker notes to explain the slide. Please continue to read the speaker notes on the next page, which is marked as 'Hide' in Slideshow mode.)

If you look at the world today, this is the picture, right? Every server has its own network adapters and its own storage adapters. This architecture is inflexible and not scalable; it does not allow you to make changes easily. With Oracle Virtual Networking, we can get rid of those network devices and storage devices you have on the server. Instead, every server is connected with a one-time, point-to-point, high-speed, high-bandwidth, extremely low latency link into the two hardware boxes you see in the middle of the picture, which are called the Oracle Fabric Interconnects. And the reason we show two boxes instead of one is that this architecture gives you high availability.

Now what you do is install some drivers on every server, and the two Oracle Fabric Interconnects will contain all your network adapters and storage adapters. In this picture, each of those Fabric Interconnects has four I/O module devices in the bottom. Those are the devices that connect into your 1 Gig or 10 Gig Ethernet fabrics or into your 8 Gig Fibre Channel fabric. And what we do is take a single 8 Gig Fibre Channel port, for example, bind it into a switch, and virtualize that port: we make that port look like 64 Fibre Channel HBA devices. Similarly, if we take a 10 Gig Ethernet link and bind it into a switch port, we make that one 10 Gig link look like 128 Ethernet NIC devices. So essentially these two Fabric Interconnect boxes that you see in the middle contain all your network adapters and storage adapters for a rack or two of servers. Once the systems are installed and set up, you never have to go in and touch the server again.
Using our management software, you can simply go and add as many network ports or storage ports as you need on any one of the servers bound into this fabric. For example, say you are running Oracle virtual machines on the server at the left of the picture and you need to install 12 Ethernet devices and 4 Fibre Channel devices. You simply go into the Oracle Fabric Manager console and say: give this server 12 Ethernet NICs and 4 Fibre Channel HBAs. They show up on the host, and the beauty of the solution is that as far as the server is concerned, it is as if somebody had gone in and plugged in 12 Ethernet adapters and 4 Fibre Channel adapters. They look, function and behave just like native adapters.
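A toy model of the bookkeeping behind this provisioning step, using the per-port limits mentioned above (128 vNICs per 10 GbE port, 64 vHBAs per 8 Gb FC port). The class and method names here are illustrative, not the actual Oracle Fabric Manager API:

```python
# Per-port virtual adapter limits from the description above.
PORT_CAPACITY = {"10GbE": 128, "8GbFC": 64}

class FabricPort:
    """Tracks how many virtual adapters remain on one physical port."""

    def __init__(self, kind):
        self.kind = kind
        self.free = PORT_CAPACITY[kind]

    def allocate(self, n):
        """Hand out n virtual adapters if the port has headroom."""
        if n > self.free:
            raise ValueError(f"only {self.free} virtual adapters left")
        self.free -= n
        return n

eth = FabricPort("10GbE")
fc = FabricPort("8GbFC")
# "Give this server 12 Ethernet NICs and 4 Fibre Channel HBAs":
eth.allocate(12)
fc.allocate(4)
print(eth.free, fc.free)  # 116 60
```

The point of the sketch: provisioning adapters is pure software bookkeeping against the port's virtual capacity, which is why no one has to touch the server.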
Now, let's look at the attributes of Oracle Virtual Networking. First, it's a fairly simple architecture. When we converge everything onto two pipes, you have fewer switches, fewer cards and fewer cables. From the Oracle Fabric Interconnect systems, you go straight to a core Ethernet switch or a core Fibre Channel switch. Consequently, you flatten your topology, you have fewer tiers, and you have the software-defined networking capability that I will explain more later, which means that this entire infrastructure can be defined in software. You can add Ethernet connections and Fibre Channel connections, and you can create SDN networks, all on demand.
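The "fewer cards and cables" claim can be made concrete with a quick count; the per-server adapter figures below are assumptions for illustration, not Oracle sizing guidance:

```python
def traditional_counts(servers, nics_per_server=4, hbas_per_server=2):
    """Adapters and cables when every server carries its own NICs and
    HBAs, one cable per adapter (assumed per-server figures)."""
    adapters = servers * (nics_per_server + hbas_per_server)
    return adapters, adapters  # one cable per adapter

def ovn_cables(servers, interconnects=2):
    """With Oracle Virtual Networking: one link per server to each of
    the two Fabric Interconnects; server-side adapters become virtual."""
    return servers * interconnects

adapters, cables = traditional_counts(20)  # a rack of 20 servers
print(adapters, cables, ovn_cables(20))    # 120 120 40
```

Under these assumed figures, a 20-server rack goes from 120 adapters and 120 cables to 40 cables and no physical server-side adapters at all.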
So, let's summarize. With Oracle Virtual Networking technology you get the ability to build a true cloud infrastructure. Everything in your racks can be defined in software for connectivity purposes. That means you can take any application running on a server, whether it is a database, a virtual machine or a VDI session, and bind it into any network or storage fabric on demand.

Oracle Virtual Networking dynamically manages East-West traffic, not just North-South. East-West traffic is VM-to-VM links on a server cluster within a data center or across data centers. North-South traffic runs up from servers through switches and out onto the network. The traditional way of handling East-West traffic is to rely on virtual LANs (VLANs) in the Layer 2 network. Each time you move a virtual machine, you have to reconfigure the VLANs, which can take hours and is prone to human error. And if you use a virtual switch, another layer of complexity is added to the stack, one that is not particularly agile because you still have to configure switch ports, virtual servers, and virtual switches by hand.

With Oracle Virtual Networking, connections are configured within Oracle SDN using a supremely elegant resource: the private virtual interconnect. The private virtual interconnect is an isolated link that connects virtual machines to other data center resources. Being software-defined, it can be deployed in seconds. You can use private virtual interconnects to join any number of virtual machines, virtual appliances, networks, storage devices, and bare-metal servers in isolated Layer 2 (L2) domains. Private virtual interconnects do not rely on VLAN or port configurations.

The three most important points about Oracle Virtual Networking: first, it is simple, and it saves you money. You wire once and you walk away; you never need to touch that infrastructure again.
It is agile, which means you can dynamically make connectivity changes in your data center depending on your application profiles and needs. And it is extremely fast: you get a lot more throughput than you would with any other architecture, and it eliminates many I/O bottlenecks that you might face in a traditional architecture.
New Technology for Big, "Forever" Archives

Regulation, compliance, Big Data analytics, digitization, cultural preservation and extreme growth in social media are coalescing to drive a mindset of "keep everything forever." Add to that the onset of new high-definition digital formats that create ever larger file sizes, and customer requirements for data archiving are being driven to new, even more extreme heights than ever before.

There are many drivers of the archive storage and software market. The more historical driver of archive is storage efficiency, driven by the lifecycle content management framework: looking at the value and access of data over time and moving it to the appropriate tier of storage based on that pattern. To improve the storage efficiency of primary and even backup systems, archive is the way to reduce the content within the primary and backup storage sites. The less there is to keep in primary, the less there is to back up, and the archive system still provides data protection for the data. Archive software applications are also focused on supporting the ILM concept through policy setting and support for multiple types of storage behind the software: disk, tape and even cloud repositories.

Compliance and e-discovery are mostly driven by legal requirements. Life of the patient plus two years is an example in healthcare; in financial markets, seven years is standard for certain types of information. Other archives may be focused on "forever." Compliance really sets the stage for what data needs to be kept for what period of time, while e-discovery mandates are about search and retrieval and ensuring that the data itself has not been altered in any way. Many software products have built-in special functions to support e-discovery requirements. Of course, overall primary data growth continues to push the growth of data in the archive and backup markets.
When primary data grows out of control, it is that much more critical to have a strong archive and backup strategy so that you can make efficient use of all storage. Business opportunity may be getting the most attention in 2013 as big data analytics gains traction. It is driving organizations to save data that may once have been thought non-critical but could now be used for data mining. The latest push with big data analytics is to keep the raw inputs, once thought of as being of little value, since future analytics may apply to them. Digitization is also driving the archive market. This is mainly driven by analog-to-digital conversions within media and entertainment. It is also driven by knowledge institutions looking to digitize photos, documents, and so on for posterity. Some businesses will continue to digitize their older records, although the size of that content is significantly less.
* Source: ESG Digital Archive Market Forecast 2010 to 2015

ESG notes in its last Digital Archive Market forecast that digital archives are growing at 56% per year. While the exabytes behind the numbers are a little more generous than IDC's (it is unclear whether either factored in compression for storage), the growth rates themselves are very similar. The percentage of unstructured vs. structured data is also in line with other analyst predictions: 88% of the data is unstructured, and as we go through the StorageTek archive strategy, the focus will be clearly on this unstructured portion of the market. Note: database archiving, as viewed by ESG and many database vendors, is simply a secondary instance of the database itself; very little of this would go to a near-line or off-line system.
A quick note about the efficiency of tiered storage for archiving: the more data you store for longer periods of time, the more likely you will want to look at tiered storage solutions. This slide simply shows the cost savings, in average $/TB, that could be realized with a tiered storage approach. Of course, every situation is different, and your percentage mix may vary depending on a number of factors, including the amount of data, the type or types of data, the specific use case driving frequency of access, etc. So, be careful to point out that this is simply a rule-of-thumb estimate and not meant to be any kind of absolute, but you can see that the average cost per TB can drop quite significantly when tiered storage is built into your archive strategy.

Cost is: the acquisition cost of the equipment.
Access is: the speed and ease of making stored data accessible to the application.
Capacity is: the percentage of capacity that should be stored on each medium.

As an example, average selling prices (costs) per GByte for the various storage technologies generally fall into the following ranges:
Enterprise disk: $25 - $60
Modular: $15 - $35
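The "average $/TB" effect of tiering is straightforward capacity-weighted arithmetic. The mix and prices below are invented purely for illustration (they are not the slide's actual figures):

```python
def blended_cost_per_tb(tiers):
    """Capacity-weighted average $/TB.
    `tiers` maps tier name -> (fraction_of_capacity, dollars_per_tb)."""
    total_fraction = sum(f for f, _ in tiers.values())
    assert abs(total_fraction - 1.0) < 1e-9, "fractions must sum to 1"
    return sum(f * c for f, c in tiers.values())

# All data on disk vs. 20% hot on disk / 80% cold on tape
# (illustrative prices, not Oracle list prices):
all_disk = blended_cost_per_tb({"disk": (1.0, 2000)})
tiered = blended_cost_per_tb({"disk": (0.2, 2000), "tape": (0.8, 100)})
print(all_disk, tiered)
```

With these assumed numbers the blended cost drops from $2,000/TB to $480/TB, which is the kind of rule-of-thumb saving the slide is illustrating.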
The latest Clipper Group study has increased the cost advantage of tape over disk. In the study, released in 2013, the Clipper Group walks through the costs associated with deploying storage on tape vs. disk. The study concludes that the average disk-based solution costs 26 times the TCO of the average tape-based solution. With StorageTek LTFS, customers can deploy diskless storage solutions that leverage the cost advantage of tape over disk.
Not all data is created equal. As data ages it is accessed less often, so it makes pure financial sense to offload less-accessed data to cheaper storage media; it can also improve the performance of the overall system when your primary systems are not weighed down by stale data. Aligning different categories of data with different types of storage media directly reduces cost and effectively manages large volumes of data. To meet these intensive data demands, tiered storage is a critical approach for storage success: it aligns the value of your data assets with the most appropriate storage media in order to reduce cost and effectively manage data throughout its lifecycle.
The slide's tiers, automated by software:
• Primary (flash storage, disk storage, hybrid storage): high-performance applications, mission-critical databases, consolidation, analytics
• Secondary: fixed-content serving, e-mail and collaboration, business continuity
• Near-line storage (tape libraries, tape drives/media): video, medical, legal, deep archives, disaster recovery
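The age-based alignment described above can be sketched as a simple tiering policy; the thresholds and tier names below are illustrative assumptions, not an Oracle policy:

```python
# Hypothetical tiering policy: map a dataset's access recency to a tier.
# Thresholds (30 / 90 days) are made-up examples of the kind of rules
# tiering software applies automatically.

def choose_tier(days_since_access, mission_critical=False):
    if mission_critical or days_since_access < 30:
        return "primary"       # flash / high-performance disk
    if days_since_access < 90:
        return "secondary"     # capacity disk, hybrid storage
    return "near-line"         # tape library / deep archive

for age in (5, 45, 400):
    print(f"last accessed {age:3d} days ago -> {choose_tier(age)}")
```

The point of the slide is that software, not an administrator, evaluates a policy like this continuously and migrates the data accordingly.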
What's holding back your application performance? It could very well be your storage. Hard disk drives, even the fastest 15K RPM models, cannot feed your servers fast enough: they are some 260 times slower than what today's servers are capable of, so the servers spend most of their time waiting for data after a request. They are starving. The traditional remedy of adding more expensive DRAM may no longer suffice as data sets double every two years. Today your applications are being choked by spinning disk drives, which cause storage latencies and I/O bottlenecks.
Flash attributes:
• Delivers low latency: solves the I/O bottleneck; lower latency means applications respond more quickly
• Provides higher throughput: more bandwidth means applications can send more data at the same time
• Requires less power: less power means you can save on your energy bill while increasing performance
• Smaller footprint: more space for you to use for other projects
Flash storage technology can help bridge this gap by sitting between the server and the spinning hard disks. This allows applications to get the fast response times they require from flash while infrequently used data stays on slower HDD technology.
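The role flash plays here is essentially that of a read cache in front of disk. A minimal read-through sketch (a hypothetical class for illustration, not an Oracle API):

```python
# Minimal read-through flash-cache sketch: serve hot reads from the fast
# tier, fall back to the disk tier on a miss and populate the cache.

class FlashCache:
    def __init__(self, disk, capacity=1024):
        self.disk = disk            # backing store: dict of block -> data
        self.cache = {}             # the "flash" tier
        self.capacity = capacity    # max cached blocks
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            return self.cache[block]          # fast path: flash
        self.misses += 1
        data = self.disk[block]               # slow path: spinning disk
        if len(self.cache) >= self.capacity:  # naive eviction: drop oldest entry
            self.cache.pop(next(iter(self.cache)))
        self.cache[block] = data
        return data

disk = {"blk1": b"hot", "blk2": b"cold"}
tier = FlashCache(disk, capacity=1)
tier.read("blk1")
tier.read("blk1")   # second read is served from the flash tier
```

Real hybrid appliances use far more sophisticated placement and eviction, but the principle is the same: repeated reads stop paying the disk-latency penalty.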
Transcript of "Tendencias Storage"
Hardware and Software
Engineered to Work Together
Principal Sales Consultant
Focus on storage
And Data is Growing Exponentially
An Expanding $1B Storage Portfolio
Oracle disk storage systems, disk storage for
databases and tape sales are in excess of $1B —
“I believe Oracle has just become the #3 NAS guy with ZFS
behind NTAP and EMC.” —Steve Duplessie, ESG
Oracle is Driving the Storage System
Enterprise software drives system
performance and efficiency
Storage software drives system
performance and efficiency
Custom hardware drives system
performance and efficiency
Do-It-Yourself Storage Poses Issues
2/3rds of all enterprise
storage costs are related
Storage management costs could triple if customers took storage management upon themselves
Oracle Intelligent Storage Protocol (OISP)
Cut Database and Storage Tuning Time in Half
Oracle Intelligent Storage Protocol: Unique language that
enables dynamic communication between an Oracle
Database and Oracle’s ZFS Storage Appliances.
DB I/O metadata
• Available only for Oracle Database 12c customers using
Oracle Direct NFS (dNFS) with Oracle ZFS Storage
Appliances that are running software version OS8
0.03ms 2TB DRAM
0.10ms 10TB FLASH (R / W)
30.00ms 2PB DISK
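The latency figures above make the gap easy to quantify; a quick sketch using the slide's own numbers (0.03 ms DRAM, 0.10 ms flash, 30 ms disk):

```python
# Relative access latency, computed from the figures on the slide.
dram_ms, flash_ms, disk_ms = 0.03, 0.10, 30.00

print(f"disk is {disk_ms / flash_ms:.0f}x slower than flash")  # 300x
print(f"disk is {disk_ms / dram_ms:.0f}x slower than DRAM")    # 1000x
```

This is why the appliance keeps hot blocks in DRAM and flash and reserves the 2 PB disk tier for capacity.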
DB Control File, among other I/O
Oracle’s ZS3 systems dynamically
assign system resources to optimize
Oracle Database performance and efficiency
OS8 | OISP: Auto Tuning of Record Size, LogBias
Multiple shares, each with its own Record Size and LogBias
/mnt/dbname/redo (Record Size, LogBias)
/mnt/dbname/control (Record Size, LogBias)
/mnt/dbname/pfile (Record Size, LogBias)
/mnt/dbname/datafile (Record Size, LogBias)
/mnt/dbname/tempfile (Record Size, LogBias)
/mnt/dbname/chgtrack (Record Size, LogBias)
/mnt/dbname/backup (Record Size, LogBias)
With OISP, the logfile and datafile shares are tuned automatically:
/mnt/dbname/logfile (OISP sets Record Size, LogBias)
/mnt/dbname/datafile (OISP sets Record Size, LogBias)
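The kind of per-share tuning OISP automates can be sketched as a lookup from database file type to share settings. The specific recordsize/logbias values below are illustrative guesses for a sketch, not the values OISP actually negotiates with the appliance:

```python
# Hypothetical per-share ZFS tuning keyed by database file type.
# Values are illustrative only; OISP derives the real settings from
# the I/O metadata the database sends with each request.

SHARE_TUNING = {
    "redo":     {"recordsize": "128K", "logbias": "latency"},
    "control":  {"recordsize": "64K",  "logbias": "latency"},
    "datafile": {"recordsize": "8K",   "logbias": "throughput"},  # match DB block size
    "tempfile": {"recordsize": "128K", "logbias": "throughput"},
    "backup":   {"recordsize": "1M",   "logbias": "throughput"},
}

DEFAULT_TUNING = {"recordsize": "128K", "logbias": "latency"}

def tuning_for(share_path):
    """Pick tuning by the last path component, e.g. /mnt/dbname/redo."""
    kind = share_path.rstrip("/").rsplit("/", 1)[-1]
    return SHARE_TUNING.get(kind, DEFAULT_TUNING)
```

Manually, an administrator maintains a table like this per share; with OISP the database tags each I/O so the appliance can apply the right settings without that table.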
What About the Future?
Tape’s advantage is increasing!
MLC NAND Flash up to 4 bits/cell
SLC NAND Flash
Disk improvements: ~20%/yr
Tape improvements: ~40%/yr
Source: INSIC roadmap areal density projections translated into $/TB
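The ~20%/yr vs. ~40%/yr rates compound, which is why tape's advantage keeps widening. A sketch of the $/TB gap over time, under the simplifying assumption that areal-density gains translate directly into cost declines:

```python
# If cost/TB falls at the density-improvement rate, the tape advantage compounds.

def cost_after(start_cost, yearly_gain, years):
    """$/TB after `years` of compounding improvement at `yearly_gain`."""
    return start_cost / ((1 + yearly_gain) ** years)

disk0 = tape0 = 1.0  # normalized starting $/TB
for years in (0, 5, 10):
    ratio = cost_after(disk0, 0.20, years) / cost_after(tape0, 0.40, years)
    print(f"after {years:2d} years, disk costs {ratio:.1f}x tape")
```

Even starting from cost parity, the diverging rates alone put disk at roughly double tape's $/TB within five years under these assumptions.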
It’s Not Only About Cost/TB
Total cost of ownership (archive)1
Total cost of ownership (backup)2
Max shelf life (bit rot)
Best practices for data migration to new technology
( ~10’s of TB)
(~1 million TB)
Power and cooling1
Labor (TB managed per storage admin)3
Uncorrected Bit Error Rate (probability of, on average, 1 error in x TB)
“The cost of energy alone for the average disk-based (archive) solution exceeds
the entire TCO of the average tape-based solution.”1
1 The Clipper Group, “In Search of the Long-term Archiving Solution”
2 Enterprise Strategy Group, Inc., “A Comparative TCO Study: VTLs and Physical Tape Solution”
3 Moore, F., Horison Information Strategies, “Tiered Storage Takes Center Stage”
If a vendor offers the disk for “free,” it is a bad deal!
Oracle StorageTek Tape Strategy
Extend our leadership market position in tape by focusing on
technology innovation for tiered storage for heterogeneous
environments, including mainframe and open systems
Integrate with Oracle applications to deliver tiered storage business
Accelerate product development and innovation
– Extend our leadership in scalability, availability/reliability, and TCO
Oracle Tiered Storage: Automated By Software
Not flash or disk or tape, but making it easier to use them together
to migrate the data across the tiers of storage
• Mission Critical Databases
• Flash Storage
• Hybrid Storage
Backup and Recovery
Business Applications (11g partitions), ZFSSA, Axiom
• Video, Medical, Data
• Regulatory Compliance
• Disaster Recovery
SAM and VSM
90 Days to Forever
Oracle’s new ZS3 Series
Hybrid Storage Appliance
3x More Scalable
OS8 Storage OS Support
Single or Dual Controllers
8 PCIe Slots
16 Disk Enclosures*
12TB Read Flash
2TB Write Flash
* ZS3-2 will release with expansion to 8 disk enclosures. Scale to 16 expected within 6 months of release.
Single or Dual Controllers
14 PCIe Slots
36 Disk Enclosures
12TB Read Flash
10TB Write Flash
All Flash Configuration