There is a new class of data-rich companies emerging whose value is based on the amount of information they can store and exploit. We call these “hyperscale data companies.” Examples today include Google, Amazon, and Facebook. For these companies to grow revenue and profit, their business models require that they be able to store vast amounts of data. As a result, storage becomes a core competence. We expect that, over time, companies across many industries will need to collect, store, and exploit very large amounts of data and will move toward becoming hyperscale data companies.

Two examples illustrate the trend. A large healthcare company currently has 3.5 petabytes of data and is installing new imaging scanners that generate 1 terabyte per session and more than 2.5 petabytes per year. To provide high-quality care to its patients and offer more services, it will need to store this data for years to come and keep it readily accessible. A large insurance company currently has 20 petabytes of data and grows by over 300 terabytes a month, every month. In addition to using this data to process claims, it wants to exploit the data to provide services to other claims processors and across the broader healthcare ecosystem.