We are a managed service AND a solution provider of elite database and System Administration skills in Oracle, MySQL and SQL Server
We want the data, the whole data and nothing but the data.
You can no longer just throw one database at the problem and expect it to solve all your problems. Different parts of the solution require different technologies. I’ll talk mostly about Hadoop.
Bad schema design is not big data.
Using 8-year-old hardware is not big data.
Not having a purging policy is not big data.
Not configuring your database and operating system correctly is not big data.
Poor data filtering is not big data either.
Keep the data you need and use, in a way that you can actually use it. If doing this requires cutting-edge technology, excellent! But don’t tell me you need NoSQL because you don’t purge data and run un-optimized PL/SQL on 10-year-old hardware.
We always wanted more data. We never wanted to have to aggregate and then delete old data. We knew we were missing details, subtleties, opportunities. But we had to – because we wanted better performance and couldn’t afford unlimited disks. With new technologies, keeping more data becomes feasible.
One of the main reasons for the explosion of data stored in the last few years is that many problems are easier to solve if you apply more data to them. Take the Netflix Challenge, for example. Netflix challenged the AI community to improve the movie recommendations Netflix makes to its customers, based on a database of ratings and viewing history. Teams that used the available data more extensively did better than teams that used more advanced algorithms on a smaller data set.
More data also allows businesses to make better, more informed decisions. Why run focus groups to decide on a new store design if you can re-design several stores and compare how customers moved through each store and how many left without buying? On-line stores make the process even easier.
Modern businesses are becoming more scientific and metrics-driven, relying less on “gut feeling” as the cost of running business experiments and measuring the results decreases.
Data also arrives in more forms and from more sources than ever. Some of these don’t fit into a relational database very well, and for some, the relational database does not have the right tools to process the data.
One of Pythian’s customers analyses social media sources and allows companies to find comments about their performance and service, and to respond to complaints via non-traditional customer support routes. Storing Facebook comments and blog posts in Oracle for later processing results in most of the data getting stored in BLOBs, where it is relatively difficult to manage. Most of the processing is done outside of Oracle using Natural Language Processing tools. So why use Oracle for storage at all? Why not store and process the documents elsewhere and only store the ready-to-display results in Oracle?
Data, especially from outside sources, is not in perfect condition to be useful to your business. Not only does it need to be processed into useful formats, it also needs:
- Filtering for potentially useful information. 99% of everything is crap.
- Statistical analysis – is this data significant?
- Integration with existing data.
- Entity resolution. Is “Oracle Corp” the same as “Oracle” and “Oracle Corporation”?
- De-duplication.
Good processing and filtering of data can reduce the volume and variety of data. It is important to distinguish between true and accidental variety. This requires massive amounts of processing power. In a way, there is a trade-off between storage space and CPU: if you don’t invest CPU in filtering, de-duplication and entity resolution, you’ll need more storage.
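As an illustration, here is a minimal Python sketch of the entity resolution and de-duplication step described above: normalizing company names so that “Oracle Corp”, “Oracle” and “Oracle Corporation” collapse into one entity, then dropping duplicate records. The suffix list and record fields are made up for the example, not taken from any real pipeline.

    # Sketch: canonicalize company names, then de-duplicate records by entity and date.
    import re

    CORP_SUFFIXES = re.compile(r"\b(corp(oration)?|inc|ltd|llc)\.?$", re.IGNORECASE)

    def canonical_name(raw):
        name = raw.strip().lower()
        name = CORP_SUFFIXES.sub("", name).strip(" .,")
        return name

    def dedupe(records):
        seen = set()
        for rec in records:
            key = (canonical_name(rec["company"]), rec["date"])
            if key not in seen:          # keep only the first record per entity and date
                seen.add(key)
                yield rec

    mentions = [
        {"company": "Oracle Corp", "date": "2012-03-01", "text": "..."},
        {"company": "Oracle Corporation", "date": "2012-03-01", "text": "..."},
    ]
    print(list(dedupe(mentions)))   # only one record survives: both rows resolve to the same entity

At scale this kind of logic runs as a distributed job, but the trade-off is the same: CPU spent here is storage saved later.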
Data warehouses require the data to be structured in a certain way, and it has to be structured that way before the data gets into the data warehouse. This means that we need to know all the questions we would like to answer with this data when designing the schema for the data warehouse. This works very well in many cases, but sometimes there are issues:
- The raw data is not relational – images, video, text – and we want to keep the raw data for future use.
- The requirements from the business change frequently.
In these cases it is better to store the raw data and extract structure from it as it is parsed and processed. This allows the business to move from large up-front design to just-in-time processing. For example, the Astrometry project searches Flickr for photos of the night sky, identifies the part of the sky each photo shows and the prominent celestial bodies in it, and creates a standard database of the positions of elements in the sky.
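A minimal sketch of the just-in-time idea, assuming the raw events are kept as JSON lines with whatever fields were captured; the file name and field names are illustrative, not from a real system.

    # Sketch of "schema on read": keep raw records as-is, impose structure only when asked.
    import json
    from collections import Counter

    def read_events(path):
        with open(path) as f:
            for line in f:
                yield json.loads(line)      # the raw record keeps every field we captured

    # Today's question: which sources send the most events?
    by_source = Counter(e.get("source", "unknown") for e in read_events("events.json"))
    print(by_source.most_common(5))

    # Tomorrow's question can use fields we never planned a schema for,
    # because the raw records were kept instead of being flattened up front.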
The new volume of data, and the need to transform it, filter it and clean it up, require:
- Not only more storage, but also faster access rates.
- Reliable storage. We want high availability and resilient systems.
- Access to as many cores as you can get, to process all this data.
- Cores that are as close to the data as possible, to avoid moving large amounts of data over the network.
- An architecture that allows many of the cores to be used in parallel for data processing.
Hadoop is the most common solution for the new Big Data requirements. It’s a scalable distributed file system, with a distributed job processing system on top of the file system. It is a PLATFORM, not a solution – so Hadoop is unlikely to make your life easier. A lot of querying and processing tasks are more difficult with Hadoop than without. But it makes previously expensive things cheaper, and previously impossible things possible. This lets companies keep massive amounts of unstructured data and process it efficiently. The assumption behind Hadoop is that most jobs will want to scan entire data sets, not look up specific rows or columns, so efficient access to specific data is not a core capability.
Hadoop is open source, and there is a large eco-system of tools, products and appliances built around it: open-source tools that make data processing on Hadoop easier and more accessible, BI and integration products, improved implementations of Hadoop that are faster or more reliable, Hadoop cloud services and hardware appliances.
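For illustration, here is the classic whole-data-set scan in the MapReduce model, written as Hadoop Streaming-style scripts: plain programs that read standard input and write tab-separated key/value pairs. This is a generic word-count sketch, not any particular workload.

    # mapper.py: Hadoop runs a copy of this on every block of the input,
    # close to where that block is stored; it emits one "word<TAB>1" pair per word.
    import sys

    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

    # reducer.py: receives the mapper output already sorted by key
    # and sums the counts for each word.
    import sys
    from itertools import groupby

    pairs = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        print(f"{word}\t{sum(int(count) for _, count in group)}")

These would be submitted with the hadoop-streaming jar (-input, -output, -mapper, -reducer); the exact jar path depends on the distribution. Hadoop runs many mappers in parallel next to the data, sorts their output by key and feeds it to the reducers, which is why scanning everything is cheap and looking up one specific row is not.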
There is growing demand for:
- Real-time analytics.
- Serving data processed by Hadoop to customers with very low latency.
The exact balance depends on your workload – is it CPU-heavy? Just lots of data? Lots of disk bandwidth? More nodes mean cheaper scalability and more resilience, but also a higher cost of administration.
Modern data centers generate huge amounts of logs from applications and web services. These logs contain very specific information about how users are using our application and how the application performs. Hadoop is often used to answer questions like:
- How many users use each feature of my site?
- Which page do users usually go to after visiting page X?
- Do people return more often to my site after I made the new changes?
- What usage patterns correlate with people who eventually buy a product?
- What is the correlation between slow performance and purchase rates?
Note that the web logs could be processed, loaded into an RDBMS and analysed there. However, we are talking about very large amounts of data, and each piece of data needs to be read just once to answer each question. There are very few relations there. Why bother loading all this into an RDBMS?
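To make the first question concrete, here is a sketch of “how many distinct users use each feature”, done as one pass over the raw logs with no RDBMS load step. The tab-separated layout (user_id, timestamp, url) is an assumption made for the example.

    # Sketch: distinct users per page, in a single scan of the raw log.
    from collections import defaultdict

    users_per_page = defaultdict(set)

    with open("access.log") as log:
        for line in log:
            user_id, _ts, url = line.rstrip("\n").split("\t")[:3]
            users_per_page[url].add(user_id)

    for url, users in sorted(users_per_page.items(), key=lambda kv: -len(kv[1])):
        print(f"{len(users):8d}  {url}")

The same logic splits naturally into a mapper that emits (url, user_id) pairs and a reducer that counts distinct users per url, which is how it would run over terabytes on Hadoop.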
Hadoop has large storage, high bandwidth and lots of cores, and was built for data aggregation. Also, it is cheap. Data is dumped from the OLTP database (Oracle or MySQL) to Hadoop, transformation code is written on Hadoop to aggregate the data (this is the tricky part), and the aggregated data is loaded into the data warehouse (usually Oracle). This is such a common use case that Oracle built an appliance especially for it.
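A rough sketch of the aggregation step in the middle (the “tricky part”), assuming the OLTP dump arrives as tab-separated order rows and the warehouse wants daily revenue per customer as CSV it can load via an external table or SQL*Loader; the column layout is illustrative.

    # Sketch: aggregate a raw order dump into daily revenue per customer.
    import csv
    import sys
    from collections import defaultdict

    revenue = defaultdict(float)          # (customer_id, day) -> total amount

    for line in sys.stdin:                # raw dump streamed in, read exactly once
        _order_id, customer_id, order_date, amount = line.rstrip("\n").split("\t")
        revenue[(customer_id, order_date[:10])] += float(amount)

    writer = csv.writer(sys.stdout)
    writer.writerow(["customer_id", "day", "revenue"])
    for (customer_id, day), total in sorted(revenue.items()):
        writer.writerow([customer_id, day, f"{total:.2f}"])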
A lot of the modern web experience revolves around websites being able to predict what you’ll do next, or what you’d like to do but don’t know about yet:
- People you may know
- Jobs you may be interested in
- Other customers who looked at this product eventually bought…
- These emails are more important than others
To generate this information, usage patterns are extracted from OLTP databases and logs, the data is analyzed, and the results are loaded into an OLTP database again for use by the customer. The analysis task started out as a daily batch job, but soon users expected more immediate feedback, so more processing resources were brought in to speed up the process. Then the system started incorporating customer feedback into the analysis when making new recommendations. This new information needed more storage and more processing power.
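The simplest version of “customers who looked at this product eventually bought…” is just co-occurrence counting over usage data. A toy sketch with made-up sessions (the session format and product names are invented for the example):

    # Sketch: per viewed product, count what the same customer later bought.
    from collections import Counter, defaultdict

    # customer_id -> (products viewed, products bought), e.g. extracted from logs
    sessions = {
        "u1": ({"tv", "hdmi-cable"}, {"tv"}),
        "u2": ({"tv"}, {"soundbar"}),
        "u3": ({"tv", "soundbar"}, {"soundbar"}),
    }

    bought_after_viewing = defaultdict(Counter)
    for viewed, bought in sessions.values():
        for v in viewed:
            bought_after_viewing[v].update(bought - {v})   # don't recommend the item itself

    print(bought_after_viewing["tv"].most_common(3))       # e.g. [('soundbar', 2)]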
One customer uses Twitter as input for customer support. They search Twitter and save all tweets that mention their customers (say, AT&T) to their Hadoop cluster, then mine them for relevant information: the user, their location, what the problem is and how influential the user is. They then open tickets in a traditional customer support system based on this information. They can also go back and mine all the saved data for recurring complaints, problem areas, etc.
Another use case is analysts and marketing departments who mine blogs and job postings to find trending topics.
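A sketch of the mining step, assuming the tweets have already been saved to the cluster as JSON lines; the file name, field names and complaint keywords are made up for the example.

    # Sketch: turn saved tweets that mention a customer into ticket records.
    import json

    COMPLAINT_WORDS = {"outage", "down", "slow", "terrible", "cancel"}

    def tickets(tweet_lines, customer="AT&T"):
        for line in tweet_lines:
            tweet = json.loads(line)
            text = tweet["text"].lower()
            if customer.lower() in text and COMPLAINT_WORDS & set(text.split()):
                yield {
                    "user": tweet["user"],
                    "location": tweet.get("location", "unknown"),
                    "followers": tweet.get("followers", 0),   # rough "how influential" signal
                    "complaint": tweet["text"],
                }

    with open("saved_tweets.json") as f:
        for ticket in tickets(f):
            print(ticket)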
Start with a well-defined and obviously important requirement.
Oracle’s Big Data machine was built to move data between the Oracle RDBMS and Hadoop fast, and I doubt anyone can beat Oracle at that. Both the tools that are bundled with the machine and the fast InfiniBand connection to Exadata make it very attractive for businesses wishing to use Hadoop as an ETL solution. Note that the tools should also be available separately.