Over the years, unplanned growth in transactional databases and associated systems has complicated data infrastructure. As the variety and volume of data sources grow, severe latency issues arise. Data generated by transactions is replicated to several locations for reporting and processed for various end systems for analytics. This proliferation of data through multiple stages and processes has significantly increased latency, resulting in slow response times. When data from different sources is pushed to users simultaneously, it creates usability issues. The overall architecture is extremely complex to manage with limited resources, which also increases data management risks such as security, privacy, and availability. The more widely data is dispersed, the higher the complexity and risk.
With existing technologies, optimizing across all five dimensions in the spider diagram is not possible. Trade-offs need to be made. Do you want a report that provides broad and deep data analysis at a bearable speed? That is normally only achievable after a lot of data manipulation, such as aggregation, and typically has run times of minutes, hours, or days. Alternatively, you could choose a report design that is simple and fast, but it will normally not provide deep or broad insights. In both scenarios, real-time updates are not possible by design; in a data warehouse environment, data is refreshed overnight via batch jobs. In summary, this illustrates today's typical trade-off between broad and deep analysis versus speedy and simple reports.
The result: three copies of the data, in different data models, with inherent data latency that can only be accelerated through caching.

In recent years, computer systems have gained more processing cores with large integrated caches. Main memory has become practically unlimited, able to hold all the business data of enterprises of every size, and falling prices have moved processing from disk/SSD to in-memory. Memory access is 1M–10M times faster than disk access. Disk-centric computing was also one of the major factors that forced the separation of transactional and analytical workloads: moving data to various locations was necessary for reporting to circumvent network issues, and pre-processing of data then became a necessity to optimize linear data transfers. We no longer have to live with those limitations; feasibility is given. Through advances in data science combined with relevant hardware trends, SAP is leading the real-time computing revolution, leveraging the power of in-memory computing to bring OLAP and OLTP back together in one database. This transforms how we construct business applications and our expectations in consuming them. Adopting this new technology will sharpen your competitive edge by dramatically accelerating not only data querying speed but also business processing speed.
SAP HANA permits OLTP and OLAP workloads on the same platform by storing data in high-speed memory, organizing it in columns, and partitioning and distributing it among multiple servers. This delivers faster queries that aggregate data more efficiently while avoiding costly full-table scans and single-column indexes. The SAP HANA Studio delivers an all-in-one support environment for system monitoring, backup and recovery, and user provisioning.
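The benefit of column organization for aggregation can be illustrated with a minimal sketch. This is a toy model in Python, not SAP HANA internals: it only shows why an aggregate over one column touches far less data in a columnar layout than in a row layout.

```python
# Toy sketch (not SAP HANA internals): contrasting a row-store and a
# column-store layout for an aggregation query such as
#   SELECT SUM(amount) FROM sales

# Row store: each record is stored contiguously; an aggregate over one
# column still has to walk every field of every row.
row_store = [
    {"id": 1, "region": "EMEA", "amount": 100.0},
    {"id": 2, "region": "APJ",  "amount": 250.0},
    {"id": 3, "region": "EMEA", "amount": 175.0},
]

# Column store: each column is stored contiguously; the aggregate reads
# only the "amount" column, a layout that also scans and compresses well.
column_store = {
    "id":     [1, 2, 3],
    "region": ["EMEA", "APJ", "EMEA"],
    "amount": [100.0, 250.0, 175.0],
}

def sum_row_store(rows, column):
    # Iterates over full rows even though only one field is needed.
    return sum(row[column] for row in rows)

def sum_column_store(columns, column):
    # Touches a single contiguous array, no full-table row scan.
    return sum(columns[column])

assert sum_row_store(row_store, "amount") == 525.0
assert sum_column_store(column_store, "amount") == 525.0
```

Both functions return the same result; the difference is how much data each layout must read, which is what makes in-memory columnar aggregation fast at scale.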
SAP launched its next-generation platform, SAP HANA, 18 months ago. It represents the fastest-growing product in SAP's 40+ years of history. The SAP HANA platform leverages the power of in-memory computing technology. The platform analyzes massive quantities of data in local memory so that the results of complex analyses and transactions are available at your fingertips, and business decisions can be executed without delay. This means that complex analyses, plans, and simulations can be based on real-time data and made available immediately. SAP HANA removes existing constraints (slow processes, restricted business users, limited ability to innovate) and delivers information for making strategic as well as operational business decisions in real time, with little to no data preparation. Only SAP HANA can deliver on these five dimensions today. And SAP HANA now also makes it possible to bring transactions and analytics together in one platform.
Real-Time Replication: replicate real-time data from multiple sources into SAP HANA over a wide-area network or within hybrid on-premise/cloud deployments.
Batch Data Load: a high-performance, highly scalable engine for extremely fast loads of large data volumes into SAP HANA. Supports native data and metadata connectivity to all major enterprise data sources, databases, files, and text batch loading. Provides rich transformations for data manipulation, data quality, trust, and confidence. Supports unstructured text data processing.
Streaming Data: analyze and process streaming or machine data from the integrated ESP in combination with data in SAP HANA.
Mobile/Machine Data: synchronize mobile and machine data using the available MobiLink capabilities for SAP HANA synchronization with SQL Anywhere.
Data Virtualization: access remote data as if it were "local" data, leveraging the remote database's unique processing capabilities by pushing processing down to it, and compensating for missing functionality in the remote database with SAP HANA capabilities.
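The data virtualization idea above, pushing processing to the remote database rather than copying all data first, can be sketched in a few lines. This is a hypothetical toy model in Python (the names `remote_query` and `remote_rows` are invented for illustration and are not an SAP HANA API):

```python
# Toy sketch of predicate pushdown (invented names, not a HANA API):
# the remote source evaluates the filter itself, so only matching rows
# cross the network, instead of transferring the whole table.

def remote_query(predicate):
    """Stands in for a remote database that can evaluate filters itself."""
    remote_rows = [
        {"id": 1, "region": "EMEA"},
        {"id": 2, "region": "APJ"},
        {"id": 3, "region": "EMEA"},
    ]
    # The filter runs "remotely"; unmatched rows never leave the source.
    return [row for row in remote_rows if predicate(row)]

# Pushdown: ask the remote side for EMEA rows only.
emea = remote_query(lambda row: row["region"] == "EMEA")
assert [row["id"] for row in emea] == [1, 3]
```

Without pushdown, every row would be transferred and filtered locally; delegating the predicate to the remote system is what makes remote data behave like local data at acceptable cost.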
A very common question related to migrating an existing landscape to SAP HANA is: "How will this affect my landscape? Will everything change? Will nothing be like it used to be?" Be assured, this will not be the case. Technically, a migration to SAP HANA is really only a change of the database; most other things in your landscape will stay as they are. An SAP Business Suite system running on SAP HANA can still connect to and be integrated with other systems and hubs the same way as a Business Suite system running on any other database. You can still use the same frontends and clients to connect to your system, and even the application servers can be reused as they are, provided they run on separate servers and not on the database host. What will change is the kind of database you are running, SAP HANA, which has a few operational specifics, for example that it runs on SUSE Linux Enterprise. Even this is nothing you need to be concerned about: thanks to the appliance model, you can leave most of these specifics to your hardware and technology partners if you choose to.
Reference: http://www.bluefinsolutions.com/insights/blog/the_sap_hana_hardware_faq/
SAP HANA allows changes to the data schema on the fly (e.g., adding new attributes). Try this with DB2 BLU. These are only a few examples off the top of my head.
HANA SPS07 Architecture & Landscape
What's New? SAP HANA SPS 07
Architecture & Landscape
SAP HANA Product Management