SAP HANA Training
- Learn how to do SAP HANA Modeling
- Prepare for SAP HANA Application Associate Certification "C_HANAIMP_131"
- Practice hands-on business scenarios
2. What to Expect? What Would You Achieve?
1) You will be able to do HANA modeling and HANA development in the HANA DB system.
2) You will understand the concepts of ETL and how to load data into HANA.
3) You will understand why HANA is required in today's big data environment.
4) You will be able to attend interviews and answer questions.
What is Required?
1) Everyday practice of at least 25 to 30 minutes.
2) Completion of the assignments, which will help with project preparation work.
3. Course Contents:
• MODULE 1: Approaching SAP HANA – 1 Hour
• Introduction to In-Memory Computing – Fundamentals of SAP HANA, What SAP HANA Can Do, What SAP HANA Can't Do. High-Performance Functionalities in SAP HANA – In-Memory Computing, Columnar Store Database, Massive Parallel Processing, Data Compression. SAP HANA for Non-SAP Use Cases – SAP HANA for Non-SAP Analytics, SAP HANA for Non-SAP Applications. SAP HANA Database Architecture, SAP HANA Landscapes and Scenarios.
• MODULE 2: Introduction to HANA Studio – 3 Hours
• Add SAP HANA System – Perspectives (Administration, Modeling, Development, PlanViz), Folders (Catalog, Content, Provisioning, Security). SAP HANA Database SQL Script – HANA Database SQL Basics, Types of Statements, Data Types, Operators, Expressions, Basic Query Execution, Joins, Loops, Sub-Queries. Catalog – Schema, Tables, Views, Functions, Stored Procedures, Indexes, Synonyms, Sequences, Triggers. Provisioning – SDA [Smart Data Access]. Security – Users, Roles.
• MODULE 3: SAP HANA Modelling – 2 Hours
• Introduction – Types of Models, Attribute Views, Joins, Using Filter Operations, Creating Restricted & Calculated Columns, Using Hierarchies. Analytic Views – Star Schema Design, Multi-Dimensional Modeling, Using Variables, Using Input Parameters, Advantages & Limitations.
• MODULE 4: Calculation Views (GUI) – 3 Hours
• Dimension Calculation View – Star Join Calculation View, OLTP Calculation View, Using Projection, Using Join, Using Aggregation, Using Union, Using Rank. Calculation Views (Scripted) – CE Functions Introduction, Creating Content Procedures.
• MODULE 5: Analytic Privileges – 2 Hours
• Classical Analytic Privileges, SQL Analytic Privileges, Dynamic Analytic Privileges. Turning Business Rules into Decision Tables. Table Functions.
4. • MODULE 6: In-Depth Modeling – 3 Hours
• Union Pruning, Refactoring Information Models, Schema Mapping, Propagate to Schematics, Show Lineage, Find Where-Used, Generating Time Data.
• MODULE 7: Modeling (Cont.) – 2 Hours
• Using Time Travel, Migrating Deprecated Information Models, Using Currency Conversion. Web-Based Modeling Workbench. Advanced HANA SQL Script – Temporary Tables, Triggers, Exception Handling.
• MODULE 8: Full-Text Search – 2 Hours
• Overview, Data Types & Full-Text Indexes, Using Full-Text Search. Application Lifecycle Management – Transport Using Developer Mode, Transport Using Delivery Unit Mode, Change Management. Analyzing Query Performance – Explain Plan, Visualize Plan, Performance Trace.
• MODULE 9: Data Provisioning – 2 Hours
• Data Provisioning Using SLT, Data Acquisition with BODS, Data Provisioning with Flat File Upload.
• MODULE 10: Data Provisioning (Cont.) – 2 Hours
• Data Provisioning Using Direct Extractor Connection, Security and Authorizations, Introduction to Lumira, Using Lumira to Prepare, Visualize and Compose Data.
• MODULE 11: ABAP Programming for SAP HANA – 3 Hours
• How SAP HANA affects the ABAP development process, introducing the ABAP development tools (ABAP in Eclipse), how to take ABAP to HANA, using SAP HANA as a secondary database, the various performance and functional issues encountered in SAP HANA migration, understanding the ABAP Test Cockpit, Code Inspector, Profiler, Trace and SQL Trace, improving performance with SQL performance tuning and monitoring, guidelines and rules when deploying ABAP for SAP HANA.
• MODULE 12: Accessing Data Stored in SAP HANA – 3 Hours
• What is the new Open SQL, definition of advanced views by deploying Core Data Services (CDS) in ABAP, creating CDS associations, how to implement authorization checks using CDS with ABAP, SAP HANA objects in ABAP, how to consume SAP HANA views using ADBC (ABAP Database Connectivity) and native SQL in ABAP, using ADBC and native SQL to consume SAP HANA database procedures.
• MODULE 14: Advanced Topics Overview – 2 Hours
• SAP HANA Dynamic Tiering, Delta Merge, SDI [Smart Data Integration], SAP HANA as Application Platform, SAP HANA Cloud, L Procedures, R Procedures, Partitioning of Tables, Introduction to AFL, PAL, BFL.
5. • What is HANA?
• Why the Need for HANA?
o IT and Business
• Why Use HANA?
• Top 10 Reasons Companies Use HANA
• Technology Basics
• HANA Architecture
• The “Heart” of HANA
• In-Memory Computing
• What is In-Memory?
• HANA Column Storage vs. Row-Based Storage
6. Applying SAP HANA
Where can I use HANA?
Use cases for HANA
Key takeaways about HANA
7. What is SAP HANA?
SAP HANA is SAP's in-memory analytics platform. Using HANA, companies can perform ad hoc analysis of large volumes of data in real time.
SAP HANA is a completely re-imagined platform for real-time business.
SAP HANA transforms businesses by streamlining transactions, analytics, planning, and predictive data processing on a single in-memory database so the business can operate in real time.
Why the Need for SAP HANA?
IT Challenges:
1) "Big data" (volume) keeps growing, making real-time access to operational enterprise systems a challenge
2) Costly for IT to purchase and maintain hardware to handle increasing data volumes
3) IT not the hero – dissatisfied business users
4) Processing and analysis results are delayed
5) Data not available in real time
Challenges for Business:
1) Inadequate access to real-time operational information
2) Need to react faster to events impacting the business
3) Need for functional users to quickly uncover trends and patterns – empower users and organizations
4) Need to improve business processes
Why Use SAP HANA?
HANA enables businesses to make smarter, faster decisions through real-time analysis and reporting, combined with dramatically accelerated business processes.
Removing the delay between insight and action turns a business into a "real-time business".
8. Top 10 Reasons Why Companies Choose SAP HANA
• Speed – Manage massive data volumes at high speed
• Agility – Enable real-time interactions across the value chain
• Any Data – Gain insights from structured and unstructured data
• Insight – Unlock new insights with predictive, complex analysis
• Applications – Run next-generation applications
• Cloud – Step up to the next advanced platform
• Innovation – Deploy the ultimate platform for business innovation
• Simplicity – Manage fewer layers and landscapes for lower costs
• Value – Innovate without disruption and add value to legacy investments
• Choice – Work with a preferred partner at every level
9. Key Terminology
• Aggregation: To enable the calculation of key figures, the data from the InfoProvider has to be aggregated to the detail level of the query, and formulas may also need to be calculated. The system has to aggregate using multiple characteristics.
• Business Warehouse (BW): SAP BW provides standard application data for program usage across various systems.
• Compression: Compression features help reduce space requirements dramatically, resulting in lower storage cost and improved input and output performance.
• Data Striping: The technique of segmenting logically sequential data, such as a file, so that accesses of sequential segments are made to different physical storage devices. Striping is useful when a processing device requests access to data more quickly than a storage device can provide it.
• In-Memory Computing Engine (IMCE): The heart of the HANA solution is the In-Memory Computing Engine (IMCE), which allows you to create and perform accelerated calculations on data.
• Online Analytical Processing (OLAP): OLAP makes multi-dimensionally formatted data available using special interfaces.
• Partitioning: You use partitioning to split the total dataset for an InfoProvider into several smaller, physically independent and redundancy-free units. This separation improves system performance.
• Structured Query Language (SQL): SQL is a special-purpose programming language designed for managing data held in relational database management systems.
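Aggregation and SQL as defined above can be illustrated with a few lines of standard SQL. This is a minimal sketch using Python's built-in sqlite3 with a hypothetical `sales` table, not HANA-specific syntax: the data is aggregated to the detail level of the query (here, one row per region).

```python
import sqlite3

# Hypothetical sales table for illustration; standard SQL, not HANA SQLScript.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EMEA", "A", 100.0), ("EMEA", "B", 50.0),
     ("APAC", "A", 70.0), ("APAC", "A", 30.0)],
)

# Aggregate to the detail level of the query: total amount per region.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('APAC', 100.0), ('EMEA', 150.0)]
```

The `GROUP BY` clause is what performs the aggregation over the `region` characteristic; a HANA analytic view does the same kind of roll-up, but pushed down into the in-memory engine.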
13. What is In-Memory?
• In-memory means all the data is stored in memory (RAM)
• No time is wasted loading data from hard disk to RAM, or keeping some data in RAM and some on disk temporarily while processing
• Everything is in-memory all the time, which gives the CPUs quick access to data for processing
SAP HANA Column Storage vs. Row-Based Storage
Storing data in columns is not a new technology, but it has not yet been leveraged to its full potential.
Columnar storage is read-optimized; that is, read operations can be processed very fast. However, it is not write-optimized, as a new insert might require moving a lot of data to make room for the new values.
HANA handles this well with the delta merge: the columnar storage performs very well for reads, while write operations are taken care of by the In-Memory Computing Engine (IMCE) in other ways.
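The row-store/column-store contrast above can be sketched with plain Python data structures. This is a conceptual toy, not HANA's actual storage format; the table contents are made up for illustration.

```python
# Row store: each record kept together; good for OLTP single-record access.
row_store = [
    {"id": 1, "region": "EMEA", "amount": 100.0},
    {"id": 2, "region": "APAC", "amount": 70.0},
    {"id": 3, "region": "EMEA", "amount": 50.0},
]

# Column store: each attribute kept contiguously; an analytic scan of one
# column touches only that column's values, which is why reads are fast.
col_store = {
    "id": [1, 2, 3],
    "region": ["EMEA", "APAC", "EMEA"],
    "amount": [100.0, 70.0, 50.0],
}

# Analytic query "SUM(amount)": the column store reads one list...
total_columnar = sum(col_store["amount"])
# ...while the row store must visit every whole record.
total_row = sum(rec["amount"] for rec in row_store)
assert total_columnar == total_row == 220.0

# New inserts land in a write-optimized delta buffer and are folded into the
# read-optimized main store later -- the idea behind HANA's delta merge,
# shown here in drastically simplified form.
delta = {"id": [4], "region": ["APAC"], "amount": [30.0]}
for col in col_store:
    col_store[col].extend(delta[col])
```

The point of the sketch: the aggregation over `amount` never touches `id` or `region` in the columnar layout, while the row layout drags every full record through memory.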
14. Column Storage Opportunities
• Compression: As the data written next to each other is of the same type, there is no need to write the same values again and again
• Partitioning: HANA supports two types of partitioning. A single column can be partitioned across many HANA servers, and different columns of a table can be partitioned to different HANA servers. Columnar storage easily enables this partitioning
• Data striping: When querying a table, there are often many columns that are not used; columnar storage lets the engine read only the columns the query actually needs
• Parallel Processing: It is always performance-critical to make full use of the available resources. With the current growth in the number of CPUs, the more work they can do in parallel, the better the performance
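The compression point above ("no need to write the same values again and again") is the idea behind run-length encoding, one of the classic column-store compression schemes. A minimal sketch, with an invented `country` column; HANA's actual compression (dictionary encoding plus several run-based schemes) is more elaborate.

```python
from itertools import groupby

def rle_encode(column):
    """Run-length encode a column: store each repeated value once, with its run length."""
    return [(value, sum(1 for _ in run)) for value, run in groupby(column)]

def rle_decode(pairs):
    """Expand (value, count) pairs back into the original column."""
    return [value for value, count in pairs for _ in range(count)]

# A sorted column with many repeats compresses well.
country = ["DE"] * 4 + ["IN"] * 3 + ["US"] * 5
encoded = rle_encode(country)
print(encoded)  # [('DE', 4), ('IN', 3), ('US', 5)]

assert rle_decode(encoded) == country
assert len(encoded) < len(country)  # 3 pairs instead of 12 values
```

Because all values in a column share one type and tend to repeat, this kind of encoding is far more effective column-wise than it would be across heterogeneous row records.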
15. Multiple Engines
• HANA has multiple engines inside its computing engine for better performance
• HANA supports both SQL and OLAP reporting tools; there are separate engines to perform the respective operations
• There is a separate calculation engine for calculations and a planning engine used for functional reporting. Above all sits a controller, which breaks incoming requests into multiple pieces and sends sub-queries to these engines. There are separate row and column engines to process operations on tables stored in row format and tables stored in column format
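The controller pattern described above (break a request into sub-queries, fan them out, merge the results) can be sketched in a few lines. This is a generic scatter-gather toy with invented partitions, not HANA's dispatcher; real engines run the sub-plans on separate servers.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical partitions of one table, as if spread across HANA servers.
partitions = [
    [("A", 10.0), ("B", 5.0)],
    [("A", 7.0), ("C", 2.0)],
    [("B", 1.0), ("A", 3.0)],
]

def partial_sum(partition):
    """Sub-query executed locally on one partition: SUM(amount) per key."""
    out = {}
    for key, amount in partition:
        out[key] = out.get(key, 0.0) + amount
    return out

# The "controller" fans the sub-queries out in parallel...
with ThreadPoolExecutor() as pool:
    partials = list(pool.map(partial_sum, partitions))

# ...then merges the partial results into the final answer.
merged = {}
for part in partials:
    for key, amount in part.items():
        merged[key] = merged.get(key, 0.0) + amount

print(merged)  # {'A': 20.0, 'B': 6.0, 'C': 2.0}
```

Summation distributes over partitions, so the merge step only has to add the partial totals; that algebraic property is what makes this kind of parallel decomposition safe.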
16. Where Can I Use SAP HANA?
Anywhere there are large volumes of data
• Aerospace & Defense
• Automotive
• Banking
• Chemical
• Consumer Products
• Cross Industry
• Customer Service
• Finance
• Healthcare
• High Tech
• Industrial Machinery & Components
17. Use Cases for Using SAP HANA
• Sales Reporting (CRM):
Quickly identify top customers and products by channel with real-time sales reporting. Improve order fulfillment rates and accelerate key sales processes at the same time, with instant analysis of credit memo and billing lists
• Financial Reporting (FICO):
Obtain immediate insights across your business into revenue, accounts payable and receivable, open and overdue items, top general ledger transactions, and days sales outstanding (DSO). Make the right financial decisions, armed with real-time information
• Shipping Reporting (LE-SHP):
Rely on real-time shipping reporting for complete stock-overview analysis. Better plan and monitor outbound deliveries, and assess and optimize stock levels with accurate information at your fingertips
• Purchasing Reporting (P2P/SRM):
Gain timely insights into purchase orders, vendors, and the movement of goods with real-time purchasing reporting. Make better purchasing decisions based on a complete analysis of order history
• Master Data Reporting (DG/MDM):
Obtain real-time reporting on key master data, including customer, vendor, and materials lists, for improved productivity and accuracy
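One of the financial-reporting metrics mentioned above, days sales outstanding (DSO), follows the standard formula DSO = (accounts receivable / credit sales) × number of days in the period. A minimal sketch with illustrative numbers (the figures are invented, not from any real report):

```python
def days_sales_outstanding(accounts_receivable, credit_sales, days=365):
    """Standard DSO formula: (AR / credit sales) * number of days in the period."""
    return accounts_receivable / credit_sales * days

# Illustrative numbers only: 150k receivable against 1.2M annual credit sales.
dso = days_sales_outstanding(accounts_receivable=150_000.0,
                             credit_sales=1_200_000.0,
                             days=365)
print(round(dso, 1))  # 45.6
```

A lower DSO means receivables are collected faster; computing it on live transactional data rather than on yesterday's extract is exactly the "real-time reporting" benefit the slide describes.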
18. Key Takeaways About SAP HANA
• Empowers Your Organization
Reduced reliance on IT resources
Real-time visibility into complete data for transaction and analytics processing
• Real-Time Analytics for Operational Data
Go from "what happened yesterday" to real time
Close to zero latency
Ability to leverage and analyze large volumes of data
• Low Total Cost of Ownership (TCO)
Non-disruptive to existing Enterprise Data Warehousing (EDW) strategy
Low TCO by leveraging the latest technology and delivery as a preconfigured appliance