This document contains a resume for Arti Patel summarizing her work experience and qualifications. She has over 6 years of experience working with SAP MM and SAP PI, including implementing subsidiary rollouts for various companies. Her experience includes requirements gathering, process mapping, configuration, interface development, and support. She holds a Bachelor's degree in Information Technology.
SQL Server 2016 introduces new features for business intelligence and reporting. PolyBase allows querying data across SQL Server and Hadoop using T-SQL. Integration Services has improved support for AlwaysOn availability groups and incremental package deployment. Reporting Services adds HTML5 rendering, PowerPoint export, and the ability to pin report items to Power BI dashboards. Mobile Report Publisher enables developing and publishing mobile reports.
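As a sketch of the PolyBase feature mentioned above, the following hypothetical T-SQL registers a Hadoop cluster as an external data source and exposes an HDFS directory as an external table; all names, addresses, paths, and columns are illustrative, not taken from the source.

```sql
-- Hypothetical example: register a Hadoop cluster and expose HDFS data
-- as an external table (names and locations are illustrative).
CREATE EXTERNAL DATA SOURCE HadoopCluster
WITH (TYPE = HADOOP, LOCATION = 'hdfs://namenode:8020');

CREATE EXTERNAL FILE FORMAT CsvFormat
WITH (FORMAT_TYPE = DELIMITEDTEXT,
      FORMAT_OPTIONS (FIELD_TERMINATOR = ','));

CREATE EXTERNAL TABLE dbo.WebLogs (
    log_date DATE,
    url      VARCHAR(400),
    hits     INT
)
WITH (LOCATION = '/data/weblogs/',
      DATA_SOURCE = HadoopCluster,
      FILE_FORMAT = CsvFormat);

-- Hadoop-resident data is now queryable with plain T-SQL and can be
-- joined with local SQL Server tables.
SELECT TOP 10 url, SUM(hits) AS total_hits
FROM dbo.WebLogs
GROUP BY url
ORDER BY total_hits DESC;
```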
The strategic relationship between Hortonworks and SAP enables SAP to resell Hortonworks Data Platform (HDP) and provide enterprise support for its global customer base. This means SAP customers can incorporate enterprise Hadoop as a complement within a data architecture that includes SAP HANA, Sybase, and SAP BusinessObjects, enabling a broad range of new analytic applications.
Leveraging SAP, Hadoop, and Big Data to Redefine Business (DataWorks Summit)
The document discusses leveraging SAP, Hadoop, and big data technologies to redefine businesses. It describes how the volume of digital data is exploding and includes both relational and non-relational machine-generated data. The document outlines how SAP focuses on providing an end-to-end value chain through its HANA data platform, which provides in-memory analytics, dynamic data tiering between HANA and Hadoop, smart data integration and quality features, and the ability to consume, compute and store data. Key features of HANA's integration with Hadoop include smart data access to Hive and Spark, support for MapReduce jobs, and access to HDFS.
This guide was written in the January-February 2014 timeframe.
Using SAP HANA Smart Data Access (SDA), it is possible to access remote data without first replicating it to the SAP HANA database. The following sources are supported (as of 2013):
- Teradata database
- SAP Sybase ASE
- SAP Sybase IQ
- Intel Distribution for Apache Hadoop
- SAP HANA
SAP HANA handles the remote data like local tables on the database. Automatic data type conversion maps the data types of databases connected via SAP HANA Smart Data Access to SAP HANA data types.
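A minimal sketch of what this looks like in HANA SQL, assuming a Hive ODBC DSN is already configured on the HANA server; the source name, schema, table, and credentials are all hypothetical:

```sql
-- Hypothetical sketch: register Hive as an SDA remote source over ODBC
-- and expose a Hive table as a HANA virtual table (names illustrative).
CREATE REMOTE SOURCE "HIVE_SRC" ADAPTER "hiveodbc"
  CONFIGURATION 'DSN=HIVE1'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hive;password=***';

CREATE VIRTUAL TABLE "MYSCHEMA"."V_WEBLOGS"
  AT "HIVE_SRC"."HIVE"."default"."weblogs";

-- The virtual table can now be queried, and joined with local tables,
-- as if it were local data.
SELECT COUNT(*) FROM "MYSCHEMA"."V_WEBLOGS";
```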
This guide explains a step-by-step approach to SAP HANA SDA for Hadoop data, covering the following:
- Hadoop installation
- Data load into the Hadoop system
- Activities on unstructured data in the Hadoop system
- ODBC driver installation and configuration on the HANA server for access to Hadoop data
- Smart Data Access in SAP HANA (through SAP HANA Studio), using Hadoop as a remote data source
Setup used for this guide:
1) Hadoop: HDP 1.3 for Windows (Hortonworks Data Platform), standalone, on a Dell laptop running 64-bit Windows 7 with 8 GB RAM
2) SAP HANA Server: running on a VM with 24 GB RAM, standalone HANA 1.0 SPS 7 on SLES 11 SP1
This is a point-of-view document showing various possible techniques for integrating SAP HANA and Hadoop, their pros and cons, and the scenarios in which each is recommended.
HAWQ: A Massively Parallel Processing SQL Engine in Hadoop (BigData Research)
HAWQ, developed at Pivotal, is a massively parallel processing SQL engine sitting on top of HDFS. As a hybrid of an MPP database and Hadoop, it inherits the merits of both. It adopts a layered architecture and relies on the distributed file system for data replication and fault tolerance. In addition, it is standard SQL compliant and, unlike other SQL engines on Hadoop, fully transactional. This paper presents the novel design of HAWQ, including query processing, the scalable software interconnect based on the UDP protocol, transaction management, fault tolerance, read-optimized storage, the extensible framework for supporting various popular Hadoop-based data stores and formats, and the optimization choices considered to enhance query performance. The extensive performance study shows that HAWQ is about 40x faster than Stinger, which is itself reported to be 35x-45x faster than the original Hive.
This document discusses harnessing big data in real-time. It outlines how business requirements are increasingly demanding real-time insights from data. Traditional systems struggle with high latency, complexity, and costs when dealing with big data. The document proposes using SAP HANA and Hadoop together to enable instant analytics on vast amounts of data. It provides examples of using this approach for cancer genome analysis and other use cases to generate personalized and timely results.
HAWQ is an enterprise platform that provides the fewest barriers, lowest risk, and fastest way to perform big data analytics on Hadoop. It combines SQL with Hadoop by providing ANSI SQL capabilities on Hadoop for high performance analytics. HAWQ stores all data directly on HDFS and runs on various Hadoop distributions like Pivotal HD, HDP and IBM BigInsights.
Bring Your SAP and Enterprise Data to Hadoop, Kafka, and the Cloud (DataWorks Summit)
This document discusses how organizations can leverage data and analytics to power their business models. It provides examples of Fortune 100 companies that are using Attunity products to build data lakes and ingest data from SAP and other sources into Hadoop, Apache Kafka, and the cloud in order to perform real-time analytics. The document outlines the benefits of Attunity's data replication tools for extracting, transforming, and loading SAP and other enterprise data into data lakes and data warehouses.
SAP HANA SPS10: Text Analysis & Text Mining (SAP Technology)
The document describes new features and improvements in SAP HANA SPS 10 for text analysis and text mining. Key updates include a new text analysis XS API, improved performance for text preprocessing steps, addition of grammatical role analysis and metadata extraction, expanded language support for Polish and Chinese, and new text mining SQL extensions and configuration options. The document provides details on each new feature and where to find additional documentation on SAP's website.
Ingesting Data at Blazing Speed Using Apache ORC (DataWorks Summit)
Big SQL is a SQL engine for Hadoop that excels at performance and scalability under high concurrency. Big SQL complements and integrates with Apache Hive for both data and metadata. An architecture that separates compute from storage allows Big SQL to support multiple open data formats natively. Until recently, Parquet provided a significant performance advantage over other data formats for SQL on Hadoop. The landscape changed when ORC became a top-level Apache project independent from Hive. Gone were the days of reading ORC files using slow, single-row-at-a-time Hive SerDes. The new vectorized APIs in the Apache ORC libraries make it possible to ingest ORC data at blazing speed. This talk is about the journey leading to ORC taking the crown of best-performing data format for Big SQL away from Parquet. We'll look under the hood at the architecture of Big SQL's ORC readers and how to tune them. We'll share lessons learned in walking the fine line between maximizing performance at scale and avoiding dreaded Java OOMs. You'll learn the techniques that SQL engines use for fast data ingestion, so that you can leverage the full potential of Apache ORC in any application.
Speaker:
Gustavo Arocena, Big Data Architect, IBM
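As a sketch of the setup the talk describes, the following hypothetical Big SQL statements create an ORC-backed Hadoop table, which the vectorized ORC readers can then scan batch-at-a-time; all table and column names are illustrative.

```sql
-- Hypothetical Big SQL sketch: a Hadoop table stored as ORC
-- (names illustrative, not from the source).
CREATE HADOOP TABLE sales_orc (
    order_id BIGINT,
    region   VARCHAR(32),
    amount   DOUBLE
)
STORED AS ORC;

-- Populate from an existing text-format table; subsequent scans read
-- the columnar ORC data instead of row-at-a-time SerDe output.
INSERT INTO sales_orc
SELECT order_id, region, amount FROM sales_text;

SELECT region, SUM(amount) FROM sales_orc GROUP BY region;
```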
SAP HANA Dynamic Tiering is a new feature in SAP HANA SPS09 that allows for management of "hot" and "warm" data within a single SAP HANA database. It addresses the need to cost effectively manage very large and growing datasets for analytics while maintaining performance. Data is automatically tiered between the in-memory "hot store" and disk-based "warm store" based on priority and access frequency. This provides a single database solution for both real-time and less frequently accessed data within SAP HANA.
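A minimal sketch of the hot/warm split in HANA SQL, assuming dynamic tiering is installed; the `USING EXTENDED STORAGE` clause places a table in the disk-based warm store, while a column table stays in memory. All table and column names are hypothetical.

```sql
-- Hypothetical sketch: warm-store table in extended storage
-- alongside a hot, in-memory column table (names illustrative).
CREATE TABLE "SALES_HISTORY" (
    "ORDER_ID" BIGINT,
    "SOLD_ON"  DATE,
    "AMOUNT"   DECIMAL(15,2)
) USING EXTENDED STORAGE;

CREATE COLUMN TABLE "SALES_CURRENT" (
    "ORDER_ID" BIGINT,
    "SOLD_ON"  DATE,
    "AMOUNT"   DECIMAL(15,2)
);

-- One database, one SQL surface over both tiers.
SELECT SUM("AMOUNT") FROM "SALES_CURRENT"
UNION ALL
SELECT SUM("AMOUNT") FROM "SALES_HISTORY";
```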
This document discusses Talend's integration capabilities with Cloudera's Distribution including Hadoop (CDH). It highlights Talend's ability to connect external data sources to Hadoop and HDFS, leverage MapReduce in Talend job design, and provides an overview of Talend's Hadoop integration features such as graphical flow design, connecting over 450 data sources to Hadoop, processing data inside Hadoop using HiveQL and Pig, and mass importing/exporting between Hadoop and relational databases.
Hadoop-DS: Which SQL-on-Hadoop Rules the Herd (IBM Analytics)
Originally Published on Oct 27, 2014
An overview of IBM's audited Hadoop-DS benchmark comparing IBM Big SQL, Cloudera Impala, and Hortonworks Hive for performance and SQL compatibility. For more information, visit: http://www-01.ibm.com/software/data/infosphere/hadoop/
The document discusses Seagate's plans to integrate hard disk drives (HDDs) with flash storage, systems, services, and consumer devices to deliver unique hybrid solutions for customers. It notes Seagate's annual revenue, employees, manufacturing plants, and design centers. It also discusses Seagate exploring the use of big data analytics and Hadoop across various potential use cases and outlines Seagate's high-level plans for Hadoop implementation.
This document provides an overview of Hadoop and its ecosystem. It discusses the evolution of Hadoop from version 1 which focused on batch processing using MapReduce, to version 2 which introduced YARN for distributed resource management and supported additional data processing engines beyond MapReduce. It also describes key Hadoop services like HDFS for distributed storage and the benefits of a Hadoop data platform for unlocking the value of large datasets.
YARN: The Key to Overcoming the Challenges of Broad-Based Hadoop Adoption (DataWorks Summit)
The document discusses how YARN (Yet Another Resource Negotiator) in Hadoop 2.0 overcomes challenges to broad adoption of Hadoop by allowing applications to directly operate on Hadoop without needing to generate MapReduce code. It introduces RedPoint as a YARN-compliant data management tool that brings together big and traditional data for data integration, quality, and governance tasks in a graphical user interface without coding. RedPoint executes directly on Hadoop using YARN to make data management easier, faster and lower cost compared to previous MapReduce-based options.
Today, as data volumes grow and arrive in heterogeneous forms, there is a growing need for a flexible, adaptable, efficient, and cost-effective integration platform that minimizes on-boarding time and interoperates with any number of platforms. Talend fits well in this space with a proven track record, so learning Talend makes sense for anybody working with data.
If you understand how to manage, transform, and store your organization's data (retail, banking, airlines, research, insurance, cards, etc.) and represent it effectively, which is the backbone of any successful MIS, reporting, or dashboard system, then you are the kind of person organizations most seek.
SAP HANA Platform SPS 11 Introduces New SAP HANA Hadoop Integration Features (Avinash Kumar Gautam)
SAP HANA Platform SPS 11 introduces new features that improve integration with Hadoop, including enhanced performance and SQL support in SAP HANA Spark Controller 1.5, along with new authentication options. It also extends integration by supporting bi-directional data relocation between SAP HANA and Hadoop using Data Lifecycle Manager and enabling in-memory processing on Hadoop clusters with SAP HANA Vora.
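The Spark Controller integration mentioned above is again surfaced through SDA remote sources. A hypothetical sketch, with host, port, and credentials purely illustrative:

```sql
-- Hypothetical sketch: registering a Hadoop cluster through the
-- SAP HANA Spark Controller as an SDA remote source
-- (server, port, and credentials are illustrative).
CREATE REMOTE SOURCE "SPARK_HADOOP" ADAPTER "sparksql"
  CONFIGURATION 'port=7860;server=sparkcontroller.example.com'
  WITH CREDENTIAL TYPE 'PASSWORD' USING 'user=hanaes;password=***';
```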
Open innovation and collaboration between IBM and other technology companies is fueling advances in cloud computing, big data analytics, and software development. This includes contributions to open source projects like Linux as well as partnerships through organizations like the OpenPOWER Foundation. New systems based on IBM's Power architecture and optimized for Linux are helping customers improve the performance and efficiency of their analytics, database, and application workloads.
Big SQL provides an SQL interface for querying data stored in Hadoop. It uses a new query engine derived from IBM's database technology to optimize queries. Big SQL allows SQL users easy access to Hadoop data through familiar SQL tools and syntax. It supports creating and loading tables, standard SQL queries including joins and subqueries, and integrating Hadoop data with external databases in a single query.
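As a sketch of the kind of standard SQL described above, the following hypothetical query joins a Hadoop-resident table with a dimension table and uses a subquery; all table and column names are illustrative.

```sql
-- Hypothetical sketch: standard SQL (join + subqueries) over a
-- Hadoop-resident fact table (names illustrative).
SELECT c.name, t.total
FROM customers c
JOIN (SELECT cust_id, SUM(amount) AS total
      FROM hadoop_sales
      GROUP BY cust_id) t
  ON c.cust_id = t.cust_id
WHERE t.total > (SELECT AVG(amount) FROM hadoop_sales);
```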
Big Data, Big Thinking: Simplified Architecture Webinar Fact Sheet (SAP Technology)
This webinar discusses how to simplify IT architecture for handling big data. It explains that SAP's HANA platform allows consolidating transactional and analytical systems onto one platform to process and deliver data in real-time. The webinar also outlines the benefits of Cloudera's Hadoop working with SAP HANA, including keeping historical or unstructured IoT data in Hadoop without duplicating it, and enhancing security and performance through Intel partnerships.
The document summarizes the Cask Data Application Platform (CDAP), which provides an integrated framework for building and running data applications on Hadoop and Spark. It consolidates the big data application lifecycle by providing dataset abstractions, self-service data, metrics and log collection, lineage, audit, and access control. CDAP has an application container architecture with reusable programming abstractions and global user and machine metadata. It aims to simplify deploying and operating big data applications in enterprises by integrating technologies like YARN, HBase, Kafka and Spark.
This document summarizes a presentation about managing Apache HAWQ, an open source massively parallel processing (MPP) database, using Apache Ambari. It discusses how Ambari integrates with HAWQ for installation, configuration, topology recommendations, high availability, alerts and more. Challenges in the integration are addressed as HAWQ is not part of the Hortonworks Data Platform stack. The presentation recommends future work for Ambari like supporting automated HAWQ upgrades and enabling dynamic configuration reloads without requiring a service restart.
This document demonstrates using Hadoop, R, and Google Chart Tools for data visualization. It describes preparing the environment by installing necessary software. It then walks through writing an R script to analyze birth data on HDFS using MapReduce. The results are loaded into a Shiny application which renders interactive visualizations using the googleVis package. This showcases an end-to-end workflow for analyzing large datasets with R on Hadoop and visualizing the results.
The document summarizes several popular options for SQL on Hadoop including Hive, SparkSQL, Drill, HAWQ, Phoenix, Trafodion, and Splice Machine. Each option is reviewed in terms of key features, architecture, usage patterns, and strengths/limitations. While all aim to enable SQL querying of Hadoop data, they differ in support for transactions, latency, data types, and whether they are native to Hadoop or require separate processes. Hive and SparkSQL are best for batch jobs while Drill, HAWQ and Splice Machine provide lower latency but with different integration models and capabilities.
This document contains structural design calculations for a residential building. It includes the design of slabs, beams, and stairs. For the slab, load calculations are shown to determine the factored load on the slab of 0.211 ksf. Bending moment coefficients are identified for calculating slab moments. Reinforcement for the slab is designed to meet minimum requirements in the ACI code. Beam sizing, load calculations, and design are also presented. Stair details like riser height and tread depth are provided. References cited include concrete design textbooks.
The document provides a review of Mt. Lebanon's proposed 2016 budget. It analyzes the budget based on the Government Finance Officers Association's guidelines for distinguished budget presentations. The review finds that while the budget adequately describes organizational units and programs, it is lacking in areas like performance measures, long-term financial planning, debt information, and process descriptions. The review provides recommendations for how to improve the budget based on comparisons to award-winning budgets from Carlisle and Lower Merion. The goal is for the analysis to help Mt. Lebanon strengthen its budgeting practices and better communicate budget information to policymakers and the community.
El documento presenta 5 estilos diferentes de bolsos y zapatos, incluyendo estampados de cebra y guepardo, negro, rojo pasión y verde. Describe cada estilo y ofrece ejemplos de cómo combinar los accesorios, desde looks arriesgados hasta más clásicos.
This document contains a resume for Sukumar T. with over 5.5 years of experience in manual testing for healthcare, web, and mobile applications. He currently works as a senior quality analyst at Emvigo Technologies testing web and mobile applications. Previous experience includes testing Picture Archiving and Communication Systems for Merge Healthcare as an associate quality assurance engineer, and testing banking and mobile applications for CitiBank and Xerago.
El documento argumenta que la educación virtual debe articular de manera consciente el aprendizaje autónomo para potenciar las competencias y el pensamiento crítico de los estudiantes. El aprendizaje autónomo requiere elementos como la tecnología, la investigación y contenidos significativos para que los estudiantes dirijan su propio proceso de aprendizaje y se conviertan en sujetos activos.
Microsoft Publisher es un programa fácil de usar para crear boletines, folletos, páginas web y otros materiales de marketing de forma profesional. Aunque tiene una cuota de mercado pequeña dominada por InDesign y QuarkXPress, las versiones recientes tienen mayores capacidades como exportar a PDF e incrustar fuentes. Publisher ha estado disponible desde 1991 y sigue incluyéndose en algunas versiones de Microsoft Office.
Dokumen tersebut membahas tentang teori-teori perkembangan kematangan manusia, yang mencakup 5 fase perkembangan mulai dari masa pra lahir hingga dewasa. Teori-teori tersebut meliputi pandangan biologis dan psikologis tentang pertumbuhan fisik, intelektual, kognitif, dan emosional manusia sepanjang hayat. Dibahas pula faktor-faktor yang mempengaruhi perkembangan seperti genetika, lingkungan, dan pro
This PowerPoint is one small part of the Astronomy Topics unit from www.sciencepowerpoint.com. This unit consists of a five part 3000+ slide PowerPoint roadmap, 12 page bundled homework package, modified homework, detailed answer keys, 8 pages of unit notes for students who may require assistance, follow along worksheets, and many review games. The homework and lesson notes chronologically follow the PowerPoint slideshow. The answer keys and unit notes are great for support professionals. The activities and discussion questions in the slideshow and meaningful. The PowerPoint includes built-in instructions, visuals, and follow up questions. Also included are critical class notes (color coded red), project ideas, video links, and review games. This unit also includes four PowerPoint review games (110+ slides each with Answers), 38+ video links, lab handouts, activity sheets, rubrics, materials list, templates, guides, and much more. Also included is a 190 slide first day of school PowerPoint presentation. Teaching Duration = 5+ weeks. Areas of Focus in the Astronomy Topics Unit: The Solar System and the Sun, Order of the Planets, Our Sun, Life Cycle of a Star, Size of Stars, Solar Eclipse, Lunar Eclipse, The Inner Planets, Mercury, Venus, Earth, Moon, Craters, Tides, Phases of the Moon, Mars and Moons, Rocketry, Asteroid Belt, NEOs, The Torino Scale, The Outer Planets and Gas Giants, Jupiter / Moons, Saturn / Moons, Uranus / Moons, Neptune / Moons, Pluto's Demotion, The Kuiper Belt, Oort Cloud, Comets / Other, Beyond the Solar System, Types of Galaxies, Blackholes, Extrasolar Planets, The Big Bang, Dark Matter, Dark Energy, The Special Theory of Relativity, Hubble Space Telescope, Constellations, Spacetime and much more. If you have any questions please feel free to contact me. Thanks again and best wishes. Sincerely, Ryan Murphy M.Ed www.sciencepowerpoint@gmail.com
Shivansh Bhatnagar is an SAP ABAP professional with over 3.7 years of experience developing and supporting SAP R/3 systems. He has experience in various industries including manufacturing, chemicals, and coatings. Some of his responsibilities have included requirements analysis, technical specification design, coding, testing, and performance tuning. He has knowledge of various SAP modules including SD, FI, MM, and PP.
Patcha Naga Swapna has 9 years of experience in SAP/ABAP development. She has participated in 6 implementations and 6 support projects. She is highly skilled in debugging, testing, and preparing technical specifications. Her most recent role was as Team Lead for a project with Sigma Aldrich where she designed an interface to update physical sample data between ECC and EWM systems.
Pravin Murarkar is an SAP ABAP/4 professional with over 2 years of experience working as a consultant. He has experience implementing SAP R/3 modules including reporting, internal tables, ALV reports, smart forms, BDC, and custom ABAP programming. Currently he is working as an associate consultant for Control Tech Private Limited in Pune on an SAP R/3 ECC6.0 implementation project for Spentex Industries Ltd. in Baramati. He is seeking a challenging career opportunity where he can contribute his SAP and software development skills.
The document contains the professional profile and work experience of Goutam Sahoo. It summarizes that he has over 8 years of experience working with SAP modules like MM, SD, PP, QM, PM, FI, CO, FM, HR and SRM. Currently he is working as an ABAP Lead on an HANA migration project. It also lists his educational qualifications and work history with various employers over the past decade.
Himanshu Bhatia is a senior software engineer with over 5 years of experience in project management, solution architecture, and production support. He is currently working at Sopra Steria India as a senior software engineer. He has successfully led projects for clients such as CAPITA, AT&T, and Gucci. Bhatia is proficient in technologies like Oracle, PL/SQL, and SQL and aims to take on senior leadership roles in IT project management or solution architecture.
SILADITYA CHATTERJEE is a techno functional consultant with over 10 years of experience, including 4 years of experience with SAP modules SD and MM. He has expertise in ABAP, reports, interfaces, conversions and more. Currently he is the business development head at Sunshell Power and has previously held roles at CRS Private Limited, Swarna Technology Pvt. Ltd. and more.
S.M. Prasad has over 7 years of experience in SAP FICO and 10 years of experience in finance and accounting for manufacturing industries. He has expertise in financial accounting, taxation, auditing, and implementing, supporting, upgrading, and producing SAP modules such as FI-GL, FI-AP, FI-AR, and integrating them with other modules. He has worked on projects for clients in various industries and countries, taking on roles such as project manager, consultant, and technical lead. Currently he is studying HANA and Simple Finance concepts.
Ankita Jain has over 4 years of experience working as an SAP ABAP consultant. She has extensive experience developing reports, forms, and interfaces in both Core ABAP and Webdynpro. Some of her projects include developing a custom PO invoice form, updating NAV data from an external website, and creating a web application using BAPIs. She is proficient in ABAP, data dictionary objects, RFCs, BAPIs, and debugging. Ankita holds a Bachelor's degree in Computer Science and has received awards for her work.
Anand Gupta has over 13 years of experience as a SAP ABAP consultant. He has extensive experience implementing SAP ERP, S/4HANA, and CRM projects for various clients. Some of his responsibilities include requirements gathering, solution design, development, testing, go-live preparation, and post go-live support. He has strong skills in ABAP, HANA, Fiori, WebIDE, and SAP Cloud Platform.
Leela Munagala has over 15 years of experience with SAP SD and MM modules, including implementations, rollouts, application support, and ABAP development. She has expertise in e-commerce, CRM, warehouse management, logistics execution, and configuration of pricing, variants, BOMs, and revenue accounting. She has worked on projects for clients in various industries like medical, electronics, oil and gas, and more.
Manoj Vazirani has over 3 years of experience as an SAP ABAP consultant. He has worked on implementation projects for clients like Infosys and TCS. Some of his responsibilities include gathering requirements, implementing changes, testing, and writing documentation. He has expertise in various SAP modules like MM, PP, SD and technical skills like ABAP, IDOCs and smart forms. His most recent project with TCS involved migrating manufacturing processes from Italy to the US for ABB.
Mayank Malpani is an SAP ABAP Technical Consultant with almost 6 years of experience working on various SAP implementation and support projects across Europe. He has extensive experience with SAP modules including SD, MM, WM and FI. His technical skills include ABAP programming, forms, interfaces, conversions and more. He is currently working as an IT Analyst for Tata Consultancy Services.
- MM Consultant with over 14 years of industrial experience and 7 years of SAP MM experience. Specializes in materials management, procurement, inventory management, and Indian taxation configurations in SAP.
- Has worked on various implementation and support projects for companies like Dabur India, Abhijeet Group of Industries, GE Water, and Reliance Industries. Responsibilities included requirements gathering, blueprint design, configuration, testing, go-live support and more.
- Expertise in SAP MM, pricing procedures, tax procedures, inventory management, and integration with other SAP modules like SD, FI, and PP. Experienced with various SAP releases from R/3 to ECC 6.0.
Srinivasulu Nettem has over 11 years of experience working with SAP technologies such as ABAP, HANA, BODS, and BO. He has extensive experience in data migration, cutover activities, and developing data models in SAP HANA. Some of his roles include leading data migration projects, developing interfaces between different systems, and conducting testing and documentation. He is proficient in various SAP modules including SD, MM, FI, and CO.
Suresh Sadasivan is a senior SAP consultant with over 16 years of experience implementing and supporting SAP SD, MM, and EDI modules. He has extensive experience leading implementation projects for various industries including high tech, manufacturing, food and beverage, and public sector. Some of his responsibilities have included managing requirement gathering, blueprinting, configuration, testing, go-live preparation and support.
Pravin Murarkar is an SAP ABAP/4 professional with over 2 years of experience working as a consultant. He has expertise in areas like reports, tables, BDC, programming in ABAP, RFC, BAPI, and WebDynpro. Currently working with Controltech as an associate consultant, his responsibilities include creating ALV reports, smart forms, and conducting data conversions from legacy systems to SAP using BDC and flat files. He is seeking a challenging role where he can contribute his SAP and software development skills.
Sudhakar resume 3 - Technical n Functional Project leadSudhakar Reddy
Sudhakar Mediboyina is a senior SAP SCM functional and technical lead with over 22 years of experience implementing SAP SCM, SPP, GATP, and other supply chain modules. He has led 9 full SAP implementations and specializes in demand planning, inventory management, material management, and interface development. Currently he is the technical lead for a Cat Logistics project implementing SCM GATP and deployment modules.
Suresh Kumar Nayak has over 13 years of experience in SAP FI/CO consulting and project management. He has extensive experience implementing and supporting SAP ECC 6 and S/4 HANA projects across various industries. Some of his skills include business process analysis, solution design, project management, testing, training and post go-live support. He holds an M.Com degree from Ranchi University and has experience managing projects teams of up to 20 people.
This document is a resume for Prasad P Mobile that summarizes his professional experience and qualifications. He has over 5 years of overall IT experience including 2.5 years working with Pega Rules Process Commander. His most recent role was as a System Architect at Great Eastern in Singapore where he worked on a CRM application. He also has experience as an ETL Developer and has worked on projects for AIG and The Bank of Nova Scotia.
Chandan Kumar is a SAP ABAP Technical Consultant with over 2.7 years of experience working on offshore development and support projects. He has extensive experience with ABAP programming including reports, forms, BAPIs, user exits, enhancements and more. He currently works for Bigtech Software Pvt. Ltd. and has previously worked for Synergy's Pvt. Ltd. His skills include programming, debugging, performance tuning, and he has worked on various SAP modules including SD, MM, and BASIS. He holds a Bachelor's degree in ECE and has worked on projects for clients such as Takween - Savola Packaging System, TG Kirloskar Automotive Pvt. Ltd., Bh
Arti Patel
18, Abhishek Bunglows, B/H RAJ Farm, Bhat, Gandhinagar-382428, India.
Summary:
Currently working as an SAP MM techno-functional consultant on ECC 6.0 and SAP PI 7.3.
Good communication and interpersonal skills.
Committed to achieving long- and short-term goals.
Self-motivated, dedicated, and result-oriented; a quick learner and team player.
Career Objective:
To achieve job satisfaction and be an efficient team member in a growth-oriented
organization, concentrating on business application programming while remaining
resourceful, innovative, and flexible, so that my skills and experience are utilized to
their fullest potential.
Experience:
6 years of experience as an SAP ABAP techno-functional and SAP PI consultant
Qualification:
B.E. IT (Bachelor of Engineering in Information Technology)
University: North Gujarat University, Patan.
Academic Records:
No. Degree Passing Year Percentage Class
1 B.E. IT July 2007 69% First
2 H.S.C. March 2002 70% First
3 S.S.C. March 2000 83% Distinction
Professional Experience:
Worked as an ABAPer on SAP 3.1H at Arvind Ltd from 17th July 2007 to 30th October
2010; worked at Torrent Pharmaceuticals from 1st September 2010 to 11th February
2016; currently working at Future Group.
Good experience of end-to-end subsidiary rollouts (USA, Philippines, Russia) in SAP
R/3, as well as SAP DMS and SAP TAXINN migration.
Core-level knowledge of end-to-end inbound/outbound IDoc implementation, module
pool programming, BAPI, BAdI, user exits, configuration tables, RFC, BDC, LSMW, the
data dictionary (including append structures), ALV reports, SAPscript, Smart Forms,
screen exits, and authorization objects.
Implemented a system to restrict user logins to SAP to permitted IP addresses
only.
Core knowledge of end-to-end SAP PI 7.3 implementation; sole resource handling
SAP PI 7.3 at Torrent.
Experience of end-to-end implementation of inbound and outbound IDocs.
Implemented QR barcode printing for Mexico CFDI invoicing.
Complete knowledge of SAP PI interface scenarios such as File-to-IDoc, IDoc-to-File,
RFC-to-File and vice versa, IDoc-to-IDoc and vice versa, Proxy-to-File and vice versa, and IDoc-to-Web-Service.
Complete knowledge of enhancing batch characteristics and material characteristics, and of adding
additional tabs to standard transactions such as MM01, XD01, XK01, MIGO, and ME21N.
Knowledge of enhanced search helps.
Implemented substitutions in FI.
Strong knowledge of using field symbols in standard transactions.
Good experience of EHP 6 implementation at Torrent Pharmaceuticals Ltd, with
expertise in clearing SPAU and SPDD objects.
Good experience integrating JDA with SAP R/3.
Good experience of tight interface integration with SAP R/3.
Good experience of as-is process understanding, process mapping, and preparing
business blueprints for the MM and SD modules.
Strong post-implementation support.
Knowledge of using SAP's Generic Object Services (GOS).
SAP MM functional knowledge: configuration of release strategies, storage
locations, FTXP, material types, number ranges, purchasing organizations, and
purchasing groups.
Good knowledge of the MM module, including creating material types, material
groups, and material number ranges; release strategy configuration; pricing
procedures; tax code creation; activating split valuation; storage locations; etc.
Knowledge of J1ID functionality and its impact on pricing.
Knowledge of the procure-to-pay cycle.
Trained end users on the procure-to-pay cycle.
SAP Project Details
Period: 1st Jan 2015 to 1st March 2015
Project Details: Subsidiary Rollout at Mexico
Company: Torrent Pharmaceuticals Ltd (Mexico subsidiary)
Implemented By: In-house
Role: SAP PI Consultant / ABAP Lead
Environment: SAP R/3 ECC 6.0, SAP PI 7.3
Responsibility: Involved in end-to-end requirement gathering for the CFDI invoice verification process,
understanding the CFDI process, and mapping the CFDI process in SAP PI as well as in ECC 6.0
for the project. Explored the following new technologies and functionality to complete the electronic
invoicing (CFDI) process: digital signatures, end-to-end SAP PI mapping, 2D QR codes,
XML and PDF attachments to the object, and IDoc generation from SAP.
SAP Project Details
Period: 1st August 2015 to 1st Jan 2015
Project Details: SAP PI 7.3
Company: Torrent Pharmaceuticals Ltd
Implemented By: TCS
Role: Technical Lead / Project Leader
Environment: SAP PI 7.3
Responsibility: Involved in end-to-end requirement gathering, BBP preparation, process mapping,
development changes, carrying out SIT, IDoc implementation, proxy setup, RFC implementation, web
service calls, WSDL creation, and proxy implementation on the ABAP side; successfully managed to
implement all the legacy interfaces over SAP PI 7.3.
SAP Project Details
Period: 1st April 2015 to 1st May 2015
Project Details: SAP DMS (Document Management System)
Company: Torrent Pharmaceuticals Ltd
Implemented By: In-house
Role: Technical Lead
Environment: SAP R/3 ECC 6.0
Responsibility: Involved in gathering end-to-end information for the procure-to-pay cycle, starting from
the vendor RFQ. Carried out end-to-end configuration of SAP DMS for saving vendor RFQ documents; also developed a tool
to create RFQs automatically from multiple purchase requisitions and send them to vendors along with the data stored in DMS.
SAP Project Details
Period: 1st March 2014 to 1st May 2014
Company: Elder Pharmaceuticals Ltd (new division of Torrent)
Implemented By: In-house
Team Size: 3 onsite, 1 offshore
Environment: SAP R/3 ECC 6.0
Role: ABAP Lead
Responsibility: Implemented the split valuation concept for traded and in-house products; implemented
manufacturing plant and location in batch classification, with business area and assignment category
derived from manufacturing plant and location; updated existing batch classifications.
SAP Project Details
Period: 1st September 2013 to 1st February 2014
Company: Torrent Pharmaceuticals Ltd USA (subsidiary of Torrent Pharmaceuticals)
Implemented By: In-house
Team Size: 3 onsite, 2 offshore
Environment: SAP R/3 ECC 6.0
Role: ABAP Lead
Responsibility: Involved in requirement gathering, understanding the current process, and mapping the
interface according to business requirements. Developed a complete interface from purchase order creation
through invoicing; developed a module pool for reprocessing; called a web service from SAP R/3 for the
courier tracking service to update the POD flag directly in SAP; added an additional tab in the material
master. Good knowledge of the end-to-end process.
SAP Project Details
Period: 1st November 2012 to 1st January 2013
Company: Torrent Pharmaceuticals Ltd Philippines (subsidiary of Torrent Pharmaceuticals)
Implemented By: In-house
Team Size: 3 onsite, 2 offshore
Environment: SAP R/3 ECC 6.0
Role: ABAP Lead
Responsibility: Involved in requirement gathering, understanding the current process, and mapping the
interface according to business requirements. Developed a complete interface from purchase order creation
through invoicing; developed a module pool for reprocessing; called a web service from SAP R/3 for the
courier tracking service to update the POD flag directly in SAP. Good knowledge of the end-to-end process.
Developed a module pool to generate and extend the material master.
SAP Project Details
Period: 1st April 2013 to 1st July 2013
Company: Torrent Pharmaceuticals Ltd Russia (subsidiary of Torrent Pharmaceuticals)
Implemented By: In-house
Team Size: 3 onsite, 2 offshore
Environment: SAP R/3 ECC 6.0
Role: ABAP Lead
Responsibility: Developed a module for the procure-to-pay cycle; developed layouts per Russian
government requirements; enhanced the MIGO BAdI to capture certificate and declaration numbers; also
enhanced batch classification.
SAP Project Details
Period: 1st August 2012 to 1st March 2013
Company: Torrent Pharmaceuticals Ltd (remapping of the release strategy in purchase
orders / modifying tolerance limits)
Team Size: 2
Environment: SAP R/3 ECC 6.0
Role: ABAPer
Responsibility: Understood the organization's DOA and mapped the release strategy and
re-release strategy accordingly: if a purchase order changes after release, it goes to the next higher
authority for re-release.
Redesigned all purchase order layouts; enhanced BAdIs and user exits to set tolerance limits according to
quantity.
SAP Project Details
Period: 1st February 2012 to 1st May 2013
Company: Torrent Pharmaceuticals Ltd (implementing ARE1/ARE2 and the physical
inventory process)
Team Size: 2
Environment: SAP R/3 ECC 6.0
Role: ABAPer
Responsibility: Requirement gathering and process mapping of excise and ARE1/ARE2 in the standard SAP
transactions J1IA101 and J1IIN; enhanced the J1IIN transaction for handling number series; designed
layouts for the ARE1 and ARE2 forms; developed a program for physical inventory counting.
SAP Implementation Project 1 at Arvind Ltd:
Period: 1st January 2009 to 1st April 2010
Company: Arvind Ltd, Ahmedabad
Team Size: 2
Environment: SAP ECC 6.0
Role: SAP ABAP/4 Developer
Responsibility: ABAP developments such as static/interactive/ALV reports, layouts,
creating database tables starting with Z, updating tables starting with Z,
developing payment advice programs, data uploads, BDC, etc.
OTHER DETAILS:
Name: Arti Patel
Address: 18, Abhishek Bunglows, B/H RAJ Farm, Bhat, Gandhinagar-382428, Gujarat, INDIA
Phone (R): +91-079-23969272
Phone (M): +91-9909075464
Email: Patelarti85@yahoo.co.in
Date of Birth: 4th Feb 1985