RAMA PRASAD OWK
ETL/Hadoop Developer
Email: ramprasad1261@gmail.com
Cell: +1 248 525 3827
_______________________________________________________________________________________________
Professional Summary
Overall 6+ years of IT experience in systems analysis, design, development, and implementation of data warehouses as an ETL Developer using IBM DataStage, and as a Hadoop Developer.
▪ Involved in designing, developing, documenting, and testing ETL jobs and mappings in Server and Parallel
jobs, using DataStage to populate tables in data warehouses and data marts.
▪ Expertise in using Hadoop ecosystem components such as MapReduce, Pig, Hive, HBase, Sqoop, Spark, and
Spark Streaming for data storage and analysis.
▪ Experienced in troubleshooting errors in the HBase shell/API, Pig, Hive, and MapReduce.
▪ Highly experienced in importing and exporting data between HDFS and relational database management
systems using Sqoop.
▪ Collected log data from various sources and integrated it into HDFS using Flume.
▪ Good experience in generating statistics, extracts, and reports from Hadoop.
▪ Experience in writing Spark applications using spark-shell, pyspark, and spark-submit (see the spark-submit
sketch after this summary).
▪ Developed prototype Spark applications using Spark Core, Spark SQL, and the DataFrame API.
▪ Experience writing Python scripts in Hadoop environments.
▪ Capable of processing large sets of structured, semi-structured, and unstructured data using Hadoop.
▪ Proficient in developing Extraction, Transformation, and Loading (ETL) strategies.
▪ Good knowledge of designing Parallel jobs using stages such as Join, Merge, Lookup, Remove
Duplicates, Filter, Data Set, Lookup File Set, Complex Flat File, Modify, and Aggregator.
▪ Designed Server jobs using stages such as Sequential File, ODBC, Hashed File, Aggregator,
Transformer, Sort, Link Partitioner, and Link Collector.
▪ Expert in working with DataStage Designer and Director.
▪ Experience in analyzing the data generated by the business process, defining granularity, performing
source-to-target mapping of data elements, and creating indexes and aggregate tables for data warehouse
design and development.
▪ Good knowledge of studying data dependencies using metadata stored in the repository, and prepared
batches for existing sessions to facilitate scheduling of multiple sessions.
▪ Proven track record in troubleshooting DataStage jobs and addressing production issues such as performance
tuning and enhancement.
▪ Experienced in UNIX shell scripting for process automation and in scheduling DataStage jobs using
wrappers.
▪ Involved in unit testing, system integration testing, implementation, and maintenance of database jobs.
▪ Effective in cross-functional and global environments, managing multiple tasks and assignments concurrently
with strong communication skills.
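A minimal sketch of the spark-submit usage referenced in the summary above; the application script, paths, and resource sizes are hypothetical placeholders:

# Submit a PySpark application to a YARN cluster in cluster mode.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --num-executors 4 \
  --executor-memory 4g \
  daily_extract.py /data/input /data/output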
Technical Skill Set
ETL Tools: IBM WebSphere DataStage and QualityStage 8.1, 8.5, 9.1 & 11.5; Ascential DataStage 7.5
Big Data Ecosystem: Hadoop, MapReduce, HDFS, Hive, Sqoop, Pig, Spark, HBase
Databases: Oracle, Greenplum, Teradata
Tools and Utilities: SQL Developer, Teradata SQL Assistant, pgAdmin III
FTP Tools: Tumbleweed, Sterling Commerce
Programming Languages: Python, SQL, UNIX Shell Script
Operating Systems: Windows XP, Linux
Education
▪ Bachelor of Technology in Information Technology.
Professional Certification
▪ IBM Certified Solution Developer - InfoSphere DataStage v8.5, certified on 09/26/2013.
Experience Summary
Client: United Services Automobile Association July 2016 – Present
Role: DataStage/Hadoop Developer
The United Services Automobile Association (USAA) is a Texas-based Fortune 500 diversified financial services
group of companies including a Texas Department of Insurance regulated reciprocal inter-insurance exchange and
subsidiaries offering banking, investing, and insurance to people and families that serve, or served, in the United States
military. At the end of 2015, there were 11.4 million members.
USAA was founded in 1922 by a group of U.S. Army officers as a mechanism for mutual self-insurance when they
were unable to secure auto insurance because of the perception that they, as military officers, were a high-risk group.
USAA has since expanded to offer banking and insurance services to past and present members of the Armed Forces,
officers and enlisted, and their immediate families.
The Solvency project enables the team to build reports in Business Objects that can be used for business
analysis.
Responsibilities:
 Involved in understanding data modeling documents along with the data modeler.
 Developed Sqoop scripts to extract data from DB2 EDW source databases into HDFS.
 Developed custom MapReduce jobs to perform data cleanup, transform data from text to Avro, and write
output directly into Hive tables by generating dynamic partitions.
 Developed custom FTP and SFTP drivers to pull flat files from UNIX and Windows into Hadoop and to
tokenize identified sensitive data from input records on the fly, in parallel.
 Developed custom InputFormat, RecordReader, Mapper, Reducer, and Partitioner classes as part of developing
end-to-end Hadoop applications.
 Developed a custom Sqoop tool to import data residing in any relational database, tokenize identified
sensitive columns on the fly, and store the data in Hadoop.
 Worked with the HBase Java API to populate an operational HBase table with key-value data.
 Wrote Spark applications using spark-shell, pyspark, and spark-submit.
 Developed several custom user-defined functions (UDFs) in Hive and Pig using Java and Python.
 Developed Sqoop jobs to perform full and incremental imports of data from relational tables into Hadoop in
formats such as text, Avro, and SequenceFile, and into Hive tables (see the Sqoop sketch after this section).
 Developed Sqoop jobs to export data from Hadoop to relational tables for visualization and report
generation by the BI team.
 Developed ETL jobs as per business rules using the ETL design document.
 Created reusable jobs that can be used across the project.
 Enhanced job reusability by building and deploying shared containers and multiple job instances.
 Extensively used Control-M for automated job scheduling on daily, bi-weekly, weekly, and monthly bases
with proper dependencies.
 Wrote complex SQL queries using joins, subqueries, and correlated subqueries.
 Performed unit testing and system integration testing by developing and documenting test cases.
Environment: IBM DataStage 9.1 & 11.5, Oracle, Hadoop, Netezza, Control-M Scheduler
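A minimal sketch of the kind of incremental Sqoop import described above, pulling new rows from a DB2 source table into HDFS as Avro; the connection string, table, and column names are hypothetical placeholders:

# Incremental append import: pulls only rows whose CLAIM_ID exceeds the
# supplied last value; a saved Sqoop job would track this value automatically.
sqoop import \
  --connect jdbc:db2://edw-host:50000/EDWDB \
  --username etl_user \
  --password-file /user/etl_user/.db2.pwd \
  --table CLAIMS \
  --target-dir /data/raw/claims \
  --as-avrodatafile \
  --incremental append \
  --check-column CLAIM_ID \
  --last-value 0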
Client: Walmart Jan 2015 – June 2016
Role: DataStage/Hadoop Developer
Walmart is the largest retailer of consumer staples products in the world. It was founded in 1962 by Sam Walton, who
started with the idea of a store that would offer its customers the lowest prices in the market. Walmart was
incorporated in 1969 following the success of Walton’s ‘everyday low prices’ pricing strategy. It currently operates in
27 countries around the world. The company drives growth by expanding its retail area and investing in e-commerce.
Nonetheless, it has underperformed its competitor Costco over the last few years due to Costco’s better customer
service record. Walmart has lost customers to Costco over the years, primarily because it doesn’t pay its employees
well, which has led to demotivation in the workforce and poor customer service.
The EIM Innovations HANA project provides the Data Cafe team with analytical reports built on Tableau, which are
used for further analysis by the business. The data supplied contains tables such as Scan, Item, and Comp_Hist,
holding club- and store-wise details for every hour and on a daily basis.
Responsibilities:
 Involved in understanding data modeling documents along with the data modeler.
 Used Sqoop to import data from DB2 into HDFS on a regular basis.
 Developed scripts and batch jobs to schedule various Hadoop programs.
 Wrote Hive queries for data analysis to meet business requirements (see the Hive sketch after this section).
 Created Hive tables, loaded data into them, and worked with them using HiveQL.
 Imported and exported data between HDFS and Hive using Sqoop.
 Experienced in defining job flows.
 Developed a custom file system plug-in for Hadoop so that it can access files on the Data Platform.
 Extensively worked on DataStage jobs to split bulk data into subsets and dynamically distribute it across all
available processors for the best job performance.
 Developed ETL jobs as per business rules using the ETL design document.
 Converted complex job designs into separate job segments and executed them through the job sequencer for
better performance and easier maintenance.
 Enhanced job reusability by building and deploying shared containers and multiple job instances.
 Imported data residing in the host systems into the data marts built in DB2, Greenplum, etc.
 Wrote scripts to load data into Greenplum and SAP HANA, since the DataStage 8.5 version then in use had
no Greenplum or SAP HANA database plug-ins.
 Extensively used the mainframe-resident CA7 tool for automated job scheduling on daily, bi-weekly,
weekly, and monthly bases with proper dependencies.
 Wrote complex SQL queries using joins, subqueries, and correlated subqueries.
 Performed unit testing and system integration testing by developing and documenting test cases.
 Processed large sets of structured, semi-structured, and unstructured data.
 Performed pre-processing using Hive and Pig.
 Managed and deployed HBase.
Environment: IBM DataStage 9.1, Hadoop, Greenplum, Teradata, SAP HANA, Linux, CA7 Scheduler
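A minimal sketch of the Hive and Sqoop work described above: building a daily-partitioned Hive table, loading one day of data, and exporting a pre-aggregated summary table (assumed to be stored as delimited text) to a relational database for reporting. Table, column, and connection names are hypothetical placeholders:

# Create a daily-partitioned Hive table and load one day of data.
hive -e "
  CREATE TABLE IF NOT EXISTS scan (
    item_id  BIGINT,
    store_id INT,
    qty      INT
  ) PARTITIONED BY (load_date STRING);

  INSERT OVERWRITE TABLE scan PARTITION (load_date = '2016-01-15')
  SELECT item_id, store_id, qty FROM scan_staging;
"

# Export the summary table to the warehouse for the BI team.
sqoop export \
  --connect jdbc:teradata://dw-host/DATABASE=DWDB \
  --username etl_user \
  --table DAILY_SCAN_SUMMARY \
  --export-dir /apps/hive/warehouse/scan_summary \
  --input-fields-terminated-by '\001'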
Client: General Motors Feb 2014 – Dec 2014
Role: DataStage Developer
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
This project, the Part Launch Activity Network (PLAN), was developed to address a business gap in the Global SAP
solution for General Motors (GM) Customer Care and Aftersales (CCA) around the management process for the
release of new part numbers in General Motors.
Responsibilities:
 Involved in understanding business processes and coordinated with business analysts to gather specific user
requirements.
 Used Information Analyzer for column analysis, primary key analysis, and foreign key analysis.
 Extensively worked on DataStage jobs to split bulk data into subsets and dynamically distribute it across all
available processors for the best job performance.
 Developed ETL jobs as per business rules using the ETL design document.
 Converted complex job designs into separate job segments and executed them through the job sequencer for
better performance and easier maintenance.
 Used DataStage maps to load data from source to target.
 Enhanced job reusability by building and deploying shared containers and multiple job instances.
 Imported data residing in the host systems into the data mart built in Oracle 10g.
 Extensively used Autosys for automated job scheduling on daily, bi-weekly, weekly, and monthly bases with
proper dependencies (see the Autosys sketch after this section).
 Wrote complex SQL queries using joins, subqueries, and correlated subqueries.
 Performed unit testing and system integration testing by developing and documenting test cases.
Environment: IBM DataStage 8.5, Oracle 10g, Linux
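A minimal sketch of the Autosys command-line usage implied above; the job name is a hypothetical placeholder:

# Force an immediate start of a scheduled Autosys job, then report its status.
sendevent -E FORCE_STARTJOB -J PLAN_DAILY_LOAD
autorep -J PLAN_DAILY_LOAD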
Client: General Motors April 2013 – Jan 2014
Role: DataStage Developer
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
In this project, pricing data is distributed to several recipients, either to support different systems or to be sold to
agencies that offer it to their customers. These files were previously created manually in Excel or Access and
distributed via e-mail or FTP to the recipients. We developed interfaces to automate the process of sharing data
with multiple recipients.
Responsibilities:
 Worked with DataStage Designer, Manager, and Director.
 Worked with business analysts and DBAs on requirements gathering, analysis, testing, metrics, and project
coordination.
 Involved in extracting data from different data sources such as Oracle and flat files.
 Involved in creating and maintaining Sequencer and batch jobs.
 Created ETL job flow designs.
 Used ETL to load data into the Oracle warehouse.
 Created various standard and reusable jobs in DataStage using active and passive stages such as Sort, Lookup,
Filter, Join, Transformer, Aggregator, Change Capture, Sequential File, and Data Set.
 Involved in developing job sequencing using the Sequencer.
 Used the Remove Duplicates stage to remove duplicates from the data.
 Used Designer and Director to schedule and monitor jobs and to collect performance statistics.
 Extensively worked with database objects including tables, views and triggers.
 Created local and shared containers to facilitate ease of use and reuse of jobs.
 Implemented the underlying logic for Slowly Changing Dimensions (see the SCD sketch after this section).
 Worked with Developers to troubleshoot and resolve issues in job logic as well as performance.
 Documented ETL test plans, test cases, test scripts, and validations based on design specifications for unit,
system, and functional testing; prepared test data for testing, error handling, and analysis.
Environment: IBM DataStage 8.5, Oracle 10g, Linux
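A minimal sketch of the Slowly Changing Dimension (Type 2) pattern referenced above, applied against Oracle via sqlplus; the table and column names are hypothetical placeholders, and credentials are read from the environment:

#!/bin/bash
# Apply Type 2 changes from a staging table to a customer dimension:
# expire changed rows, then insert fresh current versions.
sqlplus -s etl_user/"${ORA_PWD}"@ORCL <<'EOF'
-- Expire the current rows whose tracked attributes have changed.
UPDATE dim_customer d
   SET d.current_flag = 'N',
       d.effective_end = SYSDATE
 WHERE d.current_flag = 'Y'
   AND EXISTS (SELECT 1 FROM stg_customer s
                WHERE s.customer_id = d.customer_id
                  AND s.address <> d.address);

-- Insert new current rows for changed and brand-new customers.
INSERT INTO dim_customer
       (customer_id, address, effective_start, effective_end, current_flag)
SELECT s.customer_id, s.address, SYSDATE, NULL, 'Y'
  FROM stg_customer s
 WHERE NOT EXISTS (SELECT 1 FROM dim_customer d2
                    WHERE d2.customer_id = s.customer_id
                      AND d2.current_flag = 'Y'
                      AND d2.address = s.address);

COMMIT;
EOF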
Client: General Motors Sep 2012 – March 2013
Role: DataStage Developer
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
The GM ADP Payroll project outsources payroll processing to a third-party provider, ADP, which processes the
employee payrolls and sends back data that is used by various downstream payroll-related applications.
Currently, two legacy applications perform the payroll processing - HPS (Hourly Payroll System) and SPS (Salaried
Payroll System) - which involve many manual activities and need a very long time to process the complete payroll
data. With the increase in the number of employees, this process was expected to become more cumbersome.
General Motors therefore decided to decommission the existing HPS/SPS and carry out processing through a
third-party automated application called ADP.
Responsibilities:
 Involved as an ETL Developer during the analysis, planning, design, development, and implementation stages of
projects.
 Prepared data mapping documents and designed the ETL jobs based on the DMD with the required tables in the
dev environment.
 Actively participated in decision-making and QA meetings, and regularly interacted with business analysts
and the development team to gain a better understanding of the business process, requirements, and design.
 Used DataStage as an ETL tool to extract data from source systems and load it into the
Oracle database.
 Designed and developed DataStage jobs to extract data from heterogeneous sources, applied transformation
logic to the extracted data, and loaded it into data warehouse databases.
 Created DataStage jobs using stages such as Transformer, Aggregator, Sort, Join, Merge, Lookup, Data
Set, Funnel, Remove Duplicates, Copy, Modify, Filter, Change Data Capture, Sample, Surrogate Key, Column
Generator, and Row Generator.
 Extensively worked with Join, Lookup (Normal and Sparse), and Merge stages.
 Extensively worked with Sequential File, Data Set, File Set, and Lookup File Set stages.
 Extensively used parallel stages such as Row Generator, Column Generator, Head, and Peek for development
and debugging purposes.
 Used the DataStage Director and its run-time engine to schedule and run the solution, test and debug
its components, and monitor the resulting executable versions on an ad hoc or scheduled basis.
 Converted complex job designs into separate job segments and executed them through the job sequencer for
better performance and easier maintenance.
 Created job sequences.
 Maintained the data warehouse by loading dimensions and facts as part of the project; also worked on various
enhancements to fact tables.
 Created a shell script to run DataStage jobs from UNIX and scheduled the script through a scheduling
tool (see the dsjob sketch after this section).
 Coordinated with team members and administered all onsite and offshore work packages.
 Analyzed performance and monitored work with capacity planning.
 Performed performance tuning by interpreting the run-time statistics of the jobs developed.
 Documented ETL test plans, test cases, test scripts, and validations based on design specifications for unit,
system, and functional testing; prepared test data for testing, error handling, and analysis.
 Participated in weekly status meetings.
Environment: IBM DataStage 8.5, Oracle 10g, Linux
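A minimal sketch of the kind of UNIX wrapper described above: running a DataStage job through the dsjob command line and turning the job status into an exit code a scheduler can act on. The install path, project, job, and parameter names are hypothetical placeholders:

#!/bin/bash
# Source the DataStage engine environment (typical default install path).
. /opt/IBM/InformationServer/Server/DSEngine/dsenv

PROJECT=PAYROLL_PRJ
JOB=LoadEmployeePay

# Run the job and wait; with -jobstatus the exit code reflects the final
# job status (conventionally 1 = ran OK, 2 = finished with warnings).
dsjob -run -jobstatus -param RUN_DATE=$(date +%Y%m%d) "$PROJECT" "$JOB"
STATUS=$?

if [ "$STATUS" -eq 1 ] || [ "$STATUS" -eq 2 ]; then
    echo "Job $JOB completed (status $STATUS)"
    exit 0
else
    echo "Job $JOB failed (status $STATUS)" >&2
    exit 1
fi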
Client: General Motors Feb 2011 – Aug 2012
Role: Team Member
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
This project maintains the Global Strategic Pricing interfaces in the production environment, which determine an
effective pricing structure, including part usage, selling price, demand, warranty, currency exchange rates, etc.
Responsibilities:
 Supported 350 interface specifications developed in different DataStage versions.
 Supported 30 interface specifications, mainly to/from SAP via the R/3 plug-in.
 Analyzed different architecture designs for batch transaction interfaces between
different systems (Mainframe, WebLogic, SAP).
 Strictly observed the GM GIF (Global Integration Foundation) EAI standard for error handling and
audit report generation using CSF (Common Services Framework).
 Developed and deployed MERS in the production environment.
 Met with clients for requirements gathering, issue resolution, and change requests.
 Analyzed issues occurring in production, resolved them in the development and
test environments, and rolled the changes over to production.
 Involved in the SOX audit process.
 Involved in the Change Management process.
 Worked on production support calls.
 Coordinated between the onsite and offshore teams.
Environment: IBM DataStage 8.1 & 8.5, Ascential DataStage 7.5, Oracle 10g, Linux