RAMA PRASAD OWK
ETL/Hadoop Developer
Email: ramprasad1261@gmail.com
Cell: +1 248 525 3827
_______________________________________________________________________________________________
Professional Summary
Overall 6+ years of IT experience in system analysis, design, development and implementation of data warehouses,
working as an ETL developer using IBM DataStage and as a Hadoop developer.
▪ Involved in Designing, Developing, Documenting, Testing of ETL jobs and mappings in Server and Parallel
jobs using Data Stage to populate tables in Data Warehouse and Data marts.
▪ Expertise in using Hadoop ecosystem technologies such as MapReduce, Pig, Hive, HBase, Sqoop, Spark and
Spark Streaming for data storage and analysis.
▪ Experienced in troubleshooting errors in HBase Shell/API, Pig, Hive and MapReduce.
▪ Highly experienced in importing and exporting data between HDFS and relational database management
systems using Sqoop (a minimal sketch follows this summary).
▪ Collected log data from various sources and integrated it into HDFS using Flume.
▪ Good experience in generating statistics, extracts and reports from Hadoop.
▪ Experience in writing Spark applications using spark-shell, pyspark and spark-submit.
▪ Developed prototype Spark applications using Spark Core, Spark SQL and the DataFrame API.
▪ Experience in writing Python scripts in the Hadoop environment.
▪ Capable of processing large sets of structured, semi-structured and unstructured data using Hadoop.
▪ Proficient in developing strategies for Extraction, Transformation and Loading (ETL) mechanism.
▪ Good knowledge of designing parallel jobs using various stages like Join, Merge, Lookup, Remove
Duplicates, Filter, Data Set, Lookup File Set, Complex Flat File, Modify and Aggregator.
▪ Designed Server jobs using various types of stages like Sequential file, ODBC, Hashed file, Aggregator,
Transformer, Sort, Link Partitioner and Link Collector.
▪ Expert in working with Data Stage Designer and Director.
▪ Experience in analyzing the data generated by the business process, defining the granularity, source to target
mapping of the data elements, creating Indexes and Aggregate tables for the data warehouse design and
development.
▪ Good knowledge of studying the data dependencies using metadata stored in the repository and prepared
batches for the existing sessions to facilitate scheduling of multiple sessions.
▪ Proven track record in troubleshooting of Data Stage jobs and addressing production issues like performance
tuning and enhancement.
▪ Experienced in writing UNIX shell scripts for the automation of processes and scheduling DataStage jobs
using wrappers.
▪ Involved in unit testing, system integration testing, implementation and maintenance of databases jobs.
▪ Able to manage multiple tasks and assignments concurrently in cross-functional and global environments,
with effective communication skills.
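
As an illustration of the Sqoop usage described above, the following is a minimal shell sketch of an import from a relational source into HDFS and an export back out. The connection strings, table names and directories are illustrative placeholders, not values from any of the projects below.

#!/bin/bash
# Hypothetical example: pull a DB2 table into HDFS as Avro, then push
# aggregated results back to an Oracle reporting table.
sqoop import \
  --connect jdbc:db2://db2host:50000/EDW \
  --username etl_user -P \
  --table CLAIMS \
  --target-dir /data/raw/claims \
  --as-avrodatafile \
  --num-mappers 4

sqoop export \
  --connect jdbc:oracle:thin:@orahost:1521/RPT \
  --username etl_user -P \
  --table CLAIM_SUMMARY \
  --export-dir /data/curated/claim_summary \
  --input-fields-terminated-by ','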
Technical Skill Set
ETL Tools: IBM WebSphere DataStage and QualityStage 8.1, 8.5, 9.1 & 11.5; Ascential DataStage 7.5
Big Data Ecosystem: Hadoop, MapReduce, HDFS, Hive, Sqoop, Pig, Spark, HBase
Databases: Oracle, Greenplum, Teradata
Tools and Utilities: SQL Developer, Teradata SQL Assistant, pgAdmin III
FTP Tools: Tumbleweed, Sterling Commerce
Programming Languages: Python, SQL, UNIX Shell Script
Operating Systems: Windows XP, Linux
Education
▪ Bachelor of Technology in Information Technology.
Professional Certification
▪ IBM Certified Solution Developer - InfoSphere DataStage v8.5, certified on 09/26/2013.
Experience Summary
Client: United Services Automobile Association July 2016 – Present
Role: DataStage/Hadoop Developer
The United Services Automobile Association (USAA) is a Texas-based Fortune 500 diversified financial services
group of companies including a Texas Department of Insurance regulated reciprocal inter-insurance exchange and
subsidiaries offering banking, investing, and insurance to people and families that serve, or served, in the United States
military. At the end of 2015, there were 11.4 million members.
USAA was founded in 1922 by a group of U.S. Army officers as a mechanism for mutual self-insurance when they
were unable to secure auto insurance because of the perception that they, as military officers, were a high-risk group.
USAA has since expanded to offer banking and insurance services to past and present members of the Armed Forces,
officers and enlisted, and their immediate families.
The Solvency project enables the team to build reports in BusinessObjects that are used for business
analysis.
Responsibilities:
Involved in understanding of data modeling documents along with the data modeler.
Developed Sqoop scripts to extract data from DB2 EDW source databases onto HDFS.
Developed a custom MapReduce job to perform data cleanup, transform data from text to Avro and write
output directly into Hive tables by generating dynamic partitions.
Developed custom FTP and SFTP drivers to pull flat files from UNIX and Windows into Hadoop and tokenize
identified sensitive data from input records on the fly, in parallel.
Developed custom InputFormat, RecordReader, Mapper, Reducer and Partitioner classes as part of developing
end-to-end Hadoop applications.
Developed a custom Sqoop tool to import data residing in any relational database, tokenize identified
sensitive columns on the fly and store them in Hadoop.
Worked on the HBase Java API to populate an operational HBase table with key-value data.
Wrote Spark applications using spark-shell, pyspark and spark-submit (see the sketch after this list).
Developed several custom user-defined functions in Hive and Pig using Java and Python.
Developed Sqoop jobs to perform imports and incremental imports of data from relational tables into Hadoop
in different formats such as text, Avro and sequence files, and into Hive tables.
Developed Sqoop jobs to export data from Hadoop to relational tables for visualization and to generate
reports for the BI team.
Developed ETL jobs as per business rules using ETL design document.
Created the reusable jobs which can be used across the project.
Enhanced the reusability of the jobs by making and deploying shared containers and multiple instances of the
jobs.
Extensively used the Control-M tool for automation of job scheduling on a daily, weekly, bi-weekly and
monthly basis with proper dependencies.
Wrote complex SQL queries using joins, subqueries and correlated subqueries.
Performed Unit testing and System Integration testing by developing and documenting test cases.
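
A minimal sketch of the kind of Spark prototype referenced above, packaged the way it would be launched with spark-submit. The application, paths and column names are illustrative assumptions, not project code.

#!/bin/bash
# Hypothetical example: generate a small PySpark application and submit it on YARN.
cat > claims_rollup.py <<'EOF'
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("claims-rollup").getOrCreate()

# Read data produced by the ingestion jobs (path is illustrative).
df = spark.read.parquet("/data/raw/claims")

# DataFrame-API aggregation written back out for reporting.
(df.groupBy("claim_type")
   .agg(F.count("*").alias("claim_count"))
   .write.mode("overwrite")
   .parquet("/data/curated/claims_by_type"))

spark.stop()
EOF

spark-submit --master yarn claims_rollup.py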
Environment: IBM DataStage 9.1 & 11.5, Oracle, Hadoop, Netezza, Control-M Scheduler
Client: Walmart Jan 2015 – June 2016
Role: DataStage/Hadoop Developer
Walmart is the largest retailer of consumer staples products in the world. It was founded by Sam Walton in 1962, who
started with the idea of a store that would offer its customers the lowest prices in the market. Walmart was
incorporated in 1969 following the success of Walton’s ‘everyday low prices’ pricing strategy. It currently operates in
27 countries around the world. The company drives growth by expansion of its retail area and investments in
e-commerce. Nonetheless, it has underperformed its competitor Costco over the last few years due to Costco's
better customer service record. Walmart has lost customers to Costco over the years, primarily because it
does not pay its employees well, which has led to demotivation in the workforce and poor customer service.
The EIM Innovations HANA project enables the Data Cafe team to build analytical reports in Tableau, which are
used for further analysis by the business. The data supplied contains tables such as Scan, Item, Comp_Hist
and Club, with store-wise details for every hour and on a daily basis.
Responsibilities:
Involved in understanding of data modeling documents along with the data modeler.
Imported data using Sqoop to load data from DB2 to HDFS on a regular basis.
Developed scripts and batch jobs to schedule various Hadoop programs.
Wrote Hive queries for data analysis to meet the business requirements (a HiveQL sketch follows this list).
Created Hive tables and worked on them using HiveQL.
Imported and exported data into HDFS and Hive using Sqoop.
Experienced in defining job flows.
Involved in creating Hive tables, loading data and writing Hive queries.
Developed a custom file system plug-in for Hadoop so that it can access files on the data platform.
Extensively worked on DataStage jobs for splitting bulk data into subsets and to dynamically distribute to all
available processors to achieve best job performance.
Developed ETL jobs as per business rules using ETL design document
Converted complex job designs to different job segments and executed through job sequencer for better
performance and easy maintenance.
Enhanced the reusability of the jobs by making and deploying shared containers and multiple instances of the
jobs.
Imported the data residing in the host systems into the data marts developed in DB2, Greenplum, etc.
Wrote scripts to upload the data into Greenplum and SAP HANA, since DataStage 8.5 does not provide
Greenplum or SAP HANA database plug-ins.
Extensively used the CA7 tool, which resides on the mainframe, for automation of job scheduling on a daily,
weekly, bi-weekly and monthly basis with proper dependencies.
Wrote complex SQL queries using joins, subqueries and correlated subqueries.
Performed Unit testing and System Integration testing by developing and documenting test cases.
Processed large sets of structured, semi-structured and unstructured data.
Performed pre-processing using Hive and Pig.
Managed and deployed HBase.
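
To make the Hive work above concrete, here is a minimal HiveQL sketch run from the shell. The Scan table layout is an assumption based on the table names mentioned in the project description, not the actual schema.

#!/bin/bash
# Hypothetical example: define a partitioned table over ingested data and
# run an analysis query with hive -e.
hive -e "
CREATE EXTERNAL TABLE IF NOT EXISTS scan (
  item_nbr  BIGINT,
  store_nbr INT,
  scan_qty  DECIMAL(18,2)
)
PARTITIONED BY (scan_date STRING, scan_hour INT)
STORED AS ORC
LOCATION '/data/edw/scan';

-- Top stores by scanned quantity for one day.
SELECT store_nbr, SUM(scan_qty) AS total_qty
FROM scan
WHERE scan_date = '2016-01-15'
GROUP BY store_nbr
ORDER BY total_qty DESC
LIMIT 10;
"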
Environment: IBM DataStage 9.1, Hadoop, Greenplum, Teradata, SAP HANA, Linux, CA7 Scheduler
Client: General Motors Feb 2014 – Dec 2014
Role: DataStage Developer
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
This project, the Part Launch Activity Network (PLAN), was developed to address a business gap in the Global
SAP solution for General Motors (GM) Customer Care and Aftersales (CCA) around the management process for
the release of new part numbers.
Responsibilities:
Involved in understanding of business processes and coordinated with business analysts to get specific user
requirements.
Used Information Analyzer for Column Analysis, Primary Key Analysis and Foreign Key Analysis.
Extensively worked on DataStage jobs for splitting bulk data into subsets and to dynamically distribute to all
available processors to achieve best job performance.
Developed ETL jobs as per business rules using ETL design document
Converted complex job designs to different job segments and executed through job sequencer for better
performance and easy maintenance.
Used DataStage maps to load data from source to target.
Enhanced the reusability of the jobs by making and deploying shared containers and multiple instances of the
jobs.
Imported the data residing in the host systems into the data mart developed in Oracle 10g.
Extensively used Autosys for automation of scheduling jobs on daily, bi-weekly, weekly monthly basis with
proper dependencies.
Wrote complex SQL queries using joins, subqueries and correlated subqueries (see the sketch after this list).
Performed Unit testing and System Integration testing by developing and documenting test cases.
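
As an example of the correlated-subquery style mentioned above, here is a minimal sketch run through sqlplus. The part tables and columns are hypothetical, not from the PLAN data model.

#!/bin/bash
# Hypothetical example: find source parts newer than the warehouse copy,
# i.e. rows the next ETL load must pick up.
sqlplus -s etl_user/"$DB_PASS"@PLANDB <<'SQL'
SELECT p.part_nbr, p.release_dt
FROM   src_part p
WHERE  p.release_dt > (SELECT MAX(w.release_dt)
                       FROM   dw_part w
                       WHERE  w.part_nbr = p.part_nbr);
EXIT;
SQL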
Environment: IBM DataStage 8.5, Oracle 10g, Linux
Client: General Motors April 2013 – Jan 2014
Role: Data Stage Developer
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
In this project (Pricing Data), pricing data is distributed to several recipients, either to support
different systems or to be sold to agencies that offer it to their customers. These files were previously
created manually using Excel or Access and distributed via e-mail or FTP to the recipients. We developed
interfaces to automate the process of sharing data with multiple recipients.
Responsibilities:
Worked on DataStage Designer, Manager and Director.
Worked with the Business analysts and the DBAs for requirements gathering, analysis, testing, and metrics
and project coordination.
Involved in extracting the data from different data sources like Oracle and flat files.
Involved in creating and maintaining Sequencer and Batch jobs.
Created the ETL job flow design.
Used ETL to load data into the Oracle warehouse.
Created various standard/reusable jobs in DataStage using active and passive stages like Sort, Lookup,
Filter, Join, Transformer, Aggregator, Change Capture, Sequential File and Data Set.
Involved in development of Job Sequencing using the Sequencer.
Used Remove Duplicates stage to remove the duplicates in the data.
Used the Designer and Director to schedule and monitor jobs and to collect performance statistics.
Extensively worked with database objects including tables, views and triggers.
Created local and shared containers to facilitate ease and reuse of jobs.
Implemented the underlying logic for Slowly Changing Dimensions (a Type-2 sketch follows this list).
Worked with Developers to troubleshoot and resolve issues in job logic as well as performance.
Documented ETL test plans, test cases, test scripts, and validations based on design specifications for unit
testing, system testing, functional testing, prepared test data for testing, error handling and analysis.
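
A minimal sketch of Type-2 slowly-changing-dimension logic of the kind implemented in the jobs above, expressed as SQL run from the shell. The customer dimension and staging tables, and the tracked attribute, are illustrative assumptions.

#!/bin/bash
# Hypothetical example: close changed dimension rows, then insert new versions.
sqlplus -s etl_user/"$DB_PASS"@DWH <<'SQL'
-- Expire the current version of any customer whose tracked attribute changed.
UPDATE dim_customer d
SET    d.eff_end_dt = TRUNC(SYSDATE) - 1,
       d.current_flag = 'N'
WHERE  d.current_flag = 'Y'
AND    EXISTS (SELECT 1 FROM stg_customer s
               WHERE  s.customer_id = d.customer_id
               AND    s.address <> d.address);

-- Insert the new version with an open-ended effective range.
INSERT INTO dim_customer
       (customer_id, address, eff_start_dt, eff_end_dt, current_flag)
SELECT s.customer_id, s.address, TRUNC(SYSDATE), DATE '9999-12-31', 'Y'
FROM   stg_customer s
WHERE  NOT EXISTS (SELECT 1 FROM dim_customer d
                   WHERE  d.customer_id = s.customer_id
                   AND    d.current_flag = 'Y'
                   AND    d.address = s.address);
COMMIT;
EXIT;
SQL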
Environment: IBM DataStage 8.5, Oracle 10g, Linux
Client: General Motors Sep 2012 – March 2013
Role: Datastage Developer
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
The GM ADP Payroll project outsources payroll processing data to a third-party provider, ADP, which
processes the employee payrolls and sends back data that is used by various downstream payroll-related
applications. Currently, two legacy applications perform the payroll processing, HPS (Hourly Payroll System)
and SPS (Salaried Payroll System), which involve many manual activities and need a very long time to process
the complete payroll data. With the increase in the number of employees, this process is expected to become
more cumbersome. So General Motors decided to decommission the existing HPS/SPS and carry out processing
through a third-party automated application called ADP.
Responsibilities:
Involved as an ETL developer during the analysis, planning, design, development and implementation stages of
projects.
Prepared data mapping documents and designed the ETL jobs based on the DMD with the required tables in the
development environment.
Actively participated in decision making and QA meetings, and regularly interacted with the business analysts
and development team to gain a better understanding of the business process, requirements and design.
Used DataStage as an ETL tool to extract data from source systems and load the data into the Oracle
database.
Designed and developed DataStage jobs to extract data from heterogeneous sources, applied transformation
logic to the extracted data and loaded it into data warehouse databases.
Created DataStage jobs using different stages like Transformer, Aggregator, Sort, Join, Merge, Lookup, Data
Set, Funnel, Remove Duplicates, Copy, Modify, Filter, Change Data Capture, Sample, Surrogate Key, Column
Generator and Row Generator.
Extensively worked with Join, Lookup (normal and sparse) and Merge stages.
Extensively worked with Sequential File, Data Set, File Set and Lookup File Set stages.
Extensively used parallel stages like Row Generator, Column Generator, Head and Peek for development and
debugging purposes.
Used the DataStage Director and its run-time engine to schedule and run the solution, test and debug its
components, and monitor the resulting executables on an ad hoc or scheduled basis.
Converted complex job designs to different job segments and executed through job sequencer for better
performance and easy maintenance.
Created job sequences.
Maintained the data warehouse by loading dimensions and facts as part of the project, and worked on
different enhancements to the fact tables.
Created a shell script to run DataStage jobs from UNIX and scheduled this script through the scheduling
tool (a wrapper sketch follows this list).
Coordinated with team members and administered all onsite and offshore work packages.
Analyzed performance and monitored work with capacity planning.
Performed performance tuning of the jobs by interpreting performance statistics of the jobs developed.
Documented ETL test plans, test cases, test scripts and validations based on design specifications for unit
testing, system testing and functional testing; prepared test data for testing, error handling and analysis.
Participated in weekly status meetings.
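
A minimal sketch of the kind of UNIX wrapper described above, using the DataStage dsjob command so a scheduling tool gets a clean exit code. The project, job and parameter names are placeholders.

#!/bin/bash
# Hypothetical example: run a DataStage job with a parameter and map the
# dsjob status onto a scheduler-friendly exit code.
PROJECT=ADP_PAYROLL
JOB=$1        # job name passed in by the scheduler
RUN_DATE=$2   # business date parameter

dsjob -run -param RunDate="$RUN_DATE" -jobstatus "$PROJECT" "$JOB"
RC=$?

# With -jobstatus, dsjob's exit code reflects the job result:
# 1 = finished OK, 2 = finished with warnings, anything else = failure.
case "$RC" in
  1|2) exit 0 ;;
  *)   echo "DataStage job $JOB failed (dsjob status $RC)" >&2
       dsjob -logsum "$PROJECT" "$JOB" >&2
       exit 1 ;;
esac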
Environment: IBM DataStage 8.5, Oracle 10g, Linux
Client: General Motors Feb 2011 – Aug 2012
Role: Team Member
General Motors is one of the world's leading manufacturers of cars and trucks. Its domestic models include Buick,
Cadillac, Chevrolet and GMC and the company has a huge international presence, selling vehicles in major countries
across the world.
This project maintains the Global Strategic Pricing interfaces in the production environment, which determine
the effective pricing structure, including determining usage of parts, selling price, demand, warranty,
currency exchange rates, etc.
Responsibilities:
Supported 350 interface specifications developed in different DataStage versions.
Supported 30 interface specifications, mainly to/from SAP via the R/3 plug-in.
Analyzed different architecture designs for batch transaction interfaces between different systems
(mainframe, WebLogic, SAP).
Strictly observed the GM GIF (Global Integration Foundation) EAI standard for error handling and audit
report generation using CSF (Common Services Framework).
Developed and deployed MERS in the production environment.
Met with clients for requirements gathering, addressing issues and change requests.
Analyzed issues occurring in production, resolved them in the development and test environments and rolled
the changes over to production.
Involved in the SOX audit process.
Involved in the change management process.
Worked on production support calls.
Coordinated between the onsite and offshore teams.
Environment: IBM DataStage 8.1 & 8.5, Ascential DataStage 7.5, Oracle 10g, Linux