"Federated learning: out of reach no matter how close",Oleksandr Lapshyn
Srikanth hadoop hyderabad_3.4yeras - copy
1. +91-
Srikanth K
+91-7075436413
srikanthkamatam1@gmail.com
Skilled software engineer looking to enhance professional skills in a dynamic, fast-paced workplace while
contributing to challenging goals in a project-based environment. I am seeking an opportunity that challenges my
skill set so that I can contribute to the growth and development of the organization using high-end technologies.
• 3+ years of experience in Big Data analytics, the Hadoop paradigm and Core Java, along with designing,
developing and deploying large-scale distributed systems.
• Good experience with the Hadoop framework: HDFS, MapReduce, Pig, Hive and Sqoop.
• Implemented proofs of concept for various clients across multiple ISUs using Hadoop and its related
technologies.
• Extensive expertise in and solid understanding of OOP concepts and the Java Collections Framework.
• Strong troubleshooting and problem-solving skills.
• Proficient in database programming with SQL.
• Well versed in MapReduce (MRv1).
• Excellent understanding of the Hadoop architecture and its components, such as HDFS, JobTracker, TaskTracker,
NameNode, DataNode and the MapReduce programming paradigm.
• Extensive experience in analyzing data with big data tools such as Pig Latin and HiveQL.
• Extended Hive and Pig core functionality by writing custom UDFs (a minimal Java UDF sketch follows this list).
• Hands-on experience in installing, configuring and using ecosystem components such as Hadoop MapReduce,
HDFS, HBase, Oozie, Sqoop, Flume, Pig and Hive.
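For illustration, a minimal sketch of such a custom Hive UDF in Java, assuming a simple string-normalization use case (the package, class and behaviour are hypothetical, not taken from the actual projects):

package com.example.hive.udf;

import org.apache.hadoop.hive.ql.exec.UDF;
import org.apache.hadoop.io.Text;

// Hypothetical UDF: trims and upper-cases a string column.
public final class NormalizeText extends UDF {
    // Hive calls evaluate() once per row for the column passed to the UDF.
    public Text evaluate(final Text input) {
        if (input == null) {
            return null;
        }
        return new Text(input.toString().trim().toUpperCase());
    }
}

Such a function would typically be packaged into a JAR, added to the Hive session with ADD JAR, and registered with CREATE TEMPORARY FUNCTION before being used in HiveQL.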
EDUCATION QUALIFICATION
• MCA from Jawaharlal Nehru Technological University
PROFESSIONAL EXPERIENCE
Organization: ADP INDIA PVT LTD through ALWASI SOFTWARE Pvt. Ltd., Hyderabad
Duration: July 2012 – Present
Designation: Software Engineer
Project 1: Re-hosting of Web Intelligence
Client: LOWES
Duration: Dec 2013 – Present
Description:
The purpose of the project is to store terabytes of log information generated by the e-commerce website and extract
meaningful information from it. The solution is based on the open-source big data software Hadoop: the data is stored
in the Hadoop file system and processed using MapReduce jobs. This in turn includes getting the raw HTML data from
the websites, processing the HTML to obtain product and pricing information, extracting various reports from the
product pricing information, and exporting the information for further processing.
This project is mainly a re-platforming of the existing system, which runs on WebHarvest (a third-party JAR) with a
MySQL DB, onto Hadoop, which can process large data sets (i.e. terabytes and petabytes of data) in order to meet the
client's requirements in the face of increasing competition from its retailers.
Environment: Hadoop, Apache Pig, Hive, Sqoop, Java, Linux, MySQL
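To illustrate the processing described above, below is a minimal sketch of a MapReduce mapper that extracts a (product id, price) pair from each crawled record; the tab-separated input layout and field positions are assumptions for the example, not the project's actual format.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// Illustrative mapper: each input line is assumed to hold productId<TAB>price<TAB>rawHtml.
public class ProductPriceMapper extends Mapper<LongWritable, Text, Text, Text> {
    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] fields = value.toString().split("\t");
        if (fields.length >= 2) {
            // Emit productId as the key and price as the value; a reducer (or Pig/Hive
            // downstream) can then aggregate pricing information per product.
            context.write(new Text(fields[0]), new Text(fields[1]));
        }
    }
}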
Roles & Responsibilities:-
• Participated in client calls to gather and analyze the requirements.
• Moved all crawl-data flat files generated from various retailers to HDFS for further processing.
• Wrote Apache Pig scripts to process the HDFS data.
• Created Hive tables to store the processed results in a tabular format.
• Developed Sqoop scripts to move data between the Pig output in HDFS and the MySQL database.
• For the dashboard solution, developed the Controller, Service and DAO layers on the Spring Framework (a minimal
sketch follows this list).
• Developed scripts for creating reports from the Hive data.
• Completely involved in the requirement analysis phase.
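A minimal sketch of the Controller/Service/DAO layering used for the dashboard, assuming Spring MVC with JdbcTemplate over a MySQL report table; the class, method and table names are illustrative, not the project's actual code.

import java.util.List;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Controller;
import org.springframework.stereotype.Repository;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
public class DashboardController {
    @Autowired
    private ReportService reportService;

    // Exposes the processed results to the dashboard front end.
    @RequestMapping("/reports/pricing")
    @ResponseBody
    public List<Map<String, Object>> pricingReport() {
        return reportService.fetchPricingReport();
    }
}

@Service
class ReportService {
    @Autowired
    private ReportDao reportDao;

    public List<Map<String, Object>> fetchPricingReport() {
        return reportDao.findPricingReport();
    }
}

@Repository
class ReportDao {
    @Autowired
    private JdbcTemplate jdbcTemplate;

    // Reads the report table that the Sqoop export loads into MySQL (table name assumed).
    public List<Map<String, Object>> findPricingReport() {
        return jdbcTemplate.queryForList("SELECT * FROM pricing_report");
    }
}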
Project 2: Repository DW
Client: Private Bank
Duration: July 2013 – Nov 2013
Description:
A full-fledged dimensional data mart to cater to the CPB analytical reporting requirements, as the current
GWM system is mainly focused on data enrichment, adjustment, defaulting and other data-oriented
processes. Involved in the full development life cycle in a distributed environment for the Candidate Module.
The Private Bank Repository system processes approximately 500,000 records every month.
Roles & Responsibilities: -
• Participated in client calls to gather and analyze the requirements.
• Set up a Hadoop cluster in pseudo-distributed mode on Linux.
• Worked with core Hadoop concepts: HDFS and MapReduce (JobTracker, TaskTracker).
• Worked on MapReduce phases using Core Java; created and put JAR files into HDFS and used the web UIs for the
NameNode, JobTracker and TaskTracker.
• Extracted, transformed and loaded data from Hive into an RDBMS.
• Transformed data within the Hadoop cluster.
• Used Pentaho MapReduce to parse weblog data, converting raw weblogs into parsed, delimited records.
• Created jobs to load data into Hive.
• Created tables in HBase and transformations to load data into HBase (a sketch of the HBase client calls follows
this list).
• Wrote input/output formats for CSV.
• Performed imports and exports using Sqoop job entries.
• Designed and developed using Pentaho.
• Unit tested Pentaho MapReduce transformations.
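A minimal sketch of the HBase table creation and load step, using the classic HBaseAdmin/HTable client API of that era; the table, column-family and column names are assumptions for the example, not the project's actual schema.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseLoadExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();

        // Create the table with a single column family if it does not already exist.
        HBaseAdmin admin = new HBaseAdmin(conf);
        if (!admin.tableExists("accounts")) {
            HTableDescriptor desc = new HTableDescriptor(TableName.valueOf("accounts"));
            desc.addFamily(new HColumnDescriptor("data"));
            admin.createTable(desc);
        }
        admin.close();

        // Write a single row; a load transformation would issue one such put per record.
        HTable table = new HTable(conf, "accounts");
        Put put = new Put(Bytes.toBytes("row-1"));
        put.add(Bytes.toBytes("data"), Bytes.toBytes("balance"), Bytes.toBytes("1200.50"));
        table.put(put);
        table.close();
    }
}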
Project under training: Big Data initiative at one of the largest financial institutions in North America
Client: Xavient Information Systems
Duration: July 2012 – June 2013
Description:
One of the largest financial institutions in North America had implemented a small-business banking e-statements
project using existing software tools and applications. The overall process to generate e-statements and send alerts to
customers was taking 18 to 30 hours per cycle day, hence missing all SLAs and leading to customer dissatisfaction.
The purpose of the project was to cut down the processing time to generate e-statements and alerts by at least 50% and
also to cut down the cost by 50%.
Environment: Hadoop, MapReduce, Hive
Roles & Responsibilities:-
• Created an hduser account for performing HDFS operations.
• Created a MapReduce user for performing MapReduce operations only.
• Wrote Apache Pig scripts to process the HDFS data.
• Set up passwordless SSH for Hadoop.
• Verified the Hadoop installation (TeraSort benchmark test).
• Set up Hive with MySQL as a remote metastore.
• Developed Sqoop scripts to move data between Hive and the MySQL database.
• Moved all log files generated by various network devices into HDFS.
• Created external Hive tables on top of the parsed data (a minimal sketch follows below).
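A minimal sketch of creating such an external Hive table over the parsed log data through the HiveServer2 JDBC driver; the connection URL, column list and HDFS location are assumptions for the example.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class CreateExternalLogTable {
    public static void main(String[] args) throws Exception {
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        // HiveServer2 default port; host and credentials depend on the actual cluster.
        Connection conn = DriverManager.getConnection(
                "jdbc:hive2://localhost:10000/default", "", "");
        Statement stmt = conn.createStatement();

        // EXTERNAL: Hive reads the parsed files in place and does not own or delete them.
        stmt.execute(
            "CREATE EXTERNAL TABLE IF NOT EXISTS device_logs ("
            + " device_id STRING, event_time STRING, message STRING)"
            + " ROW FORMAT DELIMITED FIELDS TERMINATED BY '\\t'"
            + " LOCATION '/data/network_logs/parsed'");

        stmt.close();
        conn.close();
    }
}

Dropping such a table later removes only the metadata; the parsed files remain in HDFS.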
(SRIKANTH K)