PRABHAKAR T
E-mail: prabha_it53@yahoo.com
Mobile: +91-9703045253
Professional Summary:
• Diligent and hardworking IT professional with around 3 years of experience.
• 2 years of hands-on experience in Big Data technologies: Apache Hadoop, HDFS, MapReduce, Hive, Pig, Sqoop and HBase.
• Hands-on experience with Hadoop core concepts, HDFS and MapReduce.
• Hands-on experience in Core Java for the MapReduce programming model and job life cycle.
• Experience with the Hive database and analytics using Pig.
• Hands-on experience using Hive in CLI mode (Hive shell) and Thrift mode.
• Hands-on experience with Java technologies: Core Java, Servlets, JSP, JDBC and Struts.
• Capable of processing large sets of structured, semi-structured and unstructured data, and of supporting systems application architecture.
• Basic knowledge of Apache Flume, Cassandra and MongoDB.
• Strong experience developing in Linux environments.
• Strong experience with relational databases such as MySQL and Oracle.
• Strong analytical skills, with the ability to quickly understand a client's business needs and create specifications.
• Good communication and interpersonal skills.
Technical Skills:
Hadoop Technology : HDFS, MapReduce, Pig Latin, Hive, HBase, Sqoop
Programming Languages : Java
Web Technologies : JSP, Servlets, JDBC, HTML
Development Tools : Eclipse
Databases : Oracle, MySQL
Web/Application Servers : Tomcat
Operating Systems : Windows XP/7/8, UNIX, CentOS
Education:
B.Tech from Jawaharlal Nehru Technological University, 2012
Working Experience:
• Currently working as a Big Data Developer at Tata Consultancy Services, Hyderabad, since March 2013.
• Worked as a Java Developer at Ascendum Systems, Bangalore, from March 2012 to February 2013.
PROJECTS PROFILE:
Project # 1 : Log Analytics
Client : McGraw-Hill Companies, USA
Team size : 20
Role : Developer
Duration : March 2013 to date
Environment: Hadoop, HDFS, MapReduce, Pig, Hive, HBase, UNIX and Oracle
Project Synopsis : The project collects log files (flat files), moves all log data from individual servers into HDFS, which serves as the central log storage and management system, and then performs analysis on these HDFS datasets. The application parses the log files generated by the web server and derives useful information for weblog analytics; using these analytical reports we can identify patterns of visitor interest. The system ingests about 50 GB of data per day. Findings from the weblog data include visiting users, peak-usage users, frequently visited pages, most viewed pages/content, visit duration, navigation paths, and referring sites and search engines. Reports are generated daily, weekly, monthly, quarterly, half-yearly and yearly.
The web server generates and stores a large amount of data, and at this volume manual analysis is impractical. The need to handle such large volumes of data leads to Big Data concepts and data analysis using the Hadoop MapReduce framework.
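The log-parsing step described above can be sketched in plain Java as a regex-based field extractor, the kind of routine a MapReduce mapper would call per input line. This is an illustrative sketch only: the class name, pattern and sample line are assumptions, not taken from the actual project.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WeblogParser {

    // Common Log Format: host ident authuser [date] "method page protocol" status bytes
    private static final Pattern LOG_PATTERN = Pattern.compile(
        "^(\\S+) \\S+ \\S+ \\[([^\\]]+)\\] \"(\\S+) (\\S+) [^\"]*\" (\\d{3}) (\\S+)");

    /** Returns {host, timestamp, method, page, status}, or null if the line is malformed. */
    public static String[] parse(String line) {
        Matcher m = LOG_PATTERN.matcher(line);
        if (!m.find()) {
            return null; // skip unparseable lines rather than failing the job
        }
        return new String[] { m.group(1), m.group(2), m.group(3), m.group(4), m.group(5) };
    }

    public static void main(String[] args) {
        String sample = "203.0.113.9 - - [10/Oct/2013:13:55:36 +0530] "
                + "\"GET /index.html HTTP/1.1\" 200 2326";
        String[] fields = parse(sample);
        System.out.println(fields[0] + " requested " + fields[3]
                + " with status " + fields[4]);
        // prints: 203.0.113.9 requested /index.html with status 200
    }
}
```

In a MapReduce job, a mapper would emit (page, 1) or (host, 1) pairs built from these fields, and reducers would aggregate them into the visit counts the reports are based on.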
Responsibilities Managed:
• Participated in client calls to gather and analyze requirements.
• Extracted, transformed and loaded (ETL) data from Hive into an RDBMS.
• Stored and retrieved data using HQL in Hive and using Pig Latin.
• Wrote jobs to load data into Hive.
• Worked with Hive external tables.
• Imported and exported data between Oracle and Hive, HDFS and HBase.
• Created HBase tables to load large sets of structured, semi-structured and unstructured data coming from UNIX, NoSQL and a variety of portfolios.
• Used Sqoop for import and export jobs.
Project # 2 : Bringmefood
Client : Giraffe Web Designs (Guernsey, UK)
Duration : March 2012 to February 2013
Role : Developer
Database : SQL Server
Application Server : JBoss
Team Size : 6
Technologies : Java, JSP, Servlets, Struts, HTML, MySQL
Project Synopsis: Bringmefood is a web-based enterprise portal that enables end users to have their favorite dishes delivered to a specified address. The portal manages all transactions through a payment gateway. The system has three hierarchical roles: admin, vendor and customer. A customer can place an order with the nearest restaurant simply by entering a post code or dish name. The system maintains the customer's full order history, making it easy to reorder the same menu, and the customer can mark selected orders as favorites.
Responsibilities Managed:
• Highly involved in the development process and core functionality of the project.
• Enhanced vendor functionality, such as adding/managing restaurants, adding/viewing menu items and managing profiles.
• Developed the mail-sending process (HTML mail generated from XML using XSLT) for messaging users such as customers and vendors.
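The XSLT-based mail generation mentioned above can be sketched with the JDK's built-in `javax.xml.transform` API: an XML order document is transformed by a stylesheet into an HTML mail body. The XML shape, stylesheet and names here are illustrative assumptions, not the project's real templates.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class MailBodyBuilder {

    // Minimal stylesheet: turn an <order> document into a one-line HTML confirmation.
    private static final String STYLESHEET =
        "<xsl:stylesheet version='1.0' xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
      + "<xsl:output method='html'/>"
      + "<xsl:template match='/order'>"
      + "<p>Dear <xsl:value-of select='customer'/>, your order for "
      + "<xsl:value-of select='dish'/> is confirmed.</p>"
      + "</xsl:template>"
      + "</xsl:stylesheet>";

    /** Transforms an order XML string into an HTML mail body. */
    public static String toHtml(String orderXml) throws Exception {
        Transformer t = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(new StringReader(STYLESHEET)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(orderXml)), new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        String xml = "<order><customer>Asha</customer><dish>Paneer Tikka</dish></order>";
        System.out.println(toHtml(xml));
    }
}
```

Keeping the layout in the stylesheet rather than in Java code lets the same order data be rendered differently for customer and vendor mails by swapping templates.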