NAGESH M E-mail: nageshmadanala1@gmail.com
Phone: +91 7416453763
CAREER OBJECTIVE:
To excel as a software professional and secure a position in the corporate world through diligence
and dedication, and to ensure my highest contribution towards the organization I work with.
CAREER SUMMARY:
Experienced Hadoop developer with a strong background in distributed file systems in the big data
arena. Understands the complex processing needs of big data and has experience developing code
and modules to address those needs. Familiar with Hadoop information architecture and ELT
workflows.
PROFESSIONAL EXPERIENCE:
Experience:
• 2+ years of experience in the IT industry as a Hadoop developer (with knowledge of
Sqoop, Pig, Hive, HBase and Flume) and a PL/SQL developer.
• Experience in developing Pig Latin scripts and using Hive Query Language for data
analytics (a minimal sample script is sketched after this list).
• Good working experience using Sqoop to import data into HDFS from RDBMS and vice
versa.
• Providing hardware architectural guidance, planning and estimating cluster capacity, and
creating roadmaps for Hadoop cluster deployment.
• Maintaining and monitoring clusters. Loaded data into the cluster from dynamically
generated files using Flume and from relational database management systems using
Sqoop.
• Supporting Hadoop developers and assisting in the optimization of MapReduce jobs, Pig Latin
scripts, Hive scripts, and HBase ingest as required.
• Experience in NoSQL column-oriented databases like HBase and its integration with Hive
and Pig.
• Used Sqoop extensively to ingest data from various source systems into HDFS.
• Assisted in loading large sets of data (structured, semi-structured and unstructured).
• Managed Hadoop clusters, including adding and removing cluster nodes for maintenance and
capacity needs.
• Experience in Information Technology including PL/SQL, system analysis, development, and
strong database and testing concepts.
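For illustration, a minimal Pig Latin sketch of the kind of analytics script referred to above. The HDFS path, field layout and the web-log scenario are assumptions made for this example, not taken from an actual project.

    -- Load records previously imported into HDFS (e.g. by a Sqoop or Flume job);
    -- the path and schema below are hypothetical.
    logs = LOAD '/user/hadoop/logs/weblogs' USING PigStorage('\t')
           AS (user_id:chararray, url:chararray, status:int, bytes:long);

    -- Keep only successful requests.
    ok = FILTER logs BY status == 200;

    -- Count requests per user, a typical aggregation for downstream reporting.
    by_user = GROUP ok BY user_id;
    counts  = FOREACH by_user GENERATE group AS user_id, COUNT(ok) AS requests;

    -- Write the result back to HDFS for further analysis in Hive or reporting tools.
    STORE counts INTO '/user/hadoop/output/requests_per_user' USING PigStorage('\t');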
AREAS OF EXPERTISE:
 Big Data Ecosystems : MapReduce, HDFS, HBase, Hive, Pig, Sqoop, Oozie and Flume.
 Programming Languages : Pig, Hive, SQL and Java.
 Scripting Languages : JSP and JavaScript.
 Databases : Oracle, MySQL and NoSQL.
 Tools : Eclipse and NetBeans.
 Platforms : Windows (2000/XP), Ubuntu and CentOS.
HADOOP PROJECTS:
Project #1
Title : Data processing using MapReduce in NoSQL databases
Environment : Hadoop, Apache Pig, Hive, Oozie, Sqoop, UNIX, HBase and MySQL.
Role : Hadoop Developer
Project Description:
The purpose of the project is to perform analysis on the effectiveness and validity of
controls, to store terabytes of log information generated by the source providers as part of the
analysis, and to extract meaningful information out of it. The solution is based on the open source Big Data
software Hadoop. The data is stored in the Hadoop file system and processed using MapReduce jobs,
which in turn includes getting the raw data, processing the data to obtain controls and redesign/change
history information, extracting various reports out of the controls history, and exporting the information
for further processing.
Roles& Responsibilities:
 Worked on setting up pig, Hive and HBase on multiple nodes and developed using Pig, Hive and
HBase,MapReduce.
 DevelopedMapReduce applicationusingHadoop,MapReduce programmingandHBase.
 Involvedindevelopingthe Pigscripts
 InvolvedindevelopingReportsandchartsusingJChartlibrary.
 Developedthe Sqoop scriptsinordertomake the interactionbetweenPigandMySQL Database.
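A minimal sketch of the Pig-to-HBase step described in this project. The table name, column family, field names and HDFS path are assumptions for illustration only, not the actual project code.

    -- Load raw control/log records previously landed in HDFS (hypothetical path and schema).
    raw = LOAD '/user/hadoop/controls/raw_logs' USING PigStorage(',')
          AS (control_id:chararray, source:chararray, event_ts:chararray, valid:chararray);

    -- Shape the controls-history view used for reporting.
    history = FOREACH raw GENERATE control_id, source, event_ts, valid;

    -- Persist into an HBase table (the first field becomes the row key) so that
    -- reports can be served from the NoSQL store.
    STORE history INTO 'hbase://controls_history'
          USING org.apache.pig.backend.hadoop.hbase.HBaseStorage(
                'info:source info:event_ts info:valid');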
Project #2
Title : Sentiment Analysis on customer evolution in banking domain
Environment : Hadoop, Apache Pig, Sqoop, Java, Linux and MySQL.
Role : Hadoop Developer.
Project Description:
The purpose of the project is to store information generated from the bank's historical data,
extract meaningful information out of it and, based on that information, predict the customer's category.
The solution is based on the open source Big Data software Hadoop. The data is stored in the Hadoop file
system and processed using MapReduce jobs for product and pricing information.
Roles& Responsibilities:
 Involvedindevelopingthe Pigscripts
 Developedthe Sqoopscriptsinordertomake the interactionbetweenPigandMySQLDatabase.
 Completelyinvolvedinthe requirementanalysisphase
 Analytics Model with R used for group expressions, Box plots and used to read files
usingread.table( ) functionsandscan( ) functions.RwithHDFS and R withHBASE.
 DidEDA on the data set andcarefullyremovedthe irrelevantdataitems.
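A minimal Pig Latin sketch of a per-customer aggregation that such a project might run before the R model; the paths, field names and the simple categorisation rule are assumptions for illustration only.

    -- Load the bank's historical transaction records from HDFS (hypothetical path and schema).
    txns = LOAD '/user/hadoop/bank/history' USING PigStorage(',')
           AS (cust_id:chararray, product:chararray, amount:double, txn_date:chararray);

    -- Aggregate activity per customer.
    by_cust = GROUP txns BY cust_id;
    profile = FOREACH by_cust GENERATE
                  group            AS cust_id,
                  COUNT(txns)      AS txn_count,
                  AVG(txns.amount) AS avg_amount;

    -- A deliberately simple rule-based category; a real model (e.g. in R) would replace this.
    labelled = FOREACH profile GENERATE
                   cust_id, txn_count, avg_amount,
                   (avg_amount > 10000.0 ? 'premium' : 'regular') AS category;

    STORE labelled INTO '/user/hadoop/bank/customer_categories' USING PigStorage(',');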
ACADEMIC QUALIFICATION:
 Graduation in Computer Science Engineering from JNTU Hyderabad with 75%
 Intermediate in MPC from SREE NIDHI Junior College with 87%
 High school passed from VAGDEVI HIGH SCHOOL with 86%
DECLARATION:
I hereby state that all the above furnished information is true and correct to the best of my
knowledge and belief.
DATE :
PLACE : Chennai ( NAGESH )
