Name: NAGESWARA RAO DASARI    Tel No: +91-9035234744 (M)
Email: dnr.nageswara@gmail.com
Career Objective
A challenging and vibrant career in a growing organization, where I can learn and apply my technical skills to contribute to the organization's growth in the field of technology.
Professional Summary
 Software Engineer, Capgemini India Private Limited, Bangalore.
 3 years 10 months of overall experience.
 2 years 5 months of work experience in Big Data technologies (Hadoop).
 1 year 4 months of work experience in Core Java.
 Highly versatile and experienced in adapting and implementing the latest technologies in new application solutions.
 Hands-on experience in designing and implementing solutions using Apache Hadoop, HDFS, MapReduce, Hive, Pig, and Sqoop.
 Working knowledge of the Tableau reporting tool.
 Experience with the Agile software development process.
 Strong knowledge of OOP concepts, UML, Core Java, and C.
 Good exposure to Windows and Linux platforms.
Technical Summary
 Big Data Ecosystem: Hadoop, MapReduce, Pig, Hive.
 Good Knowledge: HBase, Sqoop, Flume, Oozie, ZooKeeper, Spark, and Scala.
 Languages & Frameworks: Core Java, SQL.
 Scripting Languages: JavaScript, Unix Shell Scripting.
 Web Technologies: HTML, JavaScript, CSS.
 Tools: Eclipse, ClearCase, Git, PuTTY, WinSCP, Maven.
 Databases: Oracle 9i, MySQL.
 Development Methodologies: Agile Scrum.
Educational Summary
 B.Tech in Electrical and Electronics Engineering from JNTU Kakinada
Assignments
 Banking Customer – Enterprise Data Provisioning Platform
Duration: Jul 2015 – till date
Client: Barclays Bank, UK
Team Size: 31
Designation: Hadoop Developer
Project Description: The Enterprise Data Provisioning Platform (EDPP), the intended deliverable of the Information Excellence project, will allow Barclays to address new business needs and is in line with Barclays's guiding principle of operating with excellence. The primary objective of the EDPP project is to institutionalize a Hadoop platform for data collected within Barclays and make that data available for analytics.
Environment:
CDH5 distribution, Apache Pig, Hive, Java, Unix, MySQL, Spark, and Scala.
Roles & Responsibilities:
 Designed schemas in Hive.
 Moved data obtained from different sources into the Hadoop environment.
 Created Hive tables to store the processed results in a tabular format.
 Wrote MapReduce programs to process HDFS data and convert it into a common format (a representative sketch follows this list).
 Wrote shell scripts to automate the loading process.
 Resolved JIRA tickets.
 Performed unit testing and performance tuning of Hive queries.
 Wrote various Hive queries.
 Involved in client engagements.
 Responsible for conducting scrum meetings.
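For illustration, here is a minimal sketch of the kind of map-only MapReduce normalization described in the list above. The class names, input delimiter, and field positions are assumptions made for this sketch, not the project's actual code.

// Illustrative sketch only: delimiter, field layout, and names are assumed.
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CommonFormatJob {

    // Map-only job: each raw comma-separated record is re-emitted as one
    // pipe-delimited line in the common format.
    public static class NormalizeMapper
            extends Mapper<Object, Text, NullWritable, Text> {
        private final Text out = new Text();

        @Override
        protected void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");   // assumed source delimiter
            if (fields.length < 3) {
                return;                                      // skip malformed records
            }
            out.set(fields[0].trim() + "|" + fields[1].trim() + "|" + fields[2].trim());
            context.write(NullWritable.get(), out);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "normalize-to-common-format");
        job.setJarByClass(CommonFormatJob.class);
        job.setMapperClass(NormalizeMapper.class);
        job.setNumReduceTasks(0);                            // map-only: no aggregation needed
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}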
 Retail Customer – TARGET Re-hosting of Web Intelligence Project
Duration: Nov 2014 – Jun 2015
Client: TARGET, USA
Team Size: 15
Designation: Hadoop Developer
Project Description: The purpose of the project is to store terabytes of log information generated by the ecommerce website and extract meaningful information from it. The solution is based on the
open-source big data software Hadoop. The data is stored in the Hadoop file system and processed using MapReduce jobs. This in turn includes getting the raw HTML data from the websites, processing the HTML to obtain product and pricing information, extracting various reports from the product pricing information, and exporting the information for further processing.
This project is mainly a re-platforming of the existing system, which runs on WebHarvest (a third-party JAR) with a MySQL DB, to Hadoop, which can process large data sets (terabytes and petabytes of data) to meet the client's requirements amid increasing competition from rival retailers.
Environment:
CDH5 distribution, Apache Pig, Hive, Sqoop, Java, Unix, PHP, MySQL.
Roles & Responsibilities:
 Moved crawl-data flat files generated by various retailers to HDFS for further processing (a representative sketch follows this list).
 Wrote Apache Pig scripts to process the HDFS data.
 Created Hive tables to store the processed results in a tabular format.
 Developed Sqoop scripts to move data between the Pig pipeline (HDFS) and the MySQL database.
 Involved in resolving Hadoop-related JIRAs.
 Developed Unix shell scripts for creating reports from Hive data.
 Fully involved in the requirement analysis phase.
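As an illustration of the first bullet above, here is a minimal sketch of loading local flat files into HDFS with the Hadoop FileSystem API; the local drop-zone and HDFS landing paths are hypothetical.

// Illustrative sketch only: the paths below are assumed, not the project's.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class CrawlFileLoader {
    public static void main(String[] args) throws Exception {
        // Picks up fs.defaultFS from the core-site.xml on the classpath.
        FileSystem fs = FileSystem.get(new Configuration());

        Path localDir = new Path("file:///data/crawl/flat-files"); // assumed local drop zone
        Path hdfsDir = new Path("/landing/retailer-crawl");        // assumed HDFS landing area

        if (!fs.exists(hdfsDir)) {
            fs.mkdirs(hdfsDir);
        }
        // Copy rather than move (delSrc = false) so the local originals survive
        // until the load is verified; overwrite = true allows safe re-runs.
        fs.copyFromLocalFile(false, true, localDir, hdfsDir);
        fs.close();
    }
}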
 Web Application – Intella Sphere
Duration: Dec 2012 – Jun 2014
Environment:
Java, MySQL, MongoDB 2.4.6, Activiti workflow, SVN
Designation: Java Developer
Project Description: The brand essence of Intella Sphere is direct, analytical, and engaging. It is all about empowering businesses to gain the intelligence they need to grow and improve their brand in the new age of marketing. Intella Sphere is the ultimate marketing tool, giving a company the devices it needs to gain market share, beat the competition, and get true results. Intella Sphere understands these challenges better than anyone and uses experience and innovation to create the right tools for a business to clearly understand its audience and empower it to grow and engage with its community.
Responsibilities:
• Created DAOs for all DB operations using the MongoDB API (a representative sketch follows this list).
• Worked on the design phase of the application using the Visual Paradigm tool.
• Implemented social OAuth configuration.
• Used social APIs for social networks (Facebook, Twitter, LinkedIn, Blogger, YouTube).
• Implemented Activiti workflow.
• Implemented aggregations for calculating metrics.
• Worked on the MongoDB replica set.
• Worked on development and production environments.
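As an illustration of the DAO bullet above, here is a minimal sketch against the MongoDB 2.x Java driver's legacy API (MongoClient, DBCollection, BasicDBObject), consistent with the MongoDB 2.4.6 noted in the environment. The database, collection, and field names are hypothetical.

// Illustrative DAO sketch only: names below are assumed, not the project's schema.
import com.mongodb.BasicDBObject;
import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

public class MetricDao {
    private final DBCollection metrics;

    public MetricDao(MongoClient client) {
        DB db = client.getDB("intellasphere");        // assumed database name
        this.metrics = db.getCollection("metrics");   // assumed collection name
    }

    // Insert one engagement metric for a social channel.
    public void save(String channel, String name, long value) {
        DBObject doc = new BasicDBObject("channel", channel)
                .append("name", name)
                .append("value", value);
        metrics.insert(doc);
    }

    // Look up a single metric by channel and metric name.
    public DBObject findOne(String channel, String name) {
        return metrics.findOne(new BasicDBObject("channel", channel)
                .append("name", name));
    }
}

A caller might construct it as new MetricDao(new MongoClient("localhost")) and then call save("twitter", "followers", 1200).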