The document contains details about Nageswara Rao Dasari including his contact information, career objective, professional summary, technical summary, educational summary, and assignments. It outlines his 4+ years of experience as a Software Engineer working with technologies like Hadoop, Java, SQL, and tools like Eclipse. It provides details on 3 projects he worked on involving building platforms for banking customer data, retail customer data processing, and a web application.
• Capable of processing large sets of structured, semi-structured and unstructured data, and of supporting the system architecture.
• Implemented proofs of concept on the Hadoop stack and various big data analytics tools, including migration from different databases to Hadoop.
• Developed multiple MapReduce jobs in Java for data cleaning and pre-processing according to business requirements; imported and exported data between HDFS and Hive using Sqoop.
• Experience in writing Hive queries and Pig scripts.
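The map-shuffle-reduce pattern behind the Java jobs mentioned above can be illustrated with a plain shell pipeline. This is only a sketch of the idea (a word count over a hypothetical input file), not one of the actual jobs from these projects:

```shell
# Word count as a shell pipeline, mirroring MapReduce's three stages:
#   map (emit key-value pairs), shuffle (group by key), reduce (aggregate).
printf 'cleaned data\ncleaned records\n' > input.txt   # hypothetical input

tr -s ' ' '\n' < input.txt |   # map: emit one word per line
  sort |                       # shuffle: group identical keys together
  uniq -c |                    # reduce: count occurrences per key
  awk '{print $2 "\t" $1}'     # format as word<TAB>count
```

In a real MapReduce job the same three stages run distributed across the cluster, with HDFS holding the input and output.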
Name: NAGESWARA RAO DASARI    Tel No: 91-9035131268 (M)
Email: nageswara268@gmail.com
Career Objective
A challenging and vibrant career in a growing organization, where I can learn and apply my technical skills to contribute to the organization's growth in the field of technology.
Professional Summary
Software Engineer, Capgemini India Private Limited, Bangalore.
Overall 4+ years of experience.
2.6 years of work experience in Big Data technologies (Hadoop).
1.7 years of work experience in core Java.
Highly versatile and experienced in adapting and implementing the latest technologies in new application solutions.
Hands-on experience in designing and implementing solutions using Apache Hadoop, HDFS, MapReduce, Spark, Hive, Pig and Sqoop.
Knowledge of the Tableau reporting tool.
Experience in the Agile software development process.
Strong knowledge of OOP concepts and core Java.
Good exposure to Windows and Linux platforms.
Technical Summary
Big Data Ecosystem: Hadoop, MapReduce, Pig, Hive.
Good Knowledge: HBase, Sqoop, Flume, Oozie, ZooKeeper, Spark and Scala.
Languages & Frameworks: Core Java, SQL.
Scripting Languages: Unix Shell Scripting.
Web Technologies: HTML.
Tools: Eclipse, PuTTY, Maven.
Databases: Oracle 9i, MySQL.
Development Methodologies: Agile Scrum.
Educational Summary
B.Tech in Electrical and Electronics Engineering from JNTU Kakinada (2008 – 2012)
Assignments
Banking Customer – Enterprise Data Provisioning Platform
Duration: Jan 2016 – Present
Client: Barclays Bank, UK
Team Size: 31
Designation: Hadoop Developer
Project Description: The Enterprise Data Provisioning Platform (EDPP), the desired build of the Information Excellence project, will allow Barclays to address new business needs and is in line with Barclays's guiding principle of operating with excellence. The primary objective of the EDPP project is to institutionalize a Hadoop platform for data collected within Barclays and make it available for analytics on the collected data.
Environment:
CDH5 distribution, Apache Pig, Hive, Java, Unix, MySQL, Spark and Scala.
Roles & Responsibilities:
Designed schemas in Hive.
Moved data obtained from different sources into the Hadoop environment.
Created Hive tables to store the processed results in tabular format.
Wrote MapReduce programs to process the HDFS data and convert it into a common format.
Wrote shell scripts to automate the loading process.
Resolved JIRA tickets.
Performed unit testing and performance tuning of Hive queries.
Wrote various Hive queries.
Involved in client engagements.
Responsible for conducting scrum meetings.
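A load-automation script of the kind listed above might look roughly like the following. This is a minimal sketch with hypothetical local paths; in the actual environment the final `mv` would be an `hdfs dfs -put` into the Hadoop file system:

```shell
#!/bin/sh
# Sketch of a load-automation script: pick up new files from a landing
# directory, reject empty ones, and move the rest to a processed area.
LANDING=./landing        # hypothetical landing directory
PROCESSED=./processed    # hypothetical target (local stand-in for HDFS)
LOG=./load.log

mkdir -p "$LANDING" "$PROCESSED"
for f in "$LANDING"/*.csv; do
  [ -e "$f" ] || continue                 # nothing to load
  if [ ! -s "$f" ]; then                  # reject zero-byte files
    echo "SKIP empty: $f" >> "$LOG"
    continue
  fi
  mv "$f" "$PROCESSED"/ &&                # in production: hdfs dfs -put
    echo "LOADED: $f" >> "$LOG"
done
```

A cron entry or Oozie coordinator would then run such a script on a schedule, with the log file feeding the JIRA-tracked failure follow-ups.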
Retail Customer – TARGET Re-hosting of Web Intelligence Project
Duration: Dec 2014 – Nov 2015
Client: Target, USA
Team Size: 15
Designation: Hadoop Developer
Project Description: The purpose of the project is to store terabytes of log information generated by the e-commerce website and extract meaningful information from it. The solution is based on the open-source big data software Hadoop: the data is stored in the Hadoop file system and processed using MapReduce jobs. This in turn includes getting the raw HTML data from the websites, processing the HTML to obtain product and pricing information, extracting various reports from the product pricing information, and exporting the information for further processing.
This project is mainly a re-platforming of the existing system, which runs on WebHarvest (a third-party JAR) with a MySQL database, to a new cloud solution technology, Hadoop, which can process large data sets (terabytes and petabytes of data) in order to meet the client's requirements in the face of increasing competition from other retailers.
Environment:
CDH5 distribution, Apache Pig, Hive, Sqoop, Unix, MySQL
Roles & Responsibilities:
Moved all crawl-data flat files generated from various retailers into HDFS for further processing.
Wrote Apache Pig scripts to process the HDFS data.
Created Hive tables to store the processed results in tabular format.
Developed Sqoop scripts to move data between the Pig results in HDFS and the MySQL database.
Involved in resolving Hadoop-related JIRAs.
Developed Unix shell scripts for creating reports from Hive data.
Fully involved in the requirement analysis phase.
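A reporting script of the kind listed above typically post-processes tab-separated query output. The sketch below aggregates a hypothetical (category, amount) extract with awk; in the actual project the input would come from `hive -e '...'` rather than a local file, and the column names are placeholders:

```shell
# Sketch: turn tab-separated (category, amount) rows -- the shape of a
# typical Hive extract -- into a per-category totals report.
printf 'books\t10\nbooks\t5\ntoys\t7\n' > extract.tsv   # hypothetical data

awk -F'\t' '
  { total[$1] += $2 }                       # sum amounts per category
  END { for (c in total) print c "\t" total[c] }
' extract.tsv | sort > report.tsv

cat report.tsv
```

Keeping the aggregation in awk rather than in Hive itself is a common choice when the heavy lifting is already done and the report only needs light reshaping.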
Web Application – Intella Sphere
Duration: Dec 2012 – Jun 2014
Environment:
Java, MySQL, MongoDB 2.4.6, Activiti Workflow, SVN
Designation: Java Developer
Project Description: The brand essence of Intella Sphere is direct, analytical and engaging. It is all about empowering businesses to gain the intelligence they need to grow and improve their brand in the new age of marketing. Intella Sphere is the ultimate marketing tool, giving a company the devices it needs to gain market share, beat the competition and get true results. Intella Sphere understands these challenges better than anyone and uses experience and innovation to create the right tools for a business to clearly understand its audience, empowering it to grow and engage with its community.
Responsibilities:
• Created DAOs for all database operations using the MongoDB API.
• Worked on the design phase of the application using the Visual Paradigm tool.
• Implemented social OAuth configuration.
• Used social APIs for the social networks (Facebook, Twitter, LinkedIn, Blogger, YouTube).
• Implemented Activiti workflow.
• Implemented aggregations for calculating metrics.
• Worked on a MongoDB replica set.
• Worked on development and production environments.