Expert Hadoop Developer with 4+ years of experience
R.HariKrishna
Krishnareddy.revuri@gmail.com 8892662994
PROFESSIONAL SUMMARY:
Over 4 years of professional experience in the IT industry, involved in software development and
implementation of web-based applications using MVC architecture. 2.5 years of experience in
Hadoop development using HDFS, MapReduce, Pig, Hive, and HBase, including designing, developing, and
deploying n-tiered, enterprise-level distributed applications.
• Good experience with MapReduce, Pig, and Hive; experienced in storing data in NoSQL
databases like MongoDB and HBase.
• Proficient in writing MapReduce jobs, Hive queries, and Pig scripts.
• Good knowledge of the MapReduce framework and HDFS architecture.
• Good knowledge of Hadoop cluster installation and configuration.
• Good knowledge of Hadoop shell commands.
• Extensively worked on Hive HQL queries.
• Strong interpersonal and analytical skills, with the ability to solve problems and cope easily with
changing business needs.
• Ability to meet deadlines and handle pressure while coordinating multiple tasks in the work
environment.
SKILLS PROFILE:
Operating System : UNIX, Windows.
Hadoop : HDFS, MapReduce, Hive, Pig, Kafka, Spark.
NoSQL Databases : HBase, MongoDB.
ML Technologies : Mahout, NLP, Python, data science packages.
Languages : Core Java, Scala, SQL.
J2EE Technologies : JDBC, Servlets, JSP.
Query Languages : Oracle SQL, PL/SQL.
Other Tools : Eclipse, Maven, sbt.
EDUCATION: B.Tech (CSE) from Jawaharlal Nehru Technological University, Hyderabad, A.P., in 2012.
WORK EXPERIENCE
Working as a Hadoop Developer at “CMS IT SERVICES PVT LTD”, Bangalore.
# Project1: Sensor Analytics
Client : AMWAY
Duration : July 2016 till date
Role : Team Member
Team Size : 9
Environment: HDFS, Flume, Kafka, Spark, Cassandra, Zeppelin, Scala.
Description:
Sensor Analytics is for a provider of Wi-Fi hotspot devices in public spaces, giving the ability to collect
data from these devices and analyse it. The existing system collects data and processes it in daily batches
to generate the required results. Because there were a lot of user-login failures, the business needed to
analyse why logins were dropping, with the ability to analyse the data in real time rather than in daily
batches, and a reporting mechanism to view the insights obtained from the analysis as they happen. In
simple terms, it can be called a real-time monitoring system.
Roles and Responsibilities :
• Set up the Hadoop cluster ecosystem (Hortonworks and Apache distributions).
• Worked on Flume to move large amounts of data from many different sources into a centralized
data store (HDFS).
• Worked on Kafka's distributed commit log to store the data.
• Worked on Spark Streaming with Scala programming to access live data.
• Maintained the NoSQL database using Cassandra.
• Monitored the results using the Zeppelin web-based notebook, which allows interactive data analysis.
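The per-event logic behind such a login-failure analysis can be sketched in plain Scala. This is a minimal illustration, not code from the actual project: the event format, field names, and the `LoginEvent`/`LoginAnalysis` names are assumptions.

```scala
// Sketch of the login-failure analysis performed in the streaming job.
// The CSV layout ("deviceId,userId,status") is an illustrative assumption.
case class LoginEvent(deviceId: String, userId: String, success: Boolean)

object LoginAnalysis {
  // Parse a raw record such as "ap-17,user42,FAIL" into an event.
  def parse(line: String): Option[LoginEvent] = line.split(",") match {
    case Array(dev, user, status) =>
      Some(LoginEvent(dev.trim, user.trim, status.trim.equalsIgnoreCase("OK")))
    case _ => None // malformed records are dropped, as the streaming job would do
  }

  // Login-failure rate per device: the kind of metric surfaced on a
  // Zeppelin dashboard to explain a drop in user logins.
  def failureRate(events: Seq[LoginEvent]): Map[String, Double] =
    events.groupBy(_.deviceId).map { case (dev, evs) =>
      dev -> evs.count(!_.success).toDouble / evs.size
    }
}
```

In the real pipeline this logic would run inside a Spark Streaming job consuming from Kafka, with the resulting rates written to Cassandra.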
# Project2: PRANTO TOOL
Client : AMWAY
Duration : Sep 2014 to July 2015
Role : Team Member
Team Size : 8
Environment: Hadoop, MapReduce, HBase, Flume, Spark, Scala, R.
Description:
Predictive Analytics is the art and science of using data to make better informed decisions.
Predictive analytics helps you uncover hidden patterns and relationships in your data that can help you
predict with greater confidence what may happen in the future, and provide you with valuable, actionable
insights for your organization.
• Predicting future outcomes from server logs for analytics.
• Performing analytics using machine learning algorithms.
Roles and Responsibilities :
• Set up the Hadoop cluster ecosystem (Hortonworks and Apache distributions).
• Worked on Flume to move large amounts of data from many different sources into a centralized
data store (HDFS).
• Wrote MapReduce jobs in Java to parse the logs stored in HDFS.
• Worked on Spark Streaming with Scala programming to access live data.
• Unit tested the components; maintained the code and components.
• Maintained the NoSQL database using HBase.
# Project3: PRODUCTION CREDIT REPORTING
Environment: Hadoop HDFS, MapReduce, Hive, Pig, Sqoop.
Description:
PCR is a production-credits calculation that captures banking information from regional systems.
This project focuses on getting data about banking transactions from different sources. The transactions
are then analyzed using a MapReduce program to find different user patterns. A visualization layer helps
users track their financial activity in many scenarios. The system captures the data through CSV files and
direct database pulls from approximately 60 different front-office systems and manual sources throughout
the world.
Roles and Responsibilities :
• Worked on setting up Hadoop, Pig, and Hive over multiple nodes.
• Wrote MapReduce jobs in Java to parse the logs stored in HDFS.
• Stored and retrieved data using HQL queries in Hive.
• Implemented Hive tables and HQL queries for the reports.
• Involved in database connectivity using Sqoop.
• Archived files into HAR files.
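The group-and-aggregate step of a production-credit calculation can be sketched in plain Scala. This is only an illustration of the technique, not the project's actual code: the record layout and the `Txn`/`CreditReport` names are assumptions.

```scala
// Sketch of the credit calculation the MapReduce job performs: group
// banking transactions by user and total their amounts (the reduce step).
// The CSV layout ("userId,amount") is an illustrative assumption.
case class Txn(userId: String, amount: Double)

object CreditReport {
  // Parse one line from a regional front-office CSV feed, e.g. "u42,120.50".
  def parse(line: String): Option[Txn] = line.split(",") match {
    case Array(user, amt) =>
      try Some(Txn(user.trim, amt.trim.toDouble))
      catch { case _: NumberFormatException => None }
    case _ => None // skip malformed rows from manual sources
  }

  // Total production credits per user across all feeds.
  def creditsPerUser(txns: Seq[Txn]): Map[String, Double] =
    txns.groupBy(_.userId).map { case (u, ts) => u -> ts.map(_.amount).sum }
}
```

In the actual project the grouping ran as a MapReduce job over HDFS, with Sqoop loading the source tables and Hive serving the report queries.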
PERSONAL PROFILE:
Father’s Name : R.Ganapathi Reddy
Date of Birth : 20th April 1990
Sex : Male
Marital Status : Single
Nationality : Indian
Languages Known : English, Telugu and Hindi
DECLARATION:
I hereby declare that all the above-mentioned information is true to the best of my
knowledge.
(R.HariKrishna)