DEEPESH REHI
Email: rehi.deepesh@gmail.com +91 9962 726 080
Professional Summary:
• IT services experience with an exclusive focus on Hadoop and Scala/Spark technologies, integrating applications for the BFSI and Communications domains at Cognizant Technology Solutions.
• Experience in designing and developing applications on Big Data technologies using the Hadoop framework with Spark, Hive, Scala, Sqoop, and Flume.
• Experience building Spark jobs in Scala with Maven.
• Involved in several POC developments to win new projects for the organization.
• Experience on projects in core publishing, banking, and telecommunications solutions.
• Experience working with the Hortonworks and Cloudera frameworks.
• Experience working in Unix environments with languages such as Java.
• Experience with SQL databases such as MySQL, as well as NoSQL databases such as HBase and Neo4j.
Technical Skills:
Big Data Technologies Hadoop, Spark, Hive, Sqoop, Flume, HBase, Neo4j
Programming Languages Scala, Core Java, SQL, Unix shell scripting
Tools and Utilities Cloudera, Hortonworks, Eclipse, MobaXterm, WinSCP, Solr, PuTTY
Databases Oracle, DB2, MySQL Server
Source Control Git
Operating Systems Windows XP/2000/7, Unix
Professional Work Experience:
Client: AAA Insurance May ’16 – Till Date
Role: Programmer Analyst
Designed a suite of novel algorithms using statistical and other techniques for fraud detection. The system builds a graphical representation of customer activity and can watch for first-party fraud, third-party fraud, and identity fraud using graph node-entity relationships. The bank can use this system to detect a variety of illegal activity, including money laundering, insider trading, front-running, intra-day manipulation, marking the close, and more. Fast detection allows the bank to protect itself from considerable losses.
Responsibilities:
• Created a staging environment in Hive.
• Set up Spark–Neo4j connectivity and used Spark to load data into Neo4j.
• Created nodes, relationships, and entities in Neo4j using Spark.
• Automated the whole process in Scala.
Environment: Hadoop, Spark, Scala, Neo4j, Hive
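The node- and relationship-loading step above could be sketched, in a much simplified form, as plain Scala that builds Cypher MERGE statements for a graph loader. The Customer/Claim model and the FILED relationship are illustrative assumptions, not the project's actual schema:

```scala
// Hypothetical sketch: turning a flat record (as a Spark job might hold per
// row) into an idempotent Cypher MERGE statement for Neo4j. The entity names
// and relationship type are assumed for illustration only.
case class Claim(claimId: String, customerId: String, amount: Double)

def toCypher(c: Claim): String =
  s"""MERGE (cust:Customer {id: '${c.customerId}'})
     |MERGE (cl:Claim {id: '${c.claimId}', amount: ${c.amount}})
     |MERGE (cust)-[:FILED]->(cl)""".stripMargin

// One statement per record; MERGE keeps reloads idempotent.
val stmt = toCypher(Claim("C100", "P42", 1250.0))
```

In practice such statements (or a Spark–Neo4j connector) would be executed against the Neo4j server in batches; the sketch only shows the per-record mapping.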
Project: Unified Data BI Sept ’15 – Feb ’16
Client: CapitalOne
Role: Programmer Analyst
Unified Data BI provides single-window access to multiple data sources by building a federated query engine and runtime environment. Defining a virtual database over various sources (currently RDBMS and flat files, with other non-RDBMS sources planned) enables a business user, or even a power user, to query federated data from a standalone or web application.
Responsibilities:
• Designed modules for query generation and translation using the Teiid Designer framework, integrating Spark to process the queries.
• Wrote Scala code integrated with Teiid.
• Assisted fellow teammates with the design of different modules.
• Wrote ScalaTest suites for the QueryTranslator, PlanGenerator, and PlanProcessor modules.
Environment: Hadoop, Java, Scala, Spark
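The query translation work above can be illustrated with a minimal, self-contained Scala sketch. The Plan types and toSql function here are hypothetical stand-ins for the Teiid-based PlanGenerator/QueryTranslator modules, not the project's actual code:

```scala
// Hypothetical sketch: a tiny logical plan and its translation to SQL,
// in the spirit of a federated query translator. All names are assumed.
sealed trait Plan
case class Scan(table: String) extends Plan
case class Filter(child: Plan, predicate: String) extends Plan

// Recursively render a plan tree as a SQL string.
def toSql(p: Plan): String = p match {
  case Scan(t)         => s"SELECT * FROM $t"
  case Filter(c, pred) => s"${toSql(c)} WHERE $pred"
}

val sql = toSql(Filter(Scan("accounts"), "balance > 0"))
```

A real translator would also handle projections, joins, and source-specific dialects; the sketch only shows the tree-to-text shape that such modules share.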
Project: Big Data COE May ’15 – Sept ’16
Role: Programmer Analyst
The Generic Framework moves data from different platforms (such as servers, databases, and FTP) into HDFS. Using Flume, Sqoop, FTP, and similar tools, we created a generic application through which data in any format can be placed in HDFS and used for further processing.
Responsibilities:
• Designed the Sqoop import module to load data into HDFS (at both the DB and query level).
• Designed modules for validation and testing of the Flume and Sqoop data imports.
• Assisted fellow teammates with the design of the logging framework using Log4j.
Environment: Hadoop, Flume, Sqoop, Shell Script, Java, Agile Methodology.
Project: nXg Device Analytics Aug ’14 – Apr ’15
Client: Cognizant – COE (Centre of Excellence)
Role: Programmer Analyst
This application performs analytics on set-top box log data. The data was available in an AWS S3 bucket and was ingested into the Hadoop cluster using Flume. The master data, which went through daily updates and new inserts, was managed through HBase; the transactional point-in-time data, with daily new increments, was managed through Hive. An R script then runs on the data for prediction, and the results are loaded into QlikView for reporting and analysis.
Responsibilities:
• Worked on Flume configuration to load data into HDFS from an Amazon S3 bucket.
• Integrated HBase with Hive as per the requirements.
• Populated the data in data models.
• Worked on base tables, data mart tables, and fact tables in Hive to meet the business requirements (for a star-schema data mart).
Environment: Hadoop, Hive, Java, HBase, Flume, S3, Oozie.
Scholastics
• B.Tech (CSE), JECRC UDML – Rajasthan Technical University, Jaipur, 2013
• 10+2, Gyan Vihar School, CBSE Board, 2009
• 10th, All Saints Church School, Rajasthan Board, 2007
Accolades & Achievements:
• Awarded Young Innovator at CTS in 2015.
• Consistently received appreciation from customers as well as senior team members.
• Consistently achieved the highest performance rating among team members.
• Part of the organizing committee of Sarvatra, the tech fest, during the 6th semester.
• North India Trinity Guitar Grade 3 Topper 2013.
• Won many guitar competitions.
Personal Details:
Father’s Name : Kamlesh Rehi
Permanent Address : 340, Krishna Bhawna, Chandi ki Taksal, Jaipur.
Temporary Address : D-140, West Patel Nagar, New Delhi
Declaration:
I hereby declare that the above-mentioned information is correct to the best of my knowledge, and I bear responsibility for the correctness of the above particulars.
Date:
Place: Gurgaon (Deepesh Rehi)