Debtanu_cv

Debtanu Chatterjee
Sri Kalki Chamber, Block B, Flat No. 105
Allwyn X Road, Madinaguda, Hyderabad 500050
Email: debtanu.good@gmail.com | Phone: 7501111325

OBJECTIVE

To secure a position in an organization that offers career growth and a chance to achieve goals through persistence and hard work, and to give my best in whatever I do toward the future development of the organization.

SKILLS

1. 2 years of work experience with web technologies: HTML, CSS, JavaScript, PHP, MySQL.
2. 2 years of work experience with the Hadoop ecosystem: HDFS, MapReduce, Hive, Pig, Sqoop, Flume, Oozie, HBase.
3. Attended training on Big Data technologies (Hadoop, Pig, Hive), with hands-on work on public data sets.
4. Experience using XML, JavaScript, JSON, Ajax, CSS, HTML, and PHP.
5. Good knowledge of the Hadoop ecosystem, HDFS, Java, and RDBMS (Oracle 11g).
6. Experienced in working with Big Data and the Hadoop Distributed File System (HDFS).
7. Hands-on experience with ecosystem tools: Hive, Pig, Sqoop, MapReduce, Flume, Oozie.
8. Good knowledge of Hadoop, Hive, and Hive's analytical functions.
9. Very good understanding of partitioning and bucketing concepts in Hive; designed both managed and external Hive tables to optimize performance.
10. Captured data from existing databases that provide SQL interfaces using Sqoop import.
11. Efficient in building Hive, Pig, and MapReduce scripts.
12. Experienced in managing and reviewing Hadoop log files.
13. Implemented proofs of concept on the Hadoop stack and different big data analytics tools, including migration from databases such as Oracle and MySQL to Hadoop.
14. Successfully loaded files into Hive and HDFS from MongoDB and HBase.
15. Loaded data sets into Hive for ETL operations.
16. Good knowledge of Hadoop cluster architecture and cluster monitoring.
17. Experience using ZooKeeper and Cloudera Manager.
18. Hands-on experience with IDE tools such as Eclipse.
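The Hive partitioning and bucketing mentioned in the skills above can be sketched in HiveQL. This is a minimal illustration, not a table from the actual projects: the table name, columns, HDFS path, and bucket count below are all hypothetical.

```sql
-- Hypothetical external Hive table, partitioned by date and bucketed by user id.
-- Partition pruning limits scans to the requested dates; bucketing helps joins and sampling.
CREATE EXTERNAL TABLE IF NOT EXISTS web_logs (
  user_id BIGINT,
  url     STRING,
  status  INT
)
PARTITIONED BY (log_date STRING)
CLUSTERED BY (user_id) INTO 16 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
LOCATION '/data/web_logs';

-- Only the named partition is read; other dates are skipped entirely.
SELECT status, COUNT(*)
FROM web_logs
WHERE log_date = '2016-01-01'
GROUP BY status;
```

An external table is the right choice here because dropping it removes only the metadata, leaving the underlying HDFS files intact.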
19. Experience using SequenceFile, RCFile, and Avro file formats.
20. Developed Oozie workflows for scheduling and orchestrating the ETL process.
21. Used cluster management tools such as Cloudera Manager and Apache Ambari.

PROFESSIONAL EXPERIENCE

- Worked at Clinzen Pvt. Ltd. as a UI developer and SQL developer.
- Currently working at Clinzen Pvt. Ltd., Hyderabad, as a Hadoop developer.

ACADEMIC PROFILE

- MCA (Master of Computer Applications) from WBUT with 78.50%.
- B.Sc. (Mathematics, Physics & Chemistry) from Calcutta University with 50.01%.

PROJECT EXPERIENCE

Project #1: School Management System
Responsibilities: DB design, coding, development, implementation.
Skills used: PHP, MySQL, JavaScript, HTML, CSS, Ajax.
Team size: 3 members
Description:
- Storing student and staff data.
- Generating mark sheets and yearly report cards for each student.
- Generating ID cards and unique IDs for students and staff.
- Full library management for the school.
- Keeping staff salary records and leave recommendations.
- Generating service books and pension books for each staff member.

Project #2
Description: Install raw Hadoop and NoSQL applications in cluster mode (3 nodes) and develop programs for sorting and analyzing data.
Responsibilities:
- Replaced the default Derby metastore for Hive with MySQL.
- Executed queries using Hive and developed MapReduce jobs to analyze data.
- Solved performance issues in Hive and Pig scripts with an understanding of joins, grouping, and aggregation, and how they translate to MapReduce jobs.
- Developed Pig Latin scripts to extract data from web server output files and load it into HDFS.
- Developed Pig UDFs to preprocess the data for analysis.
- Developed Hive queries for the analysts.
- Utilized the Apache Hadoop environment.
- Involved in loading data from Linux and UNIX file systems into HDFS.
- Supported setting up the QA environment and updating configurations for implementing scripts with Pig.
Environment: Core Java, Apache Hadoop, HDFS, Pig, Hive, shell scripting, MySQL, Linux.

AREAS OF EXPERTISE

- Big Data ecosystems: Hadoop, MapReduce, HDFS, HBase, ZooKeeper, Hive, Pig, Sqoop, Oozie
- Programming languages: Java, C, PHP
- Scripting languages: JavaScript, XML, HTML, Pig Latin
- Databases: NoSQL (HBase), Oracle 11g
- Server: Apache
- Tools: Eclipse
- Platforms: Windows, Linux
- Methodologies: UML
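The MapReduce jobs referenced in the project above follow the classic map/shuffle/reduce pattern. As a rough illustration of that flow (not actual Hadoop code, which would be written in Java against the Hadoop API), here is a pure-Python word-count sketch:

```python
from collections import defaultdict

def mapper(line):
    # Emit a (word, 1) pair for every word, as a Hadoop mapper would.
    for word in line.split():
        yield word.lower(), 1

def shuffle(pairs):
    # Group values by key, standing in for Hadoop's shuffle/sort phase.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reducer(key, values):
    # Sum the counts for a single key, as a Hadoop reducer would.
    return key, sum(values)

def word_count(lines):
    # Run the three phases in sequence over an in-memory "input split".
    pairs = [pair for line in lines for pair in mapper(line)]
    return dict(reducer(k, v) for k, v in shuffle(pairs).items())

print(word_count(["hive pig hive", "pig sqoop"]))
# {'hive': 2, 'pig': 2, 'sqoop': 1}
```

In real Hadoop the mapper and reducer run as distributed tasks over HDFS blocks and the framework performs the shuffle, but the per-key logic is the same.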
