This document contains the resume of Hassan Qureshi. He has over 9 years of experience as a Hadoop Lead Developer with expertise in technologies like Hadoop, HDFS, Hive, Pig and HBase. Currently he works as the technical lead of a data engineering team developing insights from data. He has extensive hands-on experience installing, configuring and maintaining Hadoop clusters in different environments.
Pankaj Kumar - Resume for Hadoop, Java, J2EE
Pankaj Kumar is seeking a challenging position utilizing his 7.9 years of experience in big data technologies like Hadoop, Java, and machine learning. He has deep expertise in technologies such as MapReduce, HDFS, Pig, Hive, HBase, MongoDB, and Spark. His experience includes successfully developing and delivering big data analytics solutions for healthcare, telecom, and other industries.
This document contains a summary of a candidate's experience and qualifications.
The candidate has over 2 years of experience in big data technologies like Hadoop, Hive, and Spark, and has worked on projects involving data ingestion, analytics, and building recommendation engines. They are seeking a position in big data or data science that provides opportunities for learning, growth, and career advancement.
Praveen Reddy Gajjala has over 2 years of experience as a Hadoop Developer at Wipro in Hyderabad. He has extensive skills in technologies like Java, Hadoop, Pig, Hive, HBase, and Sqoop. As part of a project for Sears, he helped analyze clickstream data from their websites and apps using Hadoop to improve the customer experience.
Vasudevan Venkatraman has over 11 years of experience working in the IT industry, including 7+ years of experience with Oracle PL/SQL, data warehousing, and 3+ years in performance consulting and applications database administration. He has experience designing and developing applications using Oracle PL/SQL, Hadoop, and big data technologies. Currently he works as an Assistant Consultant at TCS focusing on data warehousing projects using Oracle and Hadoop.
This document contains a summary of Renuga Veeraragavan's work experience and qualifications. It outlines 7 years of experience in IT with expertise in areas like Hadoop, Java, SQL, and web technologies. Specific roles are highlighted including current role as Hadoop Developer at Lowe's where responsibilities include data analysis, Hive queries, and HBase. Previous roles include Senior Java UI Developer at TD Bank and Accenture developing web applications. Educational background includes a B.E. in IT from Avinashilingam University.
Suresh Yadav is seeking a challenging position to improve his skills. He has a degree in Electronics and Communication Engineering from Malla Reddy Engineering College. He has technical skills in Hadoop, Java, SQL, HBase, Eclipse, Linux, and Windows. He has experience developing MapReduce programs and migrating data to HDFS. For a POC, he analyzed Twitter data using Hadoop, Hive, and Pig. He previously worked as a Technical Support Engineer at Krish Technologies, where he troubleshot computers and built marketing products using VPN technology.
Romy Khetan is a senior software engineer with over 3 years of experience in big data technologies like Elasticsearch, MongoDB, Hadoop, Spark, and Java. She has worked on multiple projects involving sentiment analysis, vertical search, and identifying relationships across social media data. Her roles have included backend development, designing plugins, APIs, and interfaces between applications and services. She is proficient in technologies such as Scala, Redis, RabbitMQ, and graph databases.
This resume summarizes Arbind Kumar Jha's experience working with big data technologies like Hadoop, Hive, Pig, and HBase. He has over 12 years of IT experience, including 1.5 years working with Hadoop. His current role is a Technical Architect Lead at HCL Technologies, where he works on architectures, designs, and develops solutions involving big data, NoSQL, Hadoop, and BIRT. His technical skills include programming languages like Java, databases like Oracle and SQL Server, and big data tools like Hadoop, Hive, Pig, Cassandra, and Flume.
Suresh Yadav is seeking a challenging position to improve his skills. He has a B.Tech in E.C.E from Malla Reddy Engineering College with 67% and expertise in Java, Hadoop, and Ubuntu. His one year of work experience was as a Technical Support Engineer at Krish Technologies, an IT services company. He completed an academic project on a traffic density analyzer and signal system using GSM technology. Suresh has strengths in learning new things, a positive attitude, and self-confidence.
Jayaram Parida - Big Data Architect and Technical Scrum Master
Jayaram Parida has over 19 years of experience in IT, including 3 years as a Big Data Technical Solution Architect. He has extensive skills in technologies like Hadoop, HDFS, HBase, Hive, MapReduce, Kafka, Storm, YARN, Pig, Python, and data analytics tools. He has experience architecting and developing big data solutions for clients in various industries. His roles have included designing Hadoop infrastructures, developing real-time analytics platforms, and creating visualizations and reports.
Abhinav is an ETL Hadoop developer with over 3 years of experience working with technologies like Hadoop, Informatica Power Centre, PL/SQL, and Unix. He has 1.5 years of experience on big data projects using Hadoop and technologies like Hive, Pig, HDFS, and Sqoop. He is proficient in Informatica for ETL development, data modeling, and data integration. Abhinav has also worked on projects involving data migration, requirement analysis, automation, and testing. He is looking for a role that offers learning opportunities while allowing him to utilize his skills.
Triveni Patro - Big Data / Hadoop Professional
Triveni Patro is currently working as a Hadoop admin at Tata Consultancy Services in India with over 4 years of experience in IT development, administration, implementation, and 24/7 support of Hortonworks Hadoop distribution. Some key responsibilities include supporting production clusters of 400+ nodes and troubleshooting Hadoop cluster issues. Previous experience includes working as a Hadoop developer on projects for clients like Comcast and automotive companies, developing MapReduce, Pig, and Hive scripts for data processing and report generation.
Kumaresan Kaliappan has over 14 years of experience as a software consultant specializing in middleware applications using BPM, SOA and J2EE architectures. He has hands-on experience in ecommerce, financial services, and manufacturing verticals. Currently he works as a Senior Programmer Analyst at CSC developing applications for Chrysler including a web RFQ system and tooling applications.
Abinash Bindhani - Senior Systems Engineer at Infosys with 2.4 years of experience in Big Data and Hadoop
Abinash Bindhani is seeking a position as a Hadoop developer where he can utilize over 2 years of experience with Hadoop and Java technologies. He currently works as a senior systems engineer at Infosys where he has gained experience migrating data from Oracle to Hadoop platforms and collecting/analyzing log data using tools like Flume, Pig, and Hive. His technical skills include MapReduce, HBase, HDFS, Java, Spring, MySQL, and Apache Tomcat. He has expertise in Hadoop architecture, cluster concepts, and each phase of the software development life cycle.
Soundarya Reddy has over 7 years of experience as a Java developer. She has extensive experience designing and developing web applications using technologies like Java, J2EE, Spring, Hibernate, and web services. She is proficient in all phases of the development lifecycle and has worked on projects for clients like IHG and the CDC. Her most recent role is as a Java developer for Intersect Group where she works on their application for IHG.
This document provides a summary of Manish Agrahari's career and qualifications. It outlines his 6 years of experience in IBM BPM development and Java/.Net, including designing and developing IBM BPM applications using features like BPDs, coaches, subprocesses, and integrating databases. It also lists his skills in technologies like IBM BPM, Eclipse, Java, SQL, and scripting languages. Recent projects are described, including developing insurance underwriting and claims management processes using IBM BPM.
This document provides a summary of a candidate's skills and experience working with large data sets and Hadoop technologies. The candidate has 1 year of experience developing, implementing, and deploying Big Data solutions using Hadoop technologies like HDFS, Hive, Pig, and Sqoop on Linux and Windows environments. They have extensive experience developing UDFs, UDAFs, UDTFs in Hive and Pig scripts for ETL processes. Additionally, the candidate has knowledge of Spark, Scala, Java/J2EE development, Linux, and databases. Their most recent role involved writing Pig and Hive queries, UDFs, and shell scripts to process and report on large data sets using Hadoop, Pig, Hive, Cass
Kishore Babu has over 7 years of experience in data analytics, business analytics, and project management. He is currently an Associate Business Analyst at GlobalLogic Technologies working on projects for Google. He leads a team that manages project dashboards, performs cost analysis, and automates reports using tools like Hive, Google SQL, and Dremel. Previously he has held roles as a Senior Lead and Lead at GlobalLogic where he mentored teammates and ensured project metrics were achieved.
Robin David is seeking a position that utilizes his 9+ years of experience in IT and 3+ years experience with Hadoop. He has extensive experience designing, implementing, and managing Hadoop clusters and data solutions. His experience includes building data lakes, ETL processes, data integration, and analytics solutions for clients across various industries. He is proficient in Hadoop ecosystem tools like Hive, Pig, Sqoop, Flume, and Spark and has expertise in Hadoop administration, performance tuning, and support.
This document is a CV that summarizes the qualifications and experience of Archana Jaiswal, an Indian national who speaks English and Hindi. She has over 15 years of experience conducting technical training in areas such as Hadoop, databases, Java/J2EE, and data structures. She has provided training to employees of various MNCs and conducted over 15,000 hours of training sessions. Her expertise includes Cloudera developer, Hortonworks data analyst, Oracle, SQL Server, MySQL, MongoDB, and she is proficient in Java/J2EE. She has worked as a trainer and subject matter expert for various companies such as Koenig Solutions, SEED InfoTech, and Global Talent Track.
Borja González is a Big Data Architect with over 5 years of experience in information technologies. He has expertise in Splunk, Big Data, cloud solutions, and other technologies. Currently, he leads a team using Splunk to analyze large volumes of data and create dashboards to monitor business metrics for a major natural gas company. Previously, he deployed a Hadoop cluster for a bank and worked as a software architect for Telefonica Movistar developing their website.
The document contains a profile summary for B. Sreenivasula Reddy including his contact information, 8.5 years of experience in Java and big data technologies, expertise in Hadoop ecosystem tools, databases like HBase and MongoDB, and frameworks like Spring. It also lists his educational qualifications and provides details of projects he has worked on involving technologies like Hadoop, Hive, Pig and frameworks like Java EE.
The document provides details about an experienced designer named Afsal K H. It outlines his skills, education, and work experience. For over 10 years, he has worked as a user interface designer for various companies including Wipro Infotech, CVK Mindsource Consulting, and PRHUB Integrated Marketing Communications. In these roles, he has been responsible for tasks such as creating visual concepts and designs, developing user interfaces, and ensuring accessibility compliance. He has expertise with tools like Photoshop, Dreamweaver, and languages like HTML, CSS, JavaScript.
I have 6 years of experience in Java programming and have been involved in building Java web applications that scale on cloud infrastructure. I have good experience with RDBMS databases and the NoSQL database MongoDB, good knowledge of REST API design, and have worked in the Business Intelligence and Financial domains.
This document provides a summary of B. Sreenivasula Reddy's professional profile. It outlines his 8.5 years of experience in Java, J2EE, and Big Data technologies including Hadoop, HDFS, MapReduce, Pig, Hive, HBase, ZooKeeper, Oozie, and Flume. It also lists his educational qualifications of an MCA and BSc, and details several projects he has worked on utilizing these technologies for clients such as Wipro, Syntel, HP, and BlueStar.
Chandan Das is a developer/designer with over 7 years of experience in IT implementation projects using technologies like Teradata, Oracle, Hadoop, Pig, Hive, and Sqoop. He has extensive experience in data warehousing, ETL, and database administration. His career objective is to obtain a productive role in an IT organization where he can implement his expertise in developing complex projects efficiently and meeting expectations. He provides details of his professional experience, technical skills, key achievements and completed projects.
Pradeep G P is a Senior SQL Database Administrator with over 6 years of experience in handling SQL databases. He has extensive experience designing, maintaining, and tuning SQL Server databases. He has worked on databases ranging from SQL Server 2000 to SQL Server 2012 in both clustered and non-clustered environments. Pradeep is proficient in database administration tasks including backups, security management, performance tuning and high availability solutions.
Mukul Upadhyay is seeking a position in Big Data technology with an IT company. He has over 5 years of experience developing Hadoop applications and working with technologies like MapReduce, Hive, HBase, HDFS, and Sqoop. Some of his responsibilities include architecting Big Data platforms, developing custom MapReduce jobs, importing and exporting data between HDFS and relational databases, and tuning and monitoring Hadoop clusters. He has worked on projects for clients in the USA and India involving building Hadoop-based analytics platforms and processing terabytes of device log data.
• Capable of processing large sets of structured, semi-structured, and unstructured data and supporting system architecture.
• Implemented proofs of concept on the Hadoop stack and various big data analytics tools, including migration from different databases to Hadoop.
• Developed multiple MapReduce jobs in Java for data cleaning and pre-processing according to business requirements; imported and exported data into HDFS and Hive using Sqoop.
• Experienced in writing Hive queries and Pig scripts.
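The Sqoop import-and-export work described in the bullets above typically takes the shape of a single command-line invocation. The sketch below shows a typical Sqoop import of a relational table into Hive; the host, database, table, and user names are hypothetical, and the exact flags available depend on the Sqoop version in use.

```
# Hedged sketch: import a MySQL table into a Hive table with 4 parallel mappers.
# All connection details (db-host, sales, etl_user, orders) are illustrative only.
sqoop import \
  --connect jdbc:mysql://db-host/sales \
  --username etl_user \
  --table orders \
  --hive-import \
  --hive-table orders \
  --num-mappers 4
```

The reverse direction (`sqoop export`) writes HDFS data back into a relational table, which is how results computed in Hive commonly reach downstream reporting databases.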
Shiv Shakti is seeking a position utilizing his 4+ years of experience in Hadoop development and automation projects. He has expertise in handling structured and unstructured data using tools like Python, HiveQL, Sqoop, Oozie, PigLatin, Impala, HDFS. His technical skills include Hadoop and its components, ShellScript, Python, SQL, and Java. He has worked on several projects for Accenture involving data ingestion, processing, analysis and automation using Hadoop technologies.
Prafulla Kumar Dash has over 5 years of experience in Hadoop development. He has worked on projects involving loading data from various sources into HDFS, performing ETL using Hive and Pig, and generating reports. Currently he is working on a bank reconciliation project involving matching data between systems using Spark and Scala.
Prashanth Shankar Kumar has over 8 years of experience in data analytics, Hadoop, Teradata, and mainframes. He currently works as a Hadoop Developer/Tech Lead at Bank of America where he develops Hive queries, Impala queries, MapReduce programs, and Oozie workflows. Previously he worked as a Hadoop Developer at State Farm Insurance where he installed and managed Hadoop clusters and developed solutions using Hive, Pig, Sqoop, and HBase. He has expertise in Teradata, SQL, Java, Linux, and agile methodologies.
This document provides an overview of Hadoop and Big Data. It begins with introducing key concepts like structured, semi-structured, and unstructured data. It then discusses the growth of data and need for Big Data solutions. The core components of Hadoop like HDFS and MapReduce are explained at a high level. The document also covers Hadoop architecture, installation, and developing a basic MapReduce program.
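The basic MapReduce program mentioned above follows a fixed map-shuffle-reduce data flow. Hadoop itself runs mappers and reducers as distributed Java tasks across a cluster; the pure-Python word count below is only a single-machine sketch of that flow, intended to show the shape of each phase.

```python
from collections import defaultdict

def map_phase(document):
    """Mapper: emit a (word, 1) pair for every word in the input split."""
    for word in document.split():
        yield (word.lower(), 1)

def shuffle_phase(mapped_pairs):
    """Shuffle: group all values by key, as Hadoop does between phases."""
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reducer: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# Two "input splits", as if stored in separate HDFS blocks.
splits = ["big data needs big storage", "hadoop stores big data"]
mapped = [pair for split in splits for pair in map_phase(split)]
counts = reduce_phase(shuffle_phase(mapped))
print(counts["big"])  # "big" appears three times across both splits
```

Each split is mapped independently, which is what lets Hadoop parallelize the map phase across the nodes holding the data blocks.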
Enough talking about Big Data and Hadoop; let's see how Hadoop works in action.
We will locate a real dataset, ingest it into our cluster, connect it to a database, apply some queries and data transformations to it, save our result, and present it via a BI tool.
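On a real cluster this walkthrough would use tools like Sqoop or Flume for ingestion, Hive for the queries, and a BI dashboard for presentation. As a hedged single-machine analogue, the sketch below runs the same ingest-transform-save shape against an in-memory SQLite database; the dataset, table, and column names are illustrative only.

```python
import sqlite3

# Ingest: load a small "raw" dataset into a table
# (on a cluster this step would be Sqoop or Flume into HDFS/Hive).
rows = [("2023-01-01", "sensor-a", 21.5),
        ("2023-01-01", "sensor-b", 19.0),
        ("2023-01-02", "sensor-a", 22.5)]
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (day TEXT, device TEXT, temp REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", rows)

# Transform: an aggregate query of the kind one would write in HiveQL.
result = conn.execute(
    "SELECT device, AVG(temp) FROM readings GROUP BY device ORDER BY device"
).fetchall()

# Save: persist the result for a downstream report or BI tool.
averages = {device: avg for device, avg in result}
print(averages)
```

The query itself is deliberately close to what the equivalent HiveQL would look like, since HiveQL is SQL-like by design.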
The document discusses big data and Hadoop. It describes the three V's of big data: variety, volume, and velocity. It also discusses Hadoop components like HDFS, MapReduce, Pig, Hive, and YARN. Hadoop is a framework for storing and processing large datasets in a distributed computing environment, enabling all types of data to be stored and used at scale on commodity hardware.
Prafulla Kumar Dash has over 6 years of experience in Hadoop development and administration. He has worked on various projects involving data ingestion from multiple sources into Hadoop, building Hive data warehouses, writing Pig and Spark programs, and developing ETL processes. Currently he is working as a Hadoop developer for IDFC Bank, where he has set up Hadoop clusters and develops reconciliation jobs.
This document contains the resume of Himabindu Y. summarizing their professional experience and skills. They have over 3 years of experience in application development using Java and big data technologies like Hadoop. Their most recent role was as a Software Engineer at Prokarma Softtech since 2013 where they worked on projects involving Hadoop, Pig, Hive, Sqoop and machine learning. They have also worked on projects involving web services, file processing and cryptographic operations. Himabindu holds a B.Tech in Computer Science and Engineering and is proficient in technologies like Java, Hadoop, Oracle SQL and Linux.
This document provides a detailed summary of Poorna Chandra Rao Kommana's professional experience and technical skills. It outlines his 8 years of experience in big data technologies including Hadoop, Hive, Pig, Spark, Kafka and AWS services. It details his roles and responsibilities in building scalable big data solutions, developing ETL pipelines, performing data analysis, and optimizing performance. His skills include Java, Python, SQL, Pig Latin, HiveQL, and tools like Eclipse and PyCharm.
Prabhakar has over 3 years of experience in IT with a focus on Big Data technologies like Hadoop, HDFS, MapReduce, Hive, Pig and HBase. He has worked as a BigData Developer at Tata Consultancy Services since 2013 and previously as a Java Developer. His skills include Java, SQL, Pig Latin, Hadoop ecosystem tools and Linux. He has participated in two projects involving log analytics on large datasets and developing a food delivery application.
Shiv Shakti has over 4 years of experience in Hadoop development and automation projects. He has expertise in handling structured and unstructured data using tools like Hive, Pig, Sqoop, HDFS, and Oozie. He is proficient in frameworks like Hadoop, Hive, and Pig and languages like Shell scripting, Python, SQL, and Java. He has worked on projects involving log analysis, data processing, and dashboard creation for clients in the banking industry. His roles included developing MapReduce functions, writing Hive queries, creating Pig scripts, and automating workflows with Oozie.
Overview of Big data, Hadoop and Microsoft BI - version 1, by Thanh Nguyen
Big Data and advanced analytics are critical topics for executives today. But many still aren't sure how to turn that promise into value. This presentation provides an overview of 16 examples and use cases that lay out the different ways companies have approached the issue and found value: everything from pricing flexibility to customer preference management to credit risk analysis to fraud protection and discount targeting.
Overview of big data & hadoop version 1 - Tony Nguyen, Thanh Nguyen
Big Data and Hadoop are emerging topics in data warehousing for many executives, BI practices and technologists today. However, many people still aren't sure how Big Data and existing Data warehouse can be married and turn that promise into value. This presentation provides an overview of Big Data technology and how Big Data can fit to the current BI/data warehousing context.
http://www.quantumit.com.au
http://www.evisional.com
Atul Mithe is an Indian citizen born in 1991 with experience in Big Data technologies like Hadoop, HDFS, MapReduce, Hive, Pig, HBase and Flume. He has over 2 years of work experience as a Software Engineer and Big Data Developer. He is currently working on a project to ingest data from various sources into a central data lake for analytics. His skills include requirement gathering, data processing using Pig Latin, developing and querying Hive tables, and automating processes with tools like Control-M. He has a BE in Information Technology.
Big data refers to large amounts of data from various sources that is analyzed to solve problems. It is characterized by volume, velocity, and variety. Hadoop is an open source framework used to store and process big data across clusters of computers. Key components of Hadoop include HDFS for storage, MapReduce for processing, and HIVE for querying. Other tools like Pig and HBase provide additional functionality. Together these tools provide a scalable infrastructure to handle the volume, speed, and complexity of big data.
Hassan Qureshi
Hadoop Lead Developer
hhquresh@gmail.com
PROFESSIONAL SUMMARY:
Certified Java programmer with 9+ years of extensive IT experience, including several years in
Big Data related technologies.
Currently a researcher, developer, and technical lead of a data engineering team that works
with data scientists to develop insights
Good exposure to production-environment processes such as change management, incident
management, and managing escalations
Hands-on experience with major components in the Hadoop ecosystem including Hive, HBase,
HBase-Hive integration, Pig, Sqoop, and Flume, plus knowledge of the Mapper/Reducer/HDFS
framework.
Hands-on experience in the installation, configuration, maintenance, monitoring, performance
tuning, and troubleshooting of Hadoop clusters in different environments such as development,
test, and production clusters
Defined file system layout and data set permissions
Monitored local file system disk space usage and log files, cleaning log files with automated scripts
Extensive knowledge of front-end technologies like HTML, CSS, and JavaScript.
Good working knowledge of OOA and OOD using UML, and of designing use cases.
Good communication skills, strong work ethic, and the ability to work efficiently in a team, with
good leadership skills.
TECHNICAL SKILLS:
Big Data: Hadoop, HDFS, MapReduce, Hive, Sqoop, Pig, HBase, MongoDB, Flume, ZooKeeper, Oozie
Operating Systems: Windows, Ubuntu, Red Hat Linux, UNIX
Java Technologies: Java, J2EE, JDBC, JavaScript, SQL, PL/SQL
Programming/Scripting Languages: Java, SQL, UNIX Shell Scripting, C, Python
Databases: MS-SQL, MySQL, Oracle, MS-Access
Middleware: WebSphere, TIBCO
IDEs & Utilities: Eclipse, JCreator, NetBeans
Protocols: TCP/IP, HTTP, HTTPS
Testing: Quality Center, WinRunner, LoadRunner, QTP
Frameworks: Hadoop, PySpark, Cassandra
PROFESSIONAL EXPERIENCE
Fortinet, New York, NY Nov 2013 – Present
Hadoop Lead
Project Details:
Fortinet is a company that provides networking software, including IP communicator phone
software. The IP enhancement project is a post-sales project. All Fortinet devices, especially
routers and switches, send XML files to a centralized server on a daily basis. The XML files
contain information about their location and the activities performed during the day. All XML
files need to be processed to derive device health and usage; XML files make up 20 percent of
the overall data, and the remainder comes from RDBMS sources and flat files.
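As a rough illustration of the XML processing described above, the following Python sketch parses a single device report and extracts health and usage fields. The XML layout, tag names, and values are invented for illustration; the actual Fortinet feed format is not shown in this resume.

```python
# Hypothetical device report; the real schema is an assumption for this sketch.
import xml.etree.ElementTree as ET

SAMPLE = """
<device id="rtr-001" type="router">
  <location>New York</location>
  <activity>
    <uptimeHours>23.5</uptimeHours>
    <packetsForwarded>120000</packetsForwarded>
  </activity>
</device>
"""

def parse_device_report(xml_text):
    """Return a flat dict of device health/usage fields from one XML report."""
    root = ET.fromstring(xml_text)
    return {
        "device_id": root.get("id"),
        "device_type": root.get("type"),
        "location": root.findtext("location"),
        "uptime_hours": float(root.findtext("activity/uptimeHours")),
        "packets_forwarded": int(root.findtext("activity/packetsForwarded")),
    }

record = parse_device_report(SAMPLE)
print(record["device_id"], record["uptime_hours"])  # rtr-001 23.5
```

In the actual pipeline, logic like this would sit inside a MapReduce mapper so that thousands of daily files can be parsed in parallel across the cluster.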
Frameworks and tools used:
HDP 2.3 distribution for the development cluster
Hadoop ecosystem components Hive and MapReduce to process data
Contribution:
Writing MapReduce jobs for processing XMLs and flat files
Provided production support for cluster maintenance
Commissioned and decommissioned nodes as needed
The cluster had 10 nodes running Hortonworks Data Platform, with 550 GB RAM, 10 TB of SSD
storage, and 8 cores
Star schema was designed with fact tables and dimension tables
Worked on analyzing the Hadoop stack and different big data analytic tools, including Pig,
Hive, the HBase database, and Sqoop
Conducted training sessions for new joiners to the project
Triggered workflows based on time or availability of data using the Oozie Coordinator
Engine
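A time- and data-triggered Oozie coordinator of the kind mentioned above can be sketched as follows; the application paths, frequency, and dataset names are illustrative assumptions, not the project's actual configuration:

```xml
<!-- Illustrative Oozie coordinator: runs a workflow daily, but only once the
     day's input dataset instance has landed in HDFS (all paths hypothetical). -->
<coordinator-app name="xml-ingest-coord" frequency="${coord:days(1)}"
                 start="2014-01-01T00:00Z" end="2015-01-01T00:00Z"
                 timezone="UTC" xmlns="uri:oozie:coordinator:0.4">
  <datasets>
    <dataset name="device_xml" frequency="${coord:days(1)}"
             initial-instance="2014-01-01T00:00Z" timezone="UTC">
      <uri-template>hdfs:///data/devices/${YEAR}${MONTH}${DAY}</uri-template>
    </dataset>
  </datasets>
  <input-events>
    <data-in name="input" dataset="device_xml">
      <instance>${coord:current(0)}</instance>
    </data-in>
  </input-events>
  <action>
    <workflow>
      <app-path>hdfs:///apps/xml-ingest-wf</app-path>
    </workflow>
  </action>
</coordinator-app>
```

The input-event makes the coordinator wait for data availability rather than firing on time alone, which matches the "time or availability of data" behavior described above.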
Emerson Climate Technologies, Louisville, KY Jan 2011 – Oct 2013
Hadoop Lead
Project Details:
Emerson Climate provides efficient HVAC systems for buildings to minimize the cost of energy
used by HVAC systems. The project is an HVAC controller used to control HVAC speed based on
floor occupancy, determined by analyzing employees' access card data; HVAC systems otherwise
run at constant speeds irrespective of building occupancy. We developed a project in which we
collect data on our own employees across many locations at a 10-minute frequency and produce
recommended HVAC speeds depending on floor occupancy. We used MapReduce and Spark to clean
and format the data; jobs run at a 10-minute frequency using crontab to generate the HVAC
controller report.
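The cleansing and formatting step described above can be illustrated with a simplified, pure-Python stand-in. The real jobs ran as MapReduce/PySpark on the cluster; the record layout and field names below are assumptions made for the example.

```python
# Simplified stand-in for the access-card cleansing job (record layout assumed).
from datetime import datetime

def clean_access_records(raw_rows):
    """Drop malformed rows and normalize (badge_id, floor, timestamp) records."""
    cleaned = []
    for row in raw_rows:
        parts = [p.strip() for p in row.split(",")]
        if len(parts) != 3:
            continue  # skip malformed rows
        badge_id, floor, ts = parts
        try:
            when = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        except ValueError:
            continue  # skip rows with unparseable timestamps
        cleaned.append({"badge_id": badge_id, "floor": int(floor), "time": when})
    return cleaned

rows = [
    "E100, 3, 2013-05-01 09:10",
    "garbage-line",
    "E101, 3, 2013-05-01 09:20",
]
occupancy = len(clean_access_records(rows))
print(occupancy)  # 2 valid records
```

A crontab entry such as `*/10 * * * * /opt/jobs/run_hvac_report.sh` (path hypothetical) is the usual way to schedule such a job every 10 minutes, as described above.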
Frameworks and tools used:
HDP 2.0 distribution for the development cluster
All datasets were loaded from two different sources, Oracle and MySQL, into HDFS and Hive
respectively on a daily basis
On average, the data warehouse received about 80 GB of data daily
We used a 12-node cluster to process the data
Involved in loading data from the UNIX file system to HDFS
Used Hadoop ecosystem components Hive, MapReduce, and PySpark to process data
Implemented the Capacity Scheduler to share cluster resources, and performed Hadoop
admin responsibilities as needed
Writing MapReduce and PySpark jobs for cleansing data and applying algorithms
The Cassandra database was used to transfer query data to Hadoop HDFS
Designed scalable big data cluster solutions
Monitored job status through email received from cluster health monitoring tools
Responsible for managing data coming from different sources.
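A daily relational-to-HDFS load like the Oracle/MySQL ingestion listed above is typically expressed as a Sqoop import; the connection string, table, and target directory below are placeholder assumptions, not the project's actual values:

```shell
# Illustrative Sqoop import for a daily Oracle -> HDFS load (all names hypothetical).
sqoop import \
  --connect jdbc:oracle:thin:@//dbhost:1521/ORCL \
  --username etl_user -P \
  --table ACCESS_EVENTS \
  --target-dir /data/access_events/$(date +%Y%m%d) \
  --num-mappers 4
```

The `--num-mappers` flag controls how many parallel map tasks split the table, which is how Sqoop scales the daily pull across the cluster.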
BMO Harris Bank, Buffalo Grove, IL Aug 2010 – Dec 2011
Hadoop Lead
BMO is a financial services organization that helps customers with their financial needs, including
credit cards, banking, and loans.
The BMO Credit Card project was designed to extract raw data from different sources into the Hadoop
ecosystem to create and populate the necessary Hive tables. The main aim of the project is to centralize
the source of data for report generation using a historical database; these reports are otherwise
generated from multiple sources.
Responsibilities:
Worked on importing and exporting data into HDFS in the financial sector
Involved as a team in reviewing functional and non-functional requirements for writing
debit processing at the Atlanta location.
Implemented Oozie workflows to perform Ingestion & Merging of data in the MapReduce
jobs for credit card fraud detection.
Extracted files from the Cassandra database through Sqoop, placed them in HDFS, and processed them.
Hands-on experience in creating Hive tables, loading them with data, and writing Hive queries,
which run internally as MapReduce jobs, to administer transactions.
Developed a custom file system plug-in for Hadoop so it can access files on the Data Platform.
This plug-in allows Hadoop MapReduce programs, HBase, Pig, and Hive to work unmodified
and access files directly.
Expertise in server-side and J2EE technologies including Java, J2SE, JSP, Servlets, XML,
Hibernate, Struts, Struts2, JDBC, and JavaScript development.
Designed the GUI using Model-View architecture (Struts framework).
Extracted feeds from social media sites such as Facebook and Twitter using Python scripts.
Environment: Hadoop 1.x, Hive, Pig, HBase, Sqoop, Flume, Spring, jQuery, Java, J2EE, HTML,
JavaScript, Hibernate
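The Hive table creation, loading, and querying described in the responsibilities above can be sketched in HiveQL; the table name, columns, and partitioning below are illustrative assumptions, not the project's actual schema:

```sql
-- Illustrative HiveQL only; table and column names are assumptions.
CREATE TABLE IF NOT EXISTS card_transactions (
  txn_id     STRING,
  account_id STRING,
  amount     DECIMAL(12,2),
  merchant   STRING
)
PARTITIONED BY (txn_date STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE;

LOAD DATA INPATH '/data/txns/2011-09-01'
  INTO TABLE card_transactions PARTITION (txn_date = '2011-09-01');

-- On Hadoop 1.x, an aggregation like this compiles down to MapReduce jobs.
SELECT account_id, SUM(amount) AS daily_spend
FROM card_transactions
WHERE txn_date = '2011-09-01'
GROUP BY account_id;
```

Partitioning by date keeps each daily load in its own HDFS directory, so queries filtered on `txn_date` scan only the relevant partition.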
Lowe’s, Mooresville, NC Feb 2005 – July 2010
Sr. Java Developer
Lowes.com redesign and assembly/haul-away. Worked with different services based on Service-
Oriented Architecture (SOA), as well as standalone projects that use UNIX shell scripts to
execute Java programs. Some of the REST and SOAP services include: catalog service, façade
service, pricing service, purchase history service, mylowes service, and seo-redirect service.
Also worked on a REST API automation project using the Rest-Assured framework. Involved in
updating small batch-script-based Java projects that produce CSV, Excel, and txt files and send
those files through SFTP or email.
Responsibilities:
Developed new DAO methods using Hibernate 4.3 as the ORM for the application.
Used a DOM parser to parse XML 1.1 data from files.
Used JAXB 2.0 annotations to convert Java objects to/from XML 1.1 files.
Created a SOAP 1.2 web service and generated its WSDL 2.0.
Created a web service client and invoked the web service using the client.
Developed a REST-based service that reads a JSON file and passes it as an argument
to the controller, which handles the multiple HTML 5.1 UI files.
Used the Struts MVC framework for user authentication, with a Ping Federate server for single
sign-on (SSO)
Used SAML so that signing into the system for one service grants access to many services
Involved in coding the front end using Swing, HTML, JSP, JSF, and the Struts framework
Design and development of Spring service classes and JSF pages
Involved in all software development lifecycle phases: development, unit testing, regression
testing, performance testing, and deployment
Responsible for developing, configuring, or modifying REST and SOAP web services using
technologies like JAX-RS, JAX-WS, Jersey, and Spring MVC
Used Spring JDBC as the data layer to query the DB2 and Cassandra databases
Worked on UNIX batch applications that generate product feeds and XML files
Worked on REST API automation using Rest-Assured and the TestNG framework
Participated in scrum meetings, daily stand-ups, and grooming sessions
Used technologies like Spring, REST, JAX-RS, Jersey, JSON, JUnit, TestNG, Mockito, EasyMock,
Rest-Assured, Ehcache, Maven, DB2, JDBC, batch scripting, WebSphere Commerce, and WebSphere
Environment: Java, J2EE, JSP, ExtJS, Servlets, Struts, JDBC, JavaScript, Liferay, Google Web
Toolkit, Spring, EJB (SSB, MDB), Ajax, WebSphere 6.1
Education Details: BE in IT, Guelph University, 2005