• Capable of processing large sets of structured, semi-structured and unstructured data, and of supporting the system architecture.
• Implemented proofs of concept on the Hadoop stack and different big data analytics tools, including migration from different databases to Hadoop.
• Developed multiple MapReduce jobs in Java for data cleaning and pre-processing according to business requirements; imported and exported data into HDFS and Hive using Sqoop.
• Experienced in writing Hive queries and Pig scripts.
Shiv Shakti Resume
Shiv Shakti
shivshakti070@gmail.com
Bangalore | Cell: +91-9036284261
Hadoop Developer at Accenture Pvt. Ltd.
Job Objective:
Seeking exposure to Hadoop ecosystem and automation projects where my skills and work experience can be utilized to the fullest. I want to work with committed and dedicated people and grow in and with the organization.
Summary:
4 years and 8 months of overall experience in Hadoop development and automation projects.
2 years and 3 months of relevant experience as a big data professional with expertise in handling structured and unstructured data using Python, HiveQL, Sqoop, Oozie, Pig Latin, Impala, HDFS and other Hadoop ecosystem components.
Sound experience with HDFS, MapReduce, and Hadoop ecosystem components such as Hive, Pig, Sqoop, Oozie, Flume, Impala and MySQL databases.
Knowledge of automation projects using shell script, Python, MySQL and Core Java.
Ability to play a key role in the team in technical implementation as well as cross-team communication.
Provided training to IT employees on big data, Hadoop and the Hadoop ecosystem through an external vendor (21st Century Software Solutions).
Technical Proficiency:
Framework: Hadoop and its components (Hive, Pig, Oozie, Sqoop, Flume, HBase, Impala)
Languages: Shell script, Python, SQL, HQL, Pig Latin, Core Java, basics of HTML, Impala
Databases: MySQL
Platforms: Unix/Linux, Windows
Professional Experience:
Hadoop Developer
Accenture
Mar '14 – present (2 years and 3 months)
Project #1
Project Name: R&F IT Platform Service Big Data
Client: Credit Suisse
Domain: Investment Banking
Duration: March 2014 to date
Environment & Tools: Hadoop, Hive, Pig, Oozie, Sqoop, Python, Impala, shell script, HDFS concepts
Team Size: Four
Roles & Responsibilities:
Gathering logs from several systems and storing (ingesting) them into HDFS.
Importing data from external databases to HDFS using Sqoop.
Using Python and shell scripts to pre-process the data.
Writing MapReduce functions to retrieve different outcomes from the logs (data).
Creating Hive partitioned tables to store the processed results in a tabular (structured) format.
Writing HQL queries to analyze and process data stored in Hive tables.
Writing Impala queries to get insights from data stored in Hive tables.
Writing various Apache Pig scripts to process the data stored in HDFS.
Developing Sqoop scripts and jobs to import/export data between Hadoop (HDFS) and MySQL databases.
Creating Oozie workflows to automate several Hadoop processes (Hive, Sqoop, MapReduce, Pig, sending emails) as one run.
Building a mechanism to handle errors in Hadoop job workflows using Oozie.
Using MapReduce (Python) to develop Hadoop applications and jobs.
Setting up pseudo/multi-node Hadoop clusters; installing and configuring the Hadoop ecosystem: HDFS, Pig, Hive, Sqoop, HBase and Impala. (A sketch of this ingest-and-query cycle is shown after this list.)
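
The following is a minimal sketch of the Sqoop/streaming/Hive cycle described above, assuming hypothetical hosts, tables and paths (dbhost, appdb, app_logs, mapper.py and reducer.py are illustrative placeholders, not the project's actual artifacts):

#!/bin/bash
# Sketch only: host names, database/table names and HDFS paths are assumed.

# 1. Import a source table from MySQL into HDFS with Sqoop.
sqoop import \
  --connect jdbc:mysql://dbhost:3306/appdb \
  --username etl_user -P \
  --table app_logs \
  --target-dir /data/raw/app_logs \
  --num-mappers 4

# 2. Pre-process the raw data with a Python MapReduce job via Hadoop Streaming.
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-streaming.jar \
  -input /data/raw/app_logs \
  -output /data/processed/app_logs/2015-06-01 \
  -mapper mapper.py \
  -reducer reducer.py \
  -file mapper.py -file reducer.py

# 3. Store the processed results in a partitioned Hive table for HQL/Impala queries.
hive -e "
CREATE TABLE IF NOT EXISTS app_logs_proc (host STRING, message STRING)
PARTITIONED BY (log_date STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';

LOAD DATA INPATH '/data/processed/app_logs/2015-06-01'
INTO TABLE app_logs_proc PARTITION (log_date='2015-06-01');"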
Project Description: The Platform Services Big Data team is responsible for analyzing big data requirements from various applications (ClusterNet/MarsNet/MET/Basel2/Basel3), processing all the data (logs/RDBMS tables) and coming to conclusions that help management fix long-term issues related to the system/environment; this includes creating Oozie workflows that chain several Hadoop components together to run as one automated process.
Project #2
Project Name: Reg-IT Big Data
Client: Credit Suisse
Domain: Investment Banking
Duration: From February 2015 to present
Environment & Tools: Sqoop, Hive, Pig, Impala, HDFS, MapReduce, Oozie
Team Size: Three
Roles & Responsibilities:
Ingesting data from an Oracle database to HDFS using Sqoop (incremental import).
Creating temporary Hive tables to store data for pre-processing.
Creating external Hive tables to store pre-processed data.
Writing various Hive queries to get insights from the data.
Writing various Impala queries to process and analyze the data.
Exporting processed data into the Oracle database using Sqoop export.
Automating the whole big data process using Oozie (see the sketch after this list).
Handling Oozie error mails and resolving issues related to the whole Hadoop process.
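
A hedged sketch of the incremental import/export cycle, assuming placeholder connection strings, tables and check columns (orahost, TRADES, TRADE_ID and job.properties are illustrative, not the actual project values):

#!/bin/bash
# Sketch only: Oracle host, schema/table names and HDFS paths are assumed.

# Incremental append import: pull only rows whose TRADE_ID exceeds the
# last value recorded by the previous run.
sqoop import \
  --connect jdbc:oracle:thin:@orahost:1521:ORCL \
  --username etl_user -P \
  --table TRADES \
  --target-dir /data/staging/trades \
  --incremental append \
  --check-column TRADE_ID \
  --last-value 1000000

# Export the processed result set back to Oracle.
sqoop export \
  --connect jdbc:oracle:thin:@orahost:1521:ORCL \
  --username etl_user -P \
  --table TRADES_SUMMARY \
  --export-dir /data/processed/trades_summary

# Launch the Oozie workflow that chains the import, Hive/Impala processing
# and export steps, with error notifications by mail.
oozie job -oozie http://oozie-host:11000/oozie -config job.properties -run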
Project Description: The client wanted to migrate their data from an Oracle database to Hadoop to take most of the advantages of that data: analyzing it with the help of the Hadoop ecosystem (Pig, Hive, Impala), benefiting from Hadoop features like parallel processing of applications and distributed storage (HDFS), and extracting several insights from the data with HQL or Pig Latin.
Project #3
Project Name: Social Media Dashboard
Client: Credit Suisse
Domain: Investment Banking
Duration: From February 2015 to December 2015
Environment & Tools: Python, HQL, shell scripting, Hadoop framework, Flume, HBase
Team Size: Four
Roles & Responsibilities:
Reading text files extracted from social media channels into CSV format using Python in the Hadoop environment.
Writing Python code to create page-level and post-level Excel files for every channel.
Developing a Python program to check for data inconsistencies in the text files.
Creating separate tables in Hive for page- and post-level data for every channel.
Writing Hive queries to append the Excel data to the respective channel's table.
Creating a job to automate the Python program to run daily using a shell script (see the sketch after this list).
Automating Hive queries to append the data on a daily basis.
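
A hedged sketch of the daily append job; the script names, paths and table layout below are illustrative assumptions, not the project's actual artifacts:

#!/bin/bash
# Sketch only: channel_to_csv.py and check_consistency.py stand in for the
# Python pre-processing programs described above.

DAY=$(date +%F)

# 1. Convert the day's raw channel extract to CSV and validate it.
python channel_to_csv.py --input /data/raw/facebook/$DAY --output /tmp/facebook_$DAY.csv
python check_consistency.py /tmp/facebook_$DAY.csv || exit 1

# 2. Push the CSV to HDFS and append it to the channel's Hive table.
hdfs dfs -put -f /tmp/facebook_$DAY.csv /data/staging/facebook/
hive -e "LOAD DATA INPATH '/data/staging/facebook/facebook_$DAY.csv'
         INTO TABLE facebook_page_metrics PARTITION (load_date='$DAY');"

# Scheduled daily, e.g. via cron:
# 0 2 * * * /opt/jobs/daily_channel_append.sh >> /var/log/channel_append.log 2>&1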
Project Description: The client wanted to gauge the performance of various campaigns on their social media pages on channels like Facebook, LinkedIn, Twitter, YouTube and Google+. The digital marketing team wanted to build a QlikView dashboard to track impression, engagement and conversion level metrics for page-level as well as post-level activities across these channels.
Automation Engineer
Accenture
Feb '13 – Oct '14 (1 year and 8 months)
Project #4
Project Name: Automated Environment Management
Client: Credit Suisse
Domain: Investment Banking
Duration: From February 2013 to October 2014
Environment & Tools: Unix/Linux, shell scripting, Python, MySQL, JIRA
Team Size: Six
Roles & Responsibilities:
Writing automation scripts using shell script/Python/SQL (see the sketch after this list).
Automating deployment/manual processes on Linux/Unix boxes.
Testing scripts on sample data and then applying them to the production environment.
Creating JIRA tickets to track the development progress of the automation.
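
A minimal sketch of the kind of deployment automation described above, assuming placeholder hosts, paths and service names (appbox01/02, /releases and envmgmt-dashboard are illustrative only):

#!/bin/bash
# Sketch only: host list, release path and service name are assumed.
set -euo pipefail

APP=envmgmt-dashboard
RELEASE=/releases/$APP-latest.tar.gz
HOSTS="appbox01 appbox02"

for host in $HOSTS; do
  echo "Deploying $APP to $host"
  # Back up the current install, ship and unpack the new release, restart the service.
  ssh "$host" "cp -r /opt/$APP /opt/${APP}.bak.$(date +%Y%m%d)"
  scp "$RELEASE" "$host:/tmp/"
  ssh "$host" "tar -xzf /tmp/$(basename $RELEASE) -C /opt/$APP && /etc/init.d/$APP restart"
done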
Project Description: The Environment Management team was responsible for automating all the manual processes in production environment management. The team was also responsible for the development and deployment of an Environment Management dashboard to display dynamic information about the applications hosted in the environment.
Software Engineer
Irely Soft Services
Sept '11 – Dec '12 (1 year and 3 months)
Project #5
Project Name: Sales Order Management
Client: SDSFT (Securities Dealing Systems)
Duration: From September 2011 to December 2012
Environment & Tools: Linux, Bash shell scripting, SQL, Python
Role: Software Developer
Team Size: Four
Roles & Responsibilities:
Appending product data onto already existing SQL tables (see the sketch after this list).
Reading text files of all the products into Excel format using Python.
Loading data into the application for different subscribers to view the latest data.
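
A hedged sketch of appending a product extract to an existing SQL table; the database host, schema, table and file layout are illustrative assumptions:

#!/bin/bash
# Sketch only: dbhost, sales_orders and products are placeholders, and the
# feed is assumed to be tab-delimited with a one-line header.
mysql -h dbhost -u loader -p sales_orders -e "
LOAD DATA LOCAL INFILE '/data/feeds/products_latest.txt'
INTO TABLE products
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n'
IGNORE 1 LINES;"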
Project Description: The SDSFT client is a market data system for all the US exchanges (CME, LME, NYSE). Its main functionality included maintaining sales orders and providing filtered data to the subscribers. The client enabled subscribers to view the latest information for all the products from different exchanges.
Education & Credentials:
B.Tech in Information Technology
Paavai Engineering College (Anna University), 8.35 CGPA
Senior Secondary, 12th (C.B.S.E.), 71.20%
DAV Public School, Patna, Bihar
Higher Secondary, 10th (C.B.S.E.), 71.10%
DAV Public School, Muzaffarpur, Bihar
Key Accomplishments:
Received the Champion award for the R&F IT Platform Service Big Data project in 2015.
Got the ACE Award in 2014 for automation projects.
CBSE under-18 winner in Table Tennis doubles in 2007.
Winner of several inter-state tournaments in Table Tennis singles as well as doubles.
Personal Summary:
Date of Birth: 27th December 1989
Permanent Address: Bhell Colony, Q. No. F-5/5, P.O. Khabra, Dist. Muzaffarpur, Bihar - 843146
Languages Known: English, Hindi, basics of Tamil
I hereby declare that all the information furnished above is true to the best of my knowledge and belief.
Shiv Shakti
(Applicant) (Date)