Lakshmi Narayana H is seeking an opportunity to utilize his 4.6 years of experience developing applications using SQL, PL/SQL, Oracle APEX, and Informatica ETL. He has worked on projects across domains including telecom, pharma, and media. His responsibilities have included implementing business logic, loading data warehouses, and developing complex queries for reporting. He holds a B.Tech in ECE from Jawaharlal Nehru Technological University Kakinada.
LAKSHMI NARAYANA H
E-mail: h_lakshminarayana@yahoo.in
Contact: +91-9866338660
Objective
Seeking an opportunity to utilize my skills and abilities in a field that offers professional growth, and to gain professional excellence while being resourceful, innovative and flexible.
Professional Summary
4.6 years of development experience in SQL, PL/SQL, Oracle APEX and Informatica ETL.
Hands-on experience across domains such as Telecom, Pharma, Media and Information.
Experience in data migration activities such as Oracle Data Pump (an illustrative sketch follows this list).
Good data warehousing knowledge.
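For illustration only, a schema-level Oracle Data Pump export can be driven from PL/SQL through the DBMS_DATAPUMP API. This is a minimal sketch, not taken from the projects below; the schema name HR and the directory object DP_DIR are hypothetical placeholders.

-- Minimal sketch of a schema-level Data Pump export via the PL/SQL API.
-- The schema (HR) and directory object (DP_DIR) are hypothetical.
DECLARE
  l_handle NUMBER;
BEGIN
  -- Open a schema-mode export job.
  l_handle := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA');

  -- Dump file and log file are written via the DP_DIR directory object.
  DBMS_DATAPUMP.ADD_FILE(l_handle, 'hr_export.dmp', 'DP_DIR');
  DBMS_DATAPUMP.ADD_FILE(l_handle, 'hr_export.log', 'DP_DIR',
                         filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);

  -- Restrict the export to the HR schema.
  DBMS_DATAPUMP.METADATA_FILTER(l_handle, 'SCHEMA_EXPR', q'[IN ('HR')]');

  DBMS_DATAPUMP.START_JOB(l_handle);
  DBMS_DATAPUMP.DETACH(l_handle);
END;
/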
Technical Skills/Competencies
Programming Languages: SQL, PL/SQL
Tools: Oracle Application Express (APEX), TOAD, SQL Developer, SQL Management Studio, Informatica PowerCenter
Databases: Oracle 11g, 12c
Operating Systems: Windows XP/2007/2008, Unix
Employment Experience
Working as an Associate Consultant at Virtusa Consulting Services Pvt Ltd since 16th Jan 2012.
Projects handled
Project: Power Information Networks (JDP)
J.D. Power and Associates, Inc., a marketing information services company, conducts surveys of product and service quality, customer satisfaction, and buyer behaviour. The company's services include industry-wide syndicated studies and client-commissioned studies.
Data from third-party vendors is gathered at regular intervals; it is processed and maintained in a data warehouse, on top of which data marts are created. Reporting from the data marts is done in MicroStrategy.
Role: Database/ETL Developer, Oracle APEX
Duration: Mar 2015 to date
Technologies: SQL, PL/SQL, Informatica ETL
Responsibilities:
Implemented business logic using PL/SQL blocks, dealing with large volumes of data.
Tuned long-running DB processes.
Loaded fact and dimension tables using Informatica and PL/SQL (an illustrative sketch follows this list).
Developed complex SQL queries for report generation.
Developed SQL scripts for clean-up jobs, creating complex views and synonyms, and granting privileges on newly created objects to other users.
Developed ETL processes using Informatica PowerCenter 9.5.
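For flavor, the PL/SQL side of such a batch load often uses BULK COLLECT with FORALL to process large volumes in bounded chunks. This is a minimal sketch under assumed names; the staging and fact tables (stg_sales, fact_sales) and their columns are hypothetical, not the project's actual schema.

-- Minimal sketch of a set-based fact load in PL/SQL.
-- Assumes FACT_SALES has exactly the columns selected from STG_SALES.
DECLARE
  CURSOR c_stg IS
    SELECT sale_id, customer_key, amount
      FROM stg_sales;                 -- hypothetical staging table
  TYPE t_rows IS TABLE OF c_stg%ROWTYPE;
  l_rows t_rows;
BEGIN
  OPEN c_stg;
  LOOP
    -- Fetch in batches to keep memory bounded on large volumes.
    FETCH c_stg BULK COLLECT INTO l_rows LIMIT 10000;
    EXIT WHEN l_rows.COUNT = 0;

    -- Set-based, record-wise insert into the fact table.
    FORALL i IN 1 .. l_rows.COUNT
      INSERT INTO fact_sales VALUES l_rows(i);

    COMMIT;                           -- commit per batch
  END LOOP;
  CLOSE c_stg;
END;
/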
Project: GE Transportation Unscheduled Maintenance Approval
Developed an automated application that tracks all customer work orders. Invoices are generated for these work orders at the customer level, based on the classification of SITE, FLET, fiscal week and fiscal month. A generated invoice goes through a workflow for prior approval from the customer; approved orders are invoiced to the customer. Pending approvals and corrections are followed up through email alerts.
Role: Database developer.
Duration: Sep 2014 - Mar 2015
Technologies: PL/SQL
Responsibilities:
Developed procedures and functions for data retrieval from the Oracle data warehouse, the SQL data warehouse and the OLTP database (an illustrative sketch follows this list).
Designed the application database as per the FSD.
Developed complex SQL queries for report generation.
Developed SQL scripts for clean-up jobs.
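As an illustration of such retrieval procedures, the sketch below returns a result set to the caller through a SYS_REFCURSOR OUT parameter. The work_orders table, its columns and the status value are hypothetical assumptions, not the project's actual schema.

-- Minimal sketch of a retrieval procedure returning a cursor.
-- Table and column names (WORK_ORDERS) are hypothetical.
CREATE OR REPLACE PROCEDURE get_pending_orders (
  p_customer_id IN  NUMBER,
  p_orders      OUT SYS_REFCURSOR
) AS
BEGIN
  -- Hand the caller an open cursor over the pending work orders.
  OPEN p_orders FOR
    SELECT order_id, site, fiscal_week, amount
      FROM work_orders
     WHERE customer_id = p_customer_id
       AND approval_status = 'PENDING';
END get_pending_orders;
/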
Project: TMW Wholesale NEO T2R
BT Neo is a workflow management system for BT 21CN products. It is an order-raising (L2C) and fault-resolution (T2R) system. It is also an OSS (Operational Support System) model system that supports the operations team in BT.
Role: Database developer.
Duration: Jan 2013 - Aug 2014
Technologies: PL/SQL, Oracle APEX
Responsibilities:
Responsible for presenting daily/weekly raised and rectified fault data for different components such as Openreach, Wholesale, IRAMS and AUTOFIX on the dashboard (an illustrative sketch follows this list).
Gathered requirements from the client; responsible for creating and modifying reports as per the requirements.
Responsible for maintaining the application database.
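For illustration, a dashboard like this is typically fed by summary views over the fault data. The sketch below is hypothetical: the faults table, its columns and the status value are assumptions, not the project's actual schema.

-- Minimal sketch of a view feeding a faults dashboard.
-- The FAULTS table and its columns are hypothetical.
CREATE OR REPLACE VIEW v_daily_fault_summary AS
SELECT TRUNC(reported_date) AS fault_day,
       component,           -- e.g. Openreach, Wholesale, IRAMS, AUTOFIX
       COUNT(*)             AS faults_raised,
       SUM(CASE WHEN status = 'RECTIFIED' THEN 1 ELSE 0 END) AS faults_rectified
  FROM faults
 GROUP BY TRUNC(reported_date), component;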
Project: Scientific Electronic Library
Developed a search engine that searches online data related to the various scientific research outputs of registered research members and partners.
Role: Database developer.
Duration: Apr 2012 - Dec 2012
Technologies: PL/SQL, Unix
Responsibilities:
Responsible for creating or modifying procedures and functions, depending on the needs of the business.
Worked closely with the onsite team (coordinator, Business Analyst, Solution Architect, Data Modeler, QA, etc.) to ensure that end-to-end designs meet the business and data requirements.
Responsible for retrieving data as per the business logic by writing SQL queries and creating views and indexes (an illustrative sketch follows this list).
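A minimal, hypothetical sketch of the kind of view and index such work involves; the publications and members tables and their columns are invented for illustration only.

-- Minimal sketch: a view restricting data to registered members, plus an
-- index to speed up title lookups. All names here are hypothetical.
CREATE OR REPLACE VIEW v_member_publications AS
SELECT p.pub_id, p.title, p.published_date, m.member_name
  FROM publications p
  JOIN members      m ON m.member_id = p.member_id
 WHERE m.status = 'REGISTERED';

-- Function-based index supporting case-insensitive title searches;
-- queries must filter on UPPER(title) for the index to be used.
CREATE INDEX idx_publications_title ON publications (UPPER(title));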
Educational Qualifications
B.Tech in E.C.E. with 69.3% from Jawaharlal Nehru Technological University, Kakinada.
Diploma in E.C.E. with 76.4% from A.A.N.M & V.V.R.S.R Polytechnic, Gudlavalleru.
S.S.C. with 81% from Gowtham High School, Mudinepally.
Personal Details
Name: Lakshmi Narayana Hanumanthu
Location: Hyderabad, Telangana, India
Date of Birth: 16th October 1989
Sex: Male
Nationality: Indian
Marital Status: Single
Languages Known: English, Hindi, Telugu