Apache Pig: introduction, description, installation, Pig Latin commands, usage, examples, and usefulness are demonstrated in this presentation.
Tushar B. Kute
Researcher,
http://tusharkute.com
1. PIG: A Big Data Processor
Tushar B. Kute,
http://tusharkute.com
2. What is Pig?
• Apache Pig is an abstraction over MapReduce. It is a tool/platform used to analyze large data sets by representing them as data flows.
• Pig is generally used with Hadoop; we can perform all
the data manipulation operations in Hadoop using
Apache Pig.
• To write data analysis programs, Pig provides a high-level language known as Pig Latin.
• This language provides various operators using which
programmers can develop their own functions for
reading, writing, and processing data.
3. Apache Pig
• To analyze data using Apache Pig, programmers
need to write scripts using Pig Latin language.
• All these scripts are internally converted to Map
and Reduce tasks.
• Apache Pig has a component known as Pig
Engine that accepts the Pig Latin scripts as
input and converts those scripts into
MapReduce jobs.
4. Why do we need Apache Pig?
• Using Pig Latin, programmers can perform MapReduce tasks
easily without having to type complex codes in Java.
• Apache Pig uses multi-query approach, thereby reducing the
length of codes. For example, an operation that would require you
to type 200 lines of code (LoC) in Java can be easily done by typing
as few as 10 LoC in Apache Pig. Ultimately, Apache Pig
reduces the development time by almost 16 times.
• Pig Latin is a SQL-like language and it is easy to learn Apache Pig
when you are familiar with SQL.
• Apache Pig provides many built-in operators to support data
operations like joins, filters, ordering, etc. In addition, it also
provides nested data types like tuples, bags, and maps that are
missing from MapReduce.
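For instance, a filter-plus-join pipeline that would take a substantial Java MapReduce program is only a few Pig Latin statements. A minimal sketch, assuming hypothetical comma-separated files users.csv and orders.csv:
users = LOAD 'users.csv' USING PigStorage(',') AS (id:int, name:chararray);
orders = LOAD 'orders.csv' USING PigStorage(',') AS (user_id:int, amount:double);
big = FILTER orders BY amount > 100.0;     -- keep only the large orders
joined = JOIN users BY id, big BY user_id; -- join the large orders back to users
DUMP joined;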
5. Features of Pig
• Rich set of operators: It provides many operators to perform
operations like join, sort, filter, etc.
• Ease of programming: Pig Latin is similar to SQL and it is easy to write
a Pig script if you are good at SQL.
• Optimization opportunities: The tasks in Apache Pig optimize their
execution automatically, so the programmers need to focus only on
semantics of the language.
• Extensibility: Using the existing operators, users can develop their
own functions to read, process, and write data.
• UDFs: Pig provides the facility to create User-defined Functions in
other programming languages such as Java and invoke or embed them
in Pig Scripts.
• Handles all kinds of data: Apache Pig analyzes all kinds of data, both
structured as well as unstructured. It stores the results in HDFS.
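As a quick illustration of these features: UPPER is one of Pig's built-in functions, and REGISTER is how a jar of user-defined functions would be attached (myudfs.jar is a hypothetical name; movies is the relation loaded later in this deck):
grunt> REGISTER myudfs.jar;                               -- hypothetical jar holding custom Java UDFs
grunt> upper_names = FOREACH movies GENERATE UPPER(name); -- UPPER is a Pig built-in
grunt> DUMP upper_names;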
9. Applications of Apache Pig
• To process huge data sources such as web logs.
• To perform data processing for search
platforms.
• To process time-sensitive data loads.
10. Apache Pig – History
• In 2006, Apache Pig was developed as a
research project at Yahoo, especially to create
and execute MapReduce jobs on every dataset.
• In 2007, Apache Pig was open sourced via
Apache incubator.
• In 2008, the first release of Apache Pig came
out. In 2010, Apache Pig graduated as an
Apache top-level project.
12. Apache Pig – Components
• Parser: Initially the Pig Scripts are handled by the Parser. It
checks the syntax of the script, does type checking, and other
miscellaneous checks. The output of the parser will be a DAG
(directed acyclic graph), which represents the Pig Latin
statements and logical operators.
• Optimizer: The logical plan (DAG) is passed to the logical optimizer, which carries out logical optimizations such as projection pushdown.
• Compiler: The compiler compiles the optimized logical plan
into a series of MapReduce jobs.
• Execution engine: Finally the MapReduce jobs are submitted
to Hadoop in a sorted order. Finally, these MapReduce jobs are
executed on Hadoop producing the desired results.
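These stages can be inspected directly: the EXPLAIN operator prints the logical, physical, and MapReduce plans that Pig generates for a relation (assuming a relation named movies has been defined):
grunt> EXPLAIN movies; -- shows the logical, physical, and MapReduce execution plans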
14. Apache Pig – Elements
• Atom
– Any single value in Pig Latin, irrespective of its data type, is known as an Atom.
– It is stored as string and can be used as string
and number. int, long, float, double, chararray,
and bytearray are the atomic values of Pig.
– A piece of data or a simple atomic value is known
as a field.
– Example: ‘raja’ or ‘30’
15. Apache Pig – Elements
• Tuple
– A record that is formed by an ordered set of
fields is known as a tuple, the fields can be of any
type. A tuple is similar to a row in a table of
RDBMS.
– Example: (Raja, 30)
16. Apache Pig – Elements
• Bag
– A bag is an unordered set of tuples. In other words, a
collection of tuples (non-unique) is known as a bag. Each
tuple can have any number of fields (flexible schema). A
bag is represented by ‘{}’. It is similar to a table in RDBMS,
but unlike a table in RDBMS, it is not necessary that every
tuple contain the same number of fields or that the fields
in the same position (column) have the same type.
– Example: {(Raja, 30), (Mohammad, 45)}
– A bag can be a field in a relation; in that context, it is known as an inner bag.
– Example: {Raja, 30, {9848022338, raja@gmail.com}}
17. Apache Pig – Elements
• Relation
– A relation is a bag of tuples. The relations in Pig
Latin are unordered (there is no guarantee that
tuples are processed in any particular order).
• Map
– A map (or data map) is a set of key-value pairs.
The key needs to be of type chararray and should
be unique. The value might be of any type. It is
represented by ‘[]’
– Example: [name#Raja, age#30]
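All of these types can appear together in a schema. A small sketch with hypothetical file and field names, loading a relation whose fields include a tuple, a bag, and a map:
people = LOAD 'people.txt' AS (
    name:chararray,
    address:tuple(city:chararray, pin:int), -- a tuple field
    phones:bag{t:(phone:chararray)},        -- a bag of tuples
    props:map[]                             -- a map with chararray keys
);
DESCRIBE people;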
19. Download
• Download the tar.gz file of Apache Pig from
here:
http://mirror.fibergrid.in/apache/pig/pig-0.15.0/pig-0.15.0.tar.gz
20. Extract and copy
• Extract this file using the right-click -> 'Extract here' option or the tar -xzvf command.
• Rename the created folder 'pig-0.15.0' to 'pig'
• Now, move this folder to /usr/lib using the following command:
$ sudo mv pig/ /usr/lib
21. Edit the bashrc file
• Open the bashrc file:
sudo gedit ~/.bashrc
• Go to the end of the file and add the following lines.
export PIG_HOME=/usr/lib/pig
export PATH=$PATH:$PIG_HOME/bin
• Type the following command to bring it into effect:
source ~/.bashrc
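If the PATH was updated correctly, the installation can be verified before moving on (it should report version 0.15.0 for this setup):
pig -version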
22. Start the Pig
• Start the pig in local mode:
pig -x local
• Start the pig in mapreduce mode (needs the Hadoop daemons running):
pig -x mapreduce
26. Load data
• $ pig -x local
• grunt> movies = LOAD 'movies_data.csv' USING PigStorage(',') AS (id, name, year, rating, duration);
• grunt> dump movies;
It displays the contents of the movies relation.
27. Filter data
• grunt> movies_greater_than_35 = FILTER movies BY (float)rating > 3.5;
• grunt> dump movies_greater_than_35;
28. Store the results data
• grunt> store movies_greater_than_35 into 'my_movies';
• It stores the result in local file system directory
named 'my_movies'.
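By default, STORE writes tab-delimited part files; a storage function can be passed to control the delimiter, for example:
grunt> STORE movies_greater_than_35 INTO 'my_movies_csv' USING PigStorage(',');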
29. Display the result
• Now display the result from local file system.
cat my_movies/part-m-00000
30. Load command
• The load command specified only the column
names. We can modify the statement as follows
to include the data type of the columns:
• grunt> movies = LOAD 'movies_data.csv' USING PigStorage(',') AS (id:int, name:chararray, year:int, rating:double, duration:int);
31. Check the filters
• List the movies that were released between 1990 and 1995
grunt> movies_between_90_95 = FILTER movies by year > 1990 and year < 1995;
• List the movies that start with the alphabet D
grunt> movies_starting_with_D = FILTER movies by name matches 'D.*';
• List the movies that have a duration greater than 2 hours
grunt> movies_duration_2_hrs = FILTER movies by duration > 7200;
33. Describe
• The schema of a relation/alias can be viewed using the DESCRIBE command:
grunt> DESCRIBE movies;
movies: {id: int, name: chararray, year: int, rating: double, duration: int}
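A related aid is the ILLUSTRATE operator, which runs a small sample of the data through the execution plan and shows how each statement transforms it:
grunt> ILLUSTRATE movies;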
34. Foreach
• FOREACH gives a simple way to apply
transformations based on columns. Let’s understand
this with an example.
• List the movie names and their duration in minutes
grunt> movie_duration = FOREACH movies GENERATE name, (double)(duration/60);
• The above statement generates a new alias that has the list of movies and their duration in minutes.
• You can check the results using the DUMP command.
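One caveat: since duration is an int, duration/60 is integer division, so the cast above happens after the fractional part is already lost. If fractional minutes are wanted, cast before dividing:
grunt> movie_duration = FOREACH movies GENERATE name, (double)duration/60.0; -- cast first, then divide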
36. Group
• The GROUP keyword is used to group fields in a
relation.
• List the years and the number of movies released
each year.
grunt> grouped_by_year = group movies by year;
grunt> count_by_year = FOREACH grouped_by_year GENERATE group, COUNT(movies);
38. Order by
• Let us question the data to illustrate the ORDER BY
operation.
• List all the movies in the ascending order of year.
grunt> asc_movies_by_year = ORDER movies BY year ASC;
grunt> DUMP asc_movies_by_year;
• List all the movies in the descending order of year.
grunt> desc_movies_by_year = ORDER movies BY year DESC;
grunt> DUMP desc_movies_by_year;
40. Limit
• Use the LIMIT keyword to get only a limited number of results from a relation.
grunt> top_5_movies = LIMIT movies 5;
grunt> DUMP top_5_movies;
41. Pig: Modes of Execution
• Pig programs can be run in three ways, all of which work in both local and MapReduce mode. They are
– Script Mode
– Grunt Mode
– Embedded Mode
42. Script mode
• Script Mode or Batch Mode: In script mode, pig runs
the commands specified in a script file. The following
example shows how to run a pig programs from a
script file:
$ vim scriptfile.pig
A = LOAD 'script_file';
DUMP A;
$ pig -x local scriptfile.pig
43. Grunt mode
• Grunt Mode or Interactive Mode: The grunt mode can also be called interactive mode. Grunt is Pig's interactive shell.
It is started when no file is specified for pig to run.
$ pig -x local
grunt> A = LOAD 'grunt_file';
grunt> DUMP A;
• You can also run pig scripts from grunt using run and exec
commands.
grunt> run scriptfile.pig
grunt> exec scriptfile.pig
44. Embedded mode
• You can embed Pig programs in Java, Python, and Ruby, and run them from those languages.
45. Example: Wordcount program
• Q) How to find the number of occurrences of the
words in a file using the pig script?
• You can find the famous word count example written as a MapReduce program on the Apache website. Here we will write a simple Pig script for the word count problem.
• The Pig script below finds the number of times each word is repeated in a file:
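A minimal sketch of the standard Pig word-count idiom (file and alias names are illustrative; TOKENIZE and FLATTEN are Pig built-ins):
-- load each line of the input file as one chararray field
lines = LOAD 'input.txt' AS (line:chararray);
-- split every line into words and flatten the resulting bag of tokens
words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;
-- group identical words together and count each group
grpd = GROUP words BY word;
counts = FOREACH grpd GENERATE group AS word, COUNT(words) AS cnt;
DUMP counts;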
49. References
• “Programming Pig” by Alan Gates, O'Reilly
Publishers.
• “Pig Design Patterns” by Pradeep Pasupuleti,
PACKT Publishing
• Tutorials Point
• http://github.com/rohitdens
• http://pig.apache.org
50. tushar@tusharkute.com
Thank you
This presentation was created using LibreOffice Impress 4.2.8.2 and can be used freely as per the GNU General Public License.
Blogs
http://digitallocha.blogspot.in
http://kyamputar.blogspot.in
Web Resources
http://tusharkute.com