Dan Ellis (CTO@Kentik) presents and discusses the technology and platform behind Kentik Detect Engine.
Links to the video of the presentation: https://kentik.com/nfd14
Web3 Security: The Blockchain is Your SIEMTal Be'ery
2021’s hottest new tech term, according to TechCrunch, was “definitely Web3”. Web3, as its name suggests, is considered by many as the future of the internet: decentralized, permissionless, and based on modern blockchain technology. While Web3 might have a bright future, it’s in the middle of growing pains: A number of Web3 apps were hacked in 2021, leading to theft of cryptoassets valued at hundreds of millions of US Dollars. In this talk we will present Web3 app technology, dissect new attack surfaces, and suggest new and exciting defense mechanisms.
First, we will dive into the technical details of Web3 applications, showing how Web3 technology opens new attack surfaces by moving app functionality onto the blockchain. We will then analyze these newly-exposed attack surfaces by reviewing a few examples we’ve discovered “in the wild.”
While Web3 exposes new attack surfaces, it also provides novel detection opportunities. Specifically, the public and transparent nature of the blockchain allows security researchers to immediately explore full details of any attack and, as a result, leads to quick and thorough discoveries. This is a paradigm shift in security research, as current practices only allow a few to learn actual attack details, only some portions of which are shared publicly. This shift in transparency allowed us to independently explore the aforementioned attacks.
Furthermore, we believe we can do even better and go beyond rapid post-mortem reports. We will show how the same raw data we had previously used for a post-mortem analysis can be analyzed in real-time (or even ante factum by “taking a peek” into the blocks that have yet to be mined) to detect and even prevent attacks. This capability is enabled by the online nature of the blockchain and its inherent block time delays. In fact, we can import, with relevant modifications, many of the principles and learnings of current web defenses, including Web Application Firewall (WAF) into the realm of blockchain. By doing so, we introduce a scheme for a Web3 Application Firewall (W3AF) which can greatly improve Web3 security and blockchain-based apps.
1) Bitcoin addresses are generated from public keys through a multi-step process involving hashing, encoding, and adding checksums.
2) Specifically, the public key is hashed using SHA256 and RIPEMD160, then encoded in base58 format.
3) A version byte and checksum are added to the encoded hash to create the final Bitcoin address.
This document discusses data encryption and digital signatures. It defines encryption as disguising information so that only those with the key can access it. There are two main types of encryption - symmetric which uses the same key for encryption and decryption, and asymmetric which uses different keys. Encryption methods include transposition, which rearranges bits or characters, and substitution, which replaces bits or characters. Popular algorithms discussed are DES, RSA, and digital signatures. Digital signatures authenticate the sender, ensure the message isn't altered, and can be used to sign documents and verify certificates from certificate authorities.
This document compares SQL and NoSQL databases. It defines databases, describes different types including relational and NoSQL, and explains key differences between SQL and NoSQL in areas like scaling, modeling, and query syntax. SQL databases are better suited for projects with logical related discrete data requirements and data integrity needs, while NoSQL is more ideal for projects with unrelated, evolving data where speed and scalability are important. MongoDB is provided as an example of a NoSQL database, and the CAP theorem is introduced to explain tradeoffs in distributed systems.
Introduction to Bitcoin's Scripting LanguageJeff Flowers
An introduction to Bitcoin's scripting language. Beginning with a historical perspective all the way to seeing an actual transaction's scripts being run in a stack environment. Further resources are provided in order to learn more about this incredible technology. http://youtu.be/4qz7XehSBCc
MySQL is a popular and freely available open-source relational database management system (RDBMS). It stores data in tables and relationships between data are also stored in tables. MySQL uses SQL and works on many operating systems. It has commands for data definition (CREATE, ALTER, DROP), data manipulation (SELECT, INSERT, UPDATE, DELETE), transaction control (COMMIT, ROLLBACK), and data access control (GRANT, REVOKE). Joins allow retrieving data from multiple tables by linking rows together. Common join types are inner joins, outer joins, and self joins.
Blockchain is a distributed ledger that records transactions in blocks that are linked using cryptography. Each node maintains a copy of the blockchain. Key concepts include:
- Public key cryptography allows nodes to verify transactions without revealing identities.
- Smart contracts enable decentralized applications to execute transactions and store data on the blockchain without an intermediary.
- The Ethereum blockchain supports a Turing-complete scripting language to build decentralized applications with more complex functionality than Bitcoin. It uses ether as its internal currency and charges gas fees to compensate for usage.
Web3 Security: The Blockchain is Your SIEMTal Be'ery
2021’s hottest new tech term, according to TechCrunch, was “definitely Web3”. Web3, as its name suggests, is considered by many as the future of the internet: decentralized, permissionless, and based on modern blockchain technology. While Web3 might have a bright future, it’s in the middle of growing pains: A number of Web3 apps were hacked in 2021, leading to theft of cryptoassets valued at hundreds of millions of US Dollars. In this talk we will present Web3 app technology, dissect new attack surfaces, and suggest new and exciting defense mechanisms.
First, we will dive into the technical details of Web3 applications, showing how Web3 technology opens new attack surfaces by moving app functionality onto the blockchain. We will then analyze these newly-exposed attack surfaces by reviewing a few examples we’ve discovered “in the wild.”
While Web3 exposes new attack surfaces, it also provides novel detection opportunities. Specifically, the public and transparent nature of the blockchain allows security researchers to immediately explore full details of any attack and, as a result, leads to quick and thorough discoveries. This is a paradigm shift in security research, as current practices only allow a few to learn actual attack details, only some portions of which are shared publicly. This shift in transparency allowed us to independently explore the aforementioned attacks.
Furthermore, we believe we can do even better and go beyond rapid post-mortem reports. We will show how the same raw data we had previously used for a post-mortem analysis can be analyzed in real-time (or even ante factum by “taking a peek” into the blocks that have yet to be mined) to detect and even prevent attacks. This capability is enabled by the online nature of the blockchain and its inherent block time delays. In fact, we can import, with relevant modifications, many of the principles and learnings of current web defenses, including Web Application Firewall (WAF) into the realm of blockchain. By doing so, we introduce a scheme for a Web3 Application Firewall (W3AF) which can greatly improve Web3 security and blockchain-based apps.
1) Bitcoin addresses are generated from public keys through a multi-step process involving hashing, encoding, and adding checksums.
2) Specifically, the public key is hashed using SHA256 and RIPEMD160, then encoded in base58 format.
3) A version byte and checksum are added to the encoded hash to create the final Bitcoin address.
This document discusses data encryption and digital signatures. It defines encryption as disguising information so that only those with the key can access it. There are two main types of encryption - symmetric which uses the same key for encryption and decryption, and asymmetric which uses different keys. Encryption methods include transposition, which rearranges bits or characters, and substitution, which replaces bits or characters. Popular algorithms discussed are DES, RSA, and digital signatures. Digital signatures authenticate the sender, ensure the message isn't altered, and can be used to sign documents and verify certificates from certificate authorities.
This document compares SQL and NoSQL databases. It defines databases, describes different types including relational and NoSQL, and explains key differences between SQL and NoSQL in areas like scaling, modeling, and query syntax. SQL databases are better suited for projects with logical related discrete data requirements and data integrity needs, while NoSQL is more ideal for projects with unrelated, evolving data where speed and scalability are important. MongoDB is provided as an example of a NoSQL database, and the CAP theorem is introduced to explain tradeoffs in distributed systems.
Introduction to Bitcoin's Scripting LanguageJeff Flowers
An introduction to Bitcoin's scripting language. Beginning with a historical perspective all the way to seeing an actual transaction's scripts being run in a stack environment. Further resources are provided in order to learn more about this incredible technology. http://youtu.be/4qz7XehSBCc
MySQL is a popular and freely available open-source relational database management system (RDBMS). It stores data in tables and relationships between data are also stored in tables. MySQL uses SQL and works on many operating systems. It has commands for data definition (CREATE, ALTER, DROP), data manipulation (SELECT, INSERT, UPDATE, DELETE), transaction control (COMMIT, ROLLBACK), and data access control (GRANT, REVOKE). Joins allow retrieving data from multiple tables by linking rows together. Common join types are inner joins, outer joins, and self joins.
Blockchain is a distributed ledger that records transactions in blocks that are linked using cryptography. Each node maintains a copy of the blockchain. Key concepts include:
- Public key cryptography allows nodes to verify transactions without revealing identities.
- Smart contracts enable decentralized applications to execute transactions and store data on the blockchain without an intermediary.
- The Ethereum blockchain supports a Turing-complete scripting language to build decentralized applications with more complex functionality than Bitcoin. It uses ether as its internal currency and charges gas fees to compensate for usage.
Rahul Khengare gave a presentation on the CIS Security Benchmark to the DevOps-Pune Meetup Group. The agenda included an introduction to the CIS Benchmark, a discussion of the need for compliance, and a demonstration of automation tools. The CIS Benchmark provides consensus-based security configuration guides for technologies including cloud platforms, operating systems, containers, and SaaS products. It defines policies across categories such as identity and access management, logging, and networking. Open source tools like Prowler and Cloudneeti can be used to automate compliance checks against the CIS Benchmark.
ESP provides encryption, authentication, and integrity for IP packets. It operates on a per-packet basis (ESP header and trailer encapsulate the payload) and supports transport and tunnel modes. The ESP packet fields include the SPI, sequence number, payload, padding, pad length, and ICV. ESP packet processing at the sender involves lookup SA, encryption, authentication, and sequencing. At the receiver, it involves verification of decryption, authentication and sequencing. ESP aims to provide data origin authentication, confidentiality, and traffic flow confidentiality with anti-replay detection.
This document discusses stored procedures in MySQL and MSSQL, including their advantages, syntax, and examples. It also covers the differences between procedures and functions, and provides an example of creating a trigger to update total department salaries when employees are inserted, updated, or deleted.
This document discusses message authentication techniques including message encryption, message authentication codes (MACs), and hash functions. It describes how each technique can be used to authenticate messages and protect against various security threats. It also covers how symmetric and asymmetric encryption can provide authentication when used with MACs or digital signatures. Specific MAC and hash functions are examined like HMAC, SHA-1, and SHA-2. X.509 is introduced as a standard for digital certificates.
The right architecture is key for any IT project. This is especially the case for big data projects, where there are no standard architectures which have proven their suitability over years. This session discusses the different Big Data Architectures which have evolved over time, including traditional Big Data Architecture, Streaming Analytics architecture as well as Lambda and Kappa architecture and presents the mapping of components from both Open Source as well as the Oracle stack onto these architectures.
The right architecture is key for any IT project. This is valid in the case for big data projects as well, but on the other hand there are not yet many standard architectures which have proven their suitability over years.
This session discusses different Big Data Architectures which have evolved over time, including traditional Big Data Architecture, Event Driven architecture as well as Lambda and Kappa architecture.
Each architecture is presented in a vendor- and technology-independent way using a standard architecture blueprint. In a second step, these architecture blueprints are used to show how a given architecture can support certain use cases and which popular open source technologies can help to implement a solution based on a given architecture.
CIS benchmarks are the industry standard to secure IT systems including Public Cloud platforms. The presentation covers how the benchmarks differ for AWS , Azure and GCP clouds and various cloud native services used to achieve the compliance.
Hive Tutorial | Hive Architecture | Hive Tutorial For Beginners | Hive In Had...Simplilearn
This presentation about Hive will help you understand the history of Hive, what is Hive, Hive architecture, data flow in Hive, Hive data modeling, Hive data types, different modes in which Hive can run on, differences between Hive and RDBMS, features of Hive and a demo on HiveQL commands. Hive is a data warehouse system which is used for querying and analyzing large datasets stored in HDFS. Hive uses a query language called HiveQL which is similar to SQL. Hive issues SQL abstraction to integrate SQL queries (like HiveQL) into Java without the necessity to implement queries in the low-level Java API. Now, let us get started and understand Hadoop Hive in detail
Below topics are explained in this Hive presetntation:
1. History of Hive
2. What is Hive?
3. Architecture of Hive
4. Data flow in Hive
5. Hive data modeling
6. Hive data types
7. Different modes of Hive
8. Difference between Hive and RDBMS
9. Features of Hive
10. Demo on HiveQL
What is this Big Data Hadoop training course about?
The Big Data Hadoop and Spark developer course have been designed to impart in-depth knowledge of Big Data processing using Hadoop and Spark. The course is packed with real-life projects and case studies to be executed in the CloudLab.
What are the course objectives?
This course will enable you to:
1. Understand the different components of the Hadoop ecosystem such as Hadoop 2.7, Yarn, MapReduce, Pig, Hive, Impala, HBase, Sqoop, Flume, and Apache Spark
2. Understand Hadoop Distributed File System (HDFS) and YARN as well as their architecture, and learn how to work with them for storage and resource management
3. Understand MapReduce and its characteristics, and assimilate some advanced MapReduce concepts
4. Get an overview of Sqoop and Flume and describe how to ingest data using them
5. Create database and tables in Hive and Impala, understand HBase, and use Hive and Impala for partitioning
6. Understand different types of file formats, Avro Schema, using Arvo with Hive, and Sqoop and Schema evolution
7. Understand Flume, Flume architecture, sources, flume sinks, channels, and flume configurations
8. Understand HBase, its architecture, data storage, and working with HBase. You will also understand the difference between HBase and RDBMS
9. Gain a working knowledge of Pig and its components
10. Do functional programming in Spark
11. Understand resilient distribution datasets (RDD) in detail
12. Implement and build Spark applications
13. Gain an in-depth understanding of parallel processing in Spark and Spark RDD optimization techniques
14. Understand the common use-cases of Spark and the various interactive algorithms
15. Learn Spark SQL, creating, transforming, and querying Data frames
Learn more at https://www.simplilearn.com/big-data-and-analytics/big-data-and-hadoop-training
Understanding hd wallets design and implementationArcBlock
ArcBlock Technical Learning Series Presents Understanding HD Wallets. This talk will look at the building blocks to creating a virtual currency wallet including some of the basic design ideas, and implementation methods.
IPFS is a distribution protocol that enables the creation of completely distributed applications through content addressing. A very ambitious open source project in Go, IPFS adopts a peer-to-peer hypermedia protocol to protect against a single point of failure. This presentation aims to highlight the design and ideas of IPFS and also touches upon a real world use case.
This document provides an overview of Oracle SQL functions. It discusses single-row functions that operate on each row returned, including conversion, character, number, and date functions. Character functions covered include LOWER, UPPER, INITCAP, CONCAT, SUBSTR, LENGTH, and INSTR, which can be used for case conversion and character manipulation.
YouTube Link: https://youtu.be/zbMHLJ0dY4w
** MySQL DBA Certification Training: https://www.edureka.co/mysql-dba **
This Edureka video on 'SQL Basics for Beginners' will help you understand the basics of SQL and also sql queries which are very popular and essential.. In this SQL Tutorial for Beginners you will learn SQL from scratch with examples. Following topics have been covered in this sql tutorial.
Follow us to never miss an update in the future.
YouTube: https://www.youtube.com/user/edurekaIN
Instagram: https://www.instagram.com/edureka_learning/
Facebook: https://www.facebook.com/edurekaIN/
Twitter: https://twitter.com/edurekain
LinkedIn: https://www.linkedin.com/company/edureka
Castbox: https://castbox.fm/networks/505?country=in
The document discusses Ethereum, a decentralized platform for running smart contracts and decentralized applications. It describes how Ethereum uses blockchain technology and smart contracts to allow developers to build decentralized applications that run without downtime, fraud or third party interference. Transactions on Ethereum are recorded on a public distributed ledger called a blockchain, where network participants validate transactions to reach consensus.
Caesar Cipher , Substitution Cipher, PlayFair and Vigenere CipherMona Rajput
The document provides information on various historical cryptosystems and ciphers, beginning with a brief overview of symmetric and asymmetric key encryption. It then discusses several manual ciphers such as the Caesar cipher, simple substitution cipher, Playfair cipher, and Vigenere cipher. The Caesar cipher performs monoalphabetic substitution by shifting letters of the alphabet. The simple substitution cipher and Playfair cipher improve security by using permutation or paired letter substitution instead of just shifting. The Vigenere cipher further enhances security by applying multiple Caesar shifts using a keyword. The document also covers the one-time pad cipher and its information theoretic security if the pad is truly random and never reused.
This document provides an overview of SQL programming including:
- A brief history of SQL and how it has evolved over time.
- Key SQL fundamentals like database structures, tables, relationships, and normalization.
- How to define and modify database structures using commands like CREATE, ALTER, DROP.
- How to manipulate data using INSERT, UPDATE, DELETE, and transactions.
- How to retrieve data using SELECT statements, joins, and other techniques.
- How to aggregate data using functions like SUM, AVG, MAX, MIN, and COUNT.
- Additional topics covered include subqueries, views, and resources for further learning.
This is a description of the Diffie-Hellman-Merkle Key Exchange process, with a presentation of the essential calculations and some discussion of vulnerabilities
"Building Data Warehouse with Google Cloud Platform", Artem NikulchenkoFwdays
In this talk, we would explore available options for building Data Warehouse for data-oriented business using Google Cloud Platform. We will start by discussing why Data Warehouse can be needed, move to the differences between "traditional" and Cloud Data Warehouses, and finally discuss steps and options for building your own Data Warehouse.
SQL is a standard language for accessing and manipulating databases. It allows users to retrieve, insert, update, and delete data as well as create new databases and tables. Common SQL statements include SELECT, UPDATE, DELETE, and INSERT. SQL uses clauses, operators, and wildcards to filter records based on conditions. Some key points are that SQL is an ANSI standard but different versions exist, it allows querying and modifying data in databases, and is essential for interacting with relational database systems.
Use extensively researched Blockchain PowerPoint Presentation Slides to educate your audience about the secure online payment transactions and cryptographic techniques. Show encryption methods and concept of decentralized network that allows the easy transfer of digital values such as currency and data. Bitcoin developers can incorporate this professionally designed content-ready blockchain PowerPoint presentation templates for their work. This deck covers topics like distributed ledger, working of a distributed ledger, use cases, industrial blockchain benefits, blockchain limitations, and more. Illustrate the idea of transferring funds directly between two parties without any banks or credit card company using blockchain PPT presentation templates. Demonstrate the workings of cryptocurrencies, showcase the process and its benefits with the help of cryptocurrency PPT slides. These templates are completely customizable. You can edit the slides as per your convenience. Change color, text, icon, and font size as per your need. Download now. Engage with disbelievers through our Blockchain Powerpoint Presentation Slides. Explain the grounds for your beliefs. https://bit.ly/2W76JPY
This document summarizes a presentation about network traffic visibility and anomaly detection at scale. It discusses the problems with lack of visibility into network traffic data and tools. It introduces Kentik as a solution for traffic visibility that allows infinite granularity storage for months, real-time queries, and anomaly detection. The presentation outlines Kentik's approach of using an ingest and fusion layer to combine different data sources, a storage layer, and a query layer to provide a platform for network traffic analysis and anomaly detection at large scales.
Rahul Khengare gave a presentation on the CIS Security Benchmark to the DevOps-Pune Meetup Group. The agenda included an introduction to the CIS Benchmark, a discussion of the need for compliance, and a demonstration of automation tools. The CIS Benchmark provides consensus-based security configuration guides for technologies including cloud platforms, operating systems, containers, and SaaS products. It defines policies across categories such as identity and access management, logging, and networking. Open source tools like Prowler and Cloudneeti can be used to automate compliance checks against the CIS Benchmark.
ESP provides encryption, authentication, and integrity for IP packets. It operates on a per-packet basis (ESP header and trailer encapsulate the payload) and supports transport and tunnel modes. The ESP packet fields include the SPI, sequence number, payload, padding, pad length, and ICV. ESP packet processing at the sender involves lookup SA, encryption, authentication, and sequencing. At the receiver, it involves verification of decryption, authentication and sequencing. ESP aims to provide data origin authentication, confidentiality, and traffic flow confidentiality with anti-replay detection.
This document discusses stored procedures in MySQL and MSSQL, including their advantages, syntax, and examples. It also covers the differences between procedures and functions, and provides an example of creating a trigger to update total department salaries when employees are inserted, updated, or deleted.
This document discusses message authentication techniques including message encryption, message authentication codes (MACs), and hash functions. It describes how each technique can be used to authenticate messages and protect against various security threats. It also covers how symmetric and asymmetric encryption can provide authentication when used with MACs or digital signatures. Specific MAC and hash functions are examined like HMAC, SHA-1, and SHA-2. X.509 is introduced as a standard for digital certificates.
The right architecture is key for any IT project. This is especially the case for big data projects, where there are no standard architectures which have proven their suitability over years. This session discusses the different Big Data Architectures which have evolved over time, including traditional Big Data Architecture, Streaming Analytics architecture as well as Lambda and Kappa architecture and presents the mapping of components from both Open Source as well as the Oracle stack onto these architectures.
The right architecture is key for any IT project. This is valid in the case for big data projects as well, but on the other hand there are not yet many standard architectures which have proven their suitability over years.
This session discusses different Big Data Architectures which have evolved over time, including traditional Big Data Architecture, Event Driven architecture as well as Lambda and Kappa architecture.
Each architecture is presented in a vendor- and technology-independent way using a standard architecture blueprint. In a second step, these architecture blueprints are used to show how a given architecture can support certain use cases and which popular open source technologies can help to implement a solution based on a given architecture.
CIS benchmarks are the industry standard to secure IT systems including Public Cloud platforms. The presentation covers how the benchmarks differ for AWS , Azure and GCP clouds and various cloud native services used to achieve the compliance.
Hive Tutorial | Hive Architecture | Hive Tutorial For Beginners | Hive In Had...Simplilearn
This presentation about Hive will help you understand the history of Hive, what is Hive, Hive architecture, data flow in Hive, Hive data modeling, Hive data types, different modes in which Hive can run on, differences between Hive and RDBMS, features of Hive and a demo on HiveQL commands. Hive is a data warehouse system which is used for querying and analyzing large datasets stored in HDFS. Hive uses a query language called HiveQL which is similar to SQL. Hive issues SQL abstraction to integrate SQL queries (like HiveQL) into Java without the necessity to implement queries in the low-level Java API. Now, let us get started and understand Hadoop Hive in detail
Below topics are explained in this Hive presetntation:
1. History of Hive
2. What is Hive?
3. Architecture of Hive
4. Data flow in Hive
5. Hive data modeling
6. Hive data types
7. Different modes of Hive
8. Difference between Hive and RDBMS
9. Features of Hive
10. Demo on HiveQL
What is this Big Data Hadoop training course about?
The Big Data Hadoop and Spark developer course have been designed to impart in-depth knowledge of Big Data processing using Hadoop and Spark. The course is packed with real-life projects and case studies to be executed in the CloudLab.
What are the course objectives?
This course will enable you to:
1. Understand the different components of the Hadoop ecosystem such as Hadoop 2.7, Yarn, MapReduce, Pig, Hive, Impala, HBase, Sqoop, Flume, and Apache Spark
2. Understand Hadoop Distributed File System (HDFS) and YARN as well as their architecture, and learn how to work with them for storage and resource management
3. Understand MapReduce and its characteristics, and assimilate some advanced MapReduce concepts
4. Get an overview of Sqoop and Flume and describe how to ingest data using them
5. Create database and tables in Hive and Impala, understand HBase, and use Hive and Impala for partitioning
6. Understand different types of file formats, Avro Schema, using Arvo with Hive, and Sqoop and Schema evolution
7. Understand Flume, Flume architecture, sources, flume sinks, channels, and flume configurations
8. Understand HBase, its architecture, data storage, and working with HBase. You will also understand the difference between HBase and RDBMS
9. Gain a working knowledge of Pig and its components
10. Do functional programming in Spark
11. Understand resilient distribution datasets (RDD) in detail
This document summarizes a presentation about network traffic visibility and anomaly detection at scale. It discusses the problems with lack of visibility into network traffic data and tools. It introduces Kentik as a solution for traffic visibility that allows infinite granularity storage for months, real-time queries, and anomaly detection. The presentation outlines Kentik's approach of using an ingest and fusion layer to combine different data sources, a storage layer, and a query layer to provide a platform for network traffic analysis and anomaly detection at large scales.
2. KDE Quick Stats
(Kentik Detect Engine)
NetFlow in the Cloud
• 125+ billion flows/day stored
• 1,000,000+ flows per second (FPS)
• 50 "large" queries/s, thousands of sub-queries/s
• 75+ TB flow data stored/day (25+ TB compressed)
SNMP, BGP, and network performance data too!
3. KDE High-Level
• KDE is a hybrid system:
○ Fusing / ingest layer
○ Distributed column-store DB / query engine
○ Real-time stream processing for anomaly detection
• We evaluated various existing engines: ES, Hadoop, Cassandra, Storm, Spark, SILK, Druid, Kafka....
• None combined the performance, multi-tenancy, and network savvy we needed, so we wrote our own...
4. KDE Architecture
[Diagram: data sources → Ingest & Fusion layer → Storage layer (flow specific) → Query layer → query engine and UI → query interfaces (SQL "SELECT flow FROM router WHERE …", WWW, REST) → clients]
Each layer has separate and different scaling characteristics.
6. KDE Architecture
[Diagram: client processes send NetFlow (UDP) through the BGP VIP to relays, then to proxies; per-device client processes emit kFlow (HTTP/HTTPS) into the KDE ingest layer (enKryptor), which feeds the storage and streaming layers]
7. VIP + Relay
[Diagram: same ingest pipeline as slide 6, highlighting the BGP VIP and relay stage]
• One IP bound to multiple servers
• Sharded by source IP
• Validate sender as a Kentik customer
• Pass flow on (raw UDP socket) to the correct proxy
• Relay handles load balancing (Kentik specific, UDP+TCP)
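The shard-by-source-IP step above can be sketched as follows. This is an illustrative reconstruction, not Kentik's code: the proxy list, the allow-list of customer exporters, and the hash choice are all assumptions; the point is that every relay behind the shared VIP applies the same deterministic mapping, so a given exporter always reaches the same proxy.

```python
import hashlib

# Hypothetical relay tables (names and values are illustrative only).
PROXIES = ["proxy-0:9995", "proxy-1:9995", "proxy-2:9995"]
ALLOWED_SOURCES = {"192.0.2.10", "198.51.100.7"}  # known customer exporters

def pick_proxy(src_ip):
    """Validate the sender, then shard deterministically by source IP."""
    if src_ip not in ALLOWED_SOURCES:
        return None  # drop flow from an unknown sender
    digest = hashlib.sha1(src_ip.encode()).digest()
    return PROXIES[digest[0] % len(PROXIES)]
```

Because the hash is a pure function of the source IP, relays need no shared state to agree on placement.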
8. Proxy
[Diagram: same ingest pipeline, highlighting the proxy stage]
• Inspect flow & determine type: V5, V9, IPFIX, sFlow, kFlow
• Need to resample? (configured sample rate)
• Launch a client process for each device
• Poll for device changes
• Monitor health; relaunch on client crash
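The "inspect flow & determine type" step works because each protocol leads with a version field: NetFlow v5/v9 and IPFIX carry a 16-bit version (5, 9, or 10), while sFlow datagrams start with a 32-bit version, so their first 16 bits are zero. A minimal sketch (a real proxy would also sanity-check lengths and record counts):

```python
import struct

def detect_flow_type(payload):
    """Classify a raw UDP flow datagram by its leading version field."""
    if len(payload) < 4:
        return "unknown"
    (v16,) = struct.unpack_from("!H", payload, 0)  # network byte order
    if v16 == 5:
        return "netflow-v5"
    if v16 == 9:
        return "netflow-v9"
    if v16 == 10:
        return "ipfix"
    if v16 == 0:
        # sFlow begins with a 32-bit version (e.g. 0x00000005 for v5).
        (v32,) = struct.unpack_from("!I", payload, 0)
        if v32 in (2, 4, 5):
            return "sflow-v%d" % v32
    return "unknown"
```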
9. Client (where the magic happens)
[Diagram: same ingest pipeline, highlighting the per-device client processes; NetFlow, sFlow, and IPFIX go in, kFlow comes out]
• One per device configured to send flow
• * goes in, kFlow comes out
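"* goes in, kFlow comes out" amounts to normalizing heterogeneous flow records into one unified shape. A sketch of that idea, where the unified field names are assumptions (the real kFlow schema is not shown in the deck); only the NetFlow v5 field names (`srcaddr`, `dstaddr`, `dOctets`, `dPkts`) follow the published v5 record layout:

```python
def to_kflow(record, proto):
    """Normalize one decoded flow record into a unified kFlow-like dict."""
    if proto == "netflow-v5":
        return {"src_ip": record["srcaddr"], "dst_ip": record["dstaddr"],
                "bytes": record["dOctets"], "packets": record["dPkts"]}
    if proto == "sflow":
        # Scale a sampled frame up by the sampling rate to estimate totals.
        return {"src_ip": record["src"], "dst_ip": record["dst"],
                "bytes": record["frame_len"] * record["sample_rate"],
                "packets": record["sample_rate"]}
    raise ValueError("unsupported protocol: %s" % proto)
```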
12. Step 2: Enrichment
• BGP - route data for xxx
• GeoIP - where does my traffic start and end
• SNMP - interface names and descriptions
• Tagging - business classification: cost centers, user info, peering info
• App-specific data - URL/DNS requests, MySQL queries
• Performance data (NPM) - retransmits, network latency, application latency
• Coming soon:
• Timestamped event data (syslog)
• Threat feeds
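Mechanically, enrichment is a set of lookups joined onto each flow record. A minimal sketch, where the tables `GEO`, `ASN`, and `IFACE` stand in for the real GeoIP, BGP, and SNMP sources named above, and all keys and values are invented for illustration:

```python
# Stand-in lookup tables (illustrative data only).
GEO = {"8.8.8.8": "US"}
ASN = {"8.8.8.8": 15169}
IFACE = {17: "xe-0/0/1 (transit)"}

def enrich(flow):
    """Attach geo, ASN, and interface-name context to one flow record."""
    dst = flow["dst_ip"]
    flow["dst_geo"] = GEO.get(dst, "??")
    flow["dst_asn"] = ASN.get(dst, 0)
    flow["in_iface_name"] = IFACE.get(flow.get("in_iface"), "unknown")
    return flow
```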
13. Data Fusion in Client
[Diagram: router flow (NetFlow v5, NetFlow v9, IPFIX, sFlow) and PCAP from a PCAP agent enter decoder modules; in-memory tables (BGP RIB from the BGP daemon, custom tags, SNMP poller data, Geo ←→ IP and ASN ←→ IP mappings from the enrichment DB) are fused with each flow; a single fused row is sent to the flow-friendly datastore]
14. Step 3: Resampling & Unification
• Long term (>1 month)
• What a process (device) said over an hour
• Two tricks:
• Flow unification
• Resampling
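The "what a device said over an hour" idea can be sketched as an hourly rollup: raw flows collapse into one row per (device, hour, src, dst) with summed counters. This is an assumption about the aggregation key, the deck only names the hour granularity:

```python
from collections import defaultdict

def hourly_rollup(flows):
    """Collapse raw flows into hourly per-device, per-conversation totals."""
    buckets = defaultdict(lambda: {"bytes": 0, "packets": 0})
    for f in flows:
        hour = f["ts"] - f["ts"] % 3600  # floor timestamp to the hour
        key = (f["device"], hour, f["src_ip"], f["dst_ip"])
        buckets[key]["bytes"] += f["bytes"]
        buckets[key]["packets"] += f["packets"]
    return buckets
```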
16. Storage Layer
• Fused kFlow as input... serialized with Cap'n Proto (similar to Protocol Buffers)
• Shard data into small chunks
• HTTP to N distributed storage nodes
• Metadata supervisor DB handles shard locations
• Row-oriented converted to column-oriented on disk
• Compressed using ZFS
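The shard-and-track idea above can be sketched as a placement map: each chunk is assigned to a storage node, and a metadata "supervisor" records where it lives so the query layer can find it later. The class and placement rule here are illustrative assumptions, not Kentik's scheme:

```python
class ShardMap:
    """Toy metadata supervisor: places shards and remembers their locations."""

    def __init__(self, nodes):
        self.nodes = nodes
        self.locations = {}  # shard_id -> storage node

    def place(self, shard_id):
        """Assign a shard to a node (simple modulo placement for the sketch)."""
        node = self.nodes[shard_id % len(self.nodes)]
        self.locations[shard_id] = node
        return node

    def locate(self, shard_id):
        """Answer the query layer's 'where does this shard live?'"""
        return self.locations[shard_id]
```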
17. Multi-Tenancy DB
Needed multi-tenancy for a large-scale SaaS product; could not find other DBs at scale that offered it. We succeeded by building in:
● Fairness - queries are chopped into small chunks; users are rate-limited and prioritized
● Security - data is isolated between "users" down to the thread level
● Multi-user caching with fairness - built a cache that cannot be monopolized by any one user
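The fairness point above, queries chopped into chunks so no tenant can monopolize the engine, can be sketched as round-robin scheduling over per-user chunk queues. This is an illustrative scheme, not the actual KDE scheduler:

```python
from collections import deque

def fair_schedule(per_user_chunks):
    """Interleave users' query chunks round-robin so a user with many
    chunks cannot starve a user with few."""
    queues = {u: deque(chunks) for u, chunks in per_user_chunks.items()}
    order = []
    while any(queues.values()):
        for user, q in queues.items():
            if q:
                order.append((user, q.popleft()))
    return order
```

A heavy user ("a" below, with three chunks) still yields to a light user after every chunk.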
18. Query Layer
[Diagram: same layered architecture as slide 4 (data sources → Ingest & Fusion layer → Storage layer (flow specific) → Query layer → query engine and UI → query interfaces: SQL, WWW, REST → clients)]
● SQL interface - PostgreSQL foreign data wrapper (FDW)
● UI/UX - featuring advanced data-viz
● REST API-based interface - build your own
24. Detecting Anomalies
• DDoS is a simple use case of anomaly detection
• V1 anomaly detection relied on KDE queries - abusive of the query layer
• V2 needed stream processing and in-RAM baseline storage
• Typically avoided streaming DBs due to aggregation
• Streaming DBs for anomaly detection + our long-term flow storage is a powerful combination
• Evaluated Spark, Storm, Samza, PipelineDB - all fell short
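The V2 approach, an in-RAM baseline instead of repeated KDE queries, can be sketched with a per-key exponentially weighted moving average: each observation updates the baseline in memory, and a value far above it is flagged. The EWMA choice and the 3x threshold are illustrative assumptions, not Kentik's detection logic:

```python
class Baseline:
    """Toy in-RAM streaming baseline for anomaly detection."""

    def __init__(self, alpha=0.1, threshold=3.0):
        self.alpha = alpha          # EWMA smoothing factor
        self.threshold = threshold  # flag values above threshold * baseline
        self.avg = {}               # key -> running average

    def observe(self, key, value):
        """Record one measurement; return True if it looks anomalous."""
        avg = self.avg.get(key)
        if avg is None:
            self.avg[key] = float(value)  # first sample seeds the baseline
            return False
        anomalous = value > self.threshold * avg
        self.avg[key] = (1 - self.alpha) * avg + self.alpha * value
        return anomalous
```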
25. Streaming Layer
[Diagram: same ingest pipeline as slide 6, highlighting the streaming layer fed by the enKryptor alongside the storage layer]