August 2016 HUG: Better together: Fast Data with Apache Spark™ and Apache Ignite – Yahoo Developer Network
Spark and Ignite are two of the most popular open source projects in the area of high-performance Big Data and Fast Data. But did you know that one of the best ways to boost performance for your next generation real-time applications is to use them together? In this session, Dmitriy Setrakyan, Apache Ignite Project Management Committee Chairman and co-founder and CPO at GridGain, will explain in detail how IgniteRDD — an implementation of native Spark RDD and DataFrame APIs — shares the state of the RDD across other Spark jobs, applications and workers. Dmitriy will also demonstrate how IgniteRDD, with its advanced in-memory indexing capabilities, allows execution of SQL queries many times faster than native Spark RDDs or DataFrames. Don't miss this opportunity to learn from one of the experts how to use Spark and Ignite better together in your projects.
Speakers:
Dmitriy Setrakyan is a founder and CPO at GridGain Systems. Dmitriy has been working with distributed architectures for over 15 years and has expertise in the development of various middleware platforms, financial trading systems, CRM applications and similar systems. Prior to GridGain, Dmitriy worked at eBay, where he was responsible for the architecture of an ad-serving system processing several billion hits a day. Currently Dmitriy also acts as PMC chair of the Apache Ignite project.
Organizations need to perform increasingly complex analysis on data — streaming analytics, ad-hoc querying, and predictive analytics — in order to get better customer insights and actionable business intelligence. Apache Spark has recently emerged as the framework of choice to address many of these challenges. In this session, we show you how to use Apache Spark on AWS to implement and scale common big data use cases such as real-time data processing, interactive data science, predictive analytics, and more. We will talk about common architectures, best practices to quickly create Spark clusters using Amazon EMR, and ways to integrate Spark with other big data services in AWS.
Learning Objectives:
• Learn why Spark is great for ad-hoc interactive analysis and real-time stream processing.
• How to deploy and tune scalable clusters running Spark on Amazon EMR.
• How to use EMR File System (EMRFS) with Spark to query data directly in Amazon S3.
• Common architectures to leverage Spark with Amazon DynamoDB, Amazon Redshift, Amazon Kinesis, and more.
700 Updatable Queries Per Second: Spark as a Real-Time Web Service – Evan Chan
700 Updatable Queries Per Second: Spark as a Real-Time Web Service. Find out how to use Apache Spark with FiloDB for low-latency queries - something you never thought possible with Spark. Scale it down, not just scale it up!
Building a Scalable Web Crawler with Hadoop by Ahad Rana from CommonCrawl
Ahad Rana, engineer at CommonCrawl, will go over CommonCrawl’s extensive use of Hadoop to fulfill their mission of building an open and accessible web-scale crawl. He will discuss their Hadoop data processing pipeline, including their PageRank implementation, describe techniques they use to optimize Hadoop, discuss the design of their URL Metadata service, and conclude with details on how you can leverage the crawl (using Hadoop) today.
Stream Computing (The Engineer's Perspective) – Ilya Ganelin
This is a ground-zero introduction to stream processing. The focus is on what differentiates stream processing systems - this turns out not to be performance, but how they solve the challenges of scalability, availability, durability, and failure handling.
We look at Storm, Flink, and Apex as case studies to understand the space.
A comprehensive introduction to the Big Data world in the AWS cloud: Hadoop, streaming, batch, Kinesis, DynamoDB, HBase, EMR, Athena, Hive, Spark, Pig, Impala, Oozie, Data Pipeline, security, cost, and best practices.
An over-ambitious introduction to Spark programming, testing and deployment. This deck tries to cover most of the core technologies and design patterns used in SpookyStuff, the fastest query engine for data collection/mashup from the deep web.
For more information please follow: https://github.com/tribbloid/spookystuff
A bug in PowerPoint used to cause the transparent background color not to be rendered properly. This has been fixed in a recent upload.
AWS Big Data Demystified #3 | Zeppelin + spark sql, jdbc + thrift, ganglia, r... – Omid Vahdaty
AWS Big Data Demystified is all about knowledge sharing, because knowledge should be given for free. In this lecture we will discuss the advantages of working with Zeppelin + Spark SQL, JDBC + Thrift, Ganglia, R + SparkR + Livy, and a little bit about Ganglia on EMR.
Subscribe to our YouTube channel to see the video of this lecture:
https://www.youtube.com/channel/UCzeGqhZIWU-hIDczWa8GtgQ?view_as=subscriber
Special supplement "International Competitiveness of SMEs" for Planète PME, 15 June 2010, in the presence of the President of the Republic, Mr. Nicolas Sarkozy
How To Collaborate And Deploy SharePoint – Nick Inglis
Hi, I’m Nick Inglis and I’m the SharePoint Program Manager at AIIM International. AIIM is the community that provides education, research, and best practices to help organizations find, control, and optimize their information… and I am the SharePoint guy at AIIM. You can learn more about us at http://www.AIIM.org. Today we’re going to be talking about how to Collaborate and Adopt SharePoint successfully.
CloudCamp. Paul Hopton, @relayr_cloud - 'The WunderBar - Bootstrapping the Internet of Things' – Chris Purrington
Paul Hopton, @relayr_cloud - 'The WunderBar - Bootstrapping the Internet of Things'
How to move beyond corporate hype, and make the Internet of Things happen (almost) now.
These are the slides from my presentation at CLOUDCOMP 2009 on AppScale, an open source platform for running Google App Engine apps. See our project home page at http://appscale.cs.ucsb.edu or our code page at http://code.google.com/p/appscale
Hadoop in Practice (SDN Conference, Dec 2014) – Marcel Krcah
Do you sit on a big pile of data and want to know how to leverage it in your company? Interested in use cases, examples and practical demos covering the full Hadoop stack? Looking for big-data inspiration?
In this talk we will cover:
- Use cases showing how implementing a Hadoop stack at TheNewMotion drastically helped us, software engineers, with our everyday challenges, and how Hadoop enables our management team, marketing and operations to become more data-driven.
- Practical introduction into our data warehouse, analytical and visualization stack: Apache Pig, Impala, Hue, Apache Spark, IPython notebook and Angular with D3.js.
- Easy deployment of the Hadoop stack to the cloud.
- Hermes - our homegrown command-line tool which helps us automate data-related tasks.
- Examples of exciting machine learning challenges that we are currently tackling.
- Hadoop with Azure and Microsoft stack.
Two popular tools for doing Machine Learning on top of the JVM ecosystem are H2O and SparkML. This presentation compares the two as Machine Learning libraries (it does not consider Spark's data munging capabilities). This work was done during June 2018.
Building a Just-in-Time Application Stack for Analysts – Avere Systems
Slide presentation from Webinar on February 17, 2016.
People in analytical roles are demanding more and more compute and storage to get their jobs done. Instead of building out infrastructure for a few employees or a department, systems engineers and IT managers can find value in creating a compute stack in the cloud to meet the fluctuating demand of their clients.
In this 45-minute webinar, you’ll learn:
- How to identify the right analytical workloads
- How to create a scalable compute environment using the cloud for analysts in under 10 minutes
- How to best manage costs associated with the cloud compute stack
- How to create dedicated client stacks with their own scratch space as well as general access to reference data
Health systems departments, research & development departments, and business analyst groups all face silos of these challenging, compute-intensive use cases. By learning how to quickly build this flexible workflow that can be scaled up and down (or off) instantly, you can support business objectives while efficiently managing costs.
http://bit.ly/1BTaXZP – As organizations look for even faster ways to derive value from big data, they are turning to Apache Spark, an in-memory processing framework that offers lightning-fast big data analytics, providing speed, developer productivity, and real-time processing advantages. The Spark software stack includes a core data-processing engine, an interface for interactive querying, Spark Streaming for streaming data analysis, and growing libraries for machine learning and graph analysis. Spark is quickly establishing itself as a leading environment for doing fast, iterative in-memory and streaming analysis. This talk will give an introduction to the Spark stack, explain how Spark achieves its lightning-fast results, and show how it complements Apache Hadoop. By the end of the session, you’ll come away with a deeper understanding of how you can unlock deeper insights from your data, faster, with Spark.
Real-time Analytics with Apache Kafka and Apache Spark – Rahul Jain
A presentation cum workshop on real-time analytics with Apache Kafka and Apache Spark. Apache Kafka is a distributed publish-subscribe messaging system, while Spark Streaming brings Spark's language-integrated API to stream processing, allowing you to write streaming applications very quickly and easily. It supports both Java and Scala. In this workshop we are going to explore Apache Kafka, ZooKeeper and Spark with a web click-streaming example using Spark Streaming. A clickstream is the recording of the parts of the screen a computer user clicks on while web browsing.
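As a concrete illustration of the pipeline this workshop describes, here is a minimal PySpark Streaming sketch that counts clicks per page read from a Kafka topic. It assumes the Spark 1.x-era receiver API (KafkaUtils.createStream); the topic name, ZooKeeper address, and batch interval are placeholders, not material from the workshop itself.

```python
# Minimal sketch of the Kafka -> Spark Streaming clickstream pipeline
# described above (Spark 1.x-era receiver API; topic name, ZooKeeper
# address, and batch interval are illustrative assumptions).
from pyspark import SparkContext
from pyspark.streaming import StreamingContext
from pyspark.streaming.kafka import KafkaUtils

sc = SparkContext(appName="ClickstreamDemo")
ssc = StreamingContext(sc, batchDuration=5)  # 5-second micro-batches

# Receive click events published to the "clicks" Kafka topic.
clicks = KafkaUtils.createStream(ssc, "zk-host:2181", "clickstream-group",
                                 {"clicks": 1})

# Each message value is assumed to be the page URL that was clicked;
# count clicks per page within every micro-batch and print the counts.
page_counts = (clicks.map(lambda kv: (kv[1], 1))
                     .reduceByKey(lambda a, b: a + b))
page_counts.pprint()

ssc.start()
ssc.awaitTermination()
```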
This is a presentation on Apache Hadoop technology. It may be helpful for beginners who want to learn the terminology of Hadoop, and it contains pictures that illustrate how the technology works.
Thank you.
The analysis of large amounts of data requires a NoSQL database, a software framework that supports distributed computing, and a search engine. On these three fronts Amazon Web Services provides the services DynamoDB, Elastic MapReduce and CloudSearch.
Apache Spark for RDBMS Practitioners: How I Learned to Stop Worrying and Lov... – Databricks
This talk is about sharing experience and lessons learned on setting up and running the Apache Spark service inside the database group at CERN. It covers the many aspects of this change with examples taken from use cases and projects at the CERN Hadoop, Spark, streaming and database services. The talk is aimed at developers, DBAs, service managers and members of the Spark community who are using and/or investigating “Big Data” solutions deployed alongside relational database processing systems. The talk highlights key aspects of Apache Spark that have fuelled its rapid adoption for CERN use cases and for the data processing community at large, including the fact that it provides easy-to-use APIs that unify, under one large umbrella, many different types of data processing workloads from ETL, to SQL reporting, to ML.
Spark can also easily integrate a large variety of data sources, from file-based formats to relational databases and more. Notably, Spark can easily scale up data pipelines and workloads from laptops to large clusters of commodity hardware or on the cloud. The talk also addresses some key points about the adoption process and learning curve around Apache Spark and the related “Big Data” tools for a community of developers and DBAs at CERN with a background in relational database operations.
Paul Johnston CloudCamp London Ethics Climate Change Nov 2019 – Chris Purrington
A 5-minute Lightning Talk - Too Hot for Business as Usual - Climate Change and the next Decade in Tech: how climate change is going to affect employee engagement, hiring, tech/cloud choice, emissions accounting and legislation, and how companies are going to deal with it all.
Dr Caitlin McDonald CloudCamp London - Sustainable Digital Ethics through Evo... – Chris Purrington
A 5-minute Lightning Talk introducing Dr McDonald's 'ethics through evolution' model, matching different ethical schools of thought (and therefore different tools/practical approaches) to different stages of product/service maturity.
Chris Swan's introduction to the 38th CloudCamp London, "Ethics and Corporate Social Responsibility". Chris uses Wardley Mapping to introduce the evening's Lightning Talks.
CloudCamp. Julian Fischer Anynines - migrating a cloud foundry from vm war... – Chris Purrington
Julian Fischer @railshoster - 'VMware to OpenStack with a running Cloud Foundry'. Learn from experience recently collected at anynines on how a VMware-based Cloud Foundry is moved to an OpenStack Havana infrastructure with barely any downtime.
CloudCamp. Philip Carey: 'Grey Cloud' do you pass the Yorkshire Test. A lig... – Chris Purrington
Grey Clouds: do you pass the Yorkshire Test? A light-hearted look at selling Cloud apps to the older generation - do the apps and sales pitches pass 3 simple tests based on sound Yorkshire principles?
The Art of the Pitch: WordPress Relationships and Sales – Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
GraphRAG is All You Need? LLM & Knowledge Graph – Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Essentials of Automations: Optimizing FME Workflows with Parameters – Safe Software
Are you looking to streamline your workflows and boost your projects’ efficiency? Do you find yourself searching for ways to add flexibility and control over your FME workflows? If so, you’re in the right place.
Join us for an insightful dive into the world of FME parameters, a critical element in optimizing workflow efficiency. This webinar marks the beginning of our three-part “Essentials of Automation” series. This first webinar is designed to equip you with the knowledge and skills to utilize parameters effectively: enhancing the flexibility, maintainability, and user control of your FME projects.
Here’s what you’ll gain:
- Essentials of FME Parameters: Understand the pivotal role of parameters, including Reader/Writer, Transformer, User, and FME Flow categories. Discover how they are the key to unlocking automation and optimization within your workflows.
- Practical Applications in FME Form: Delve into key user parameter types including choice, connections, and file URLs. Allow users to control how a workflow runs, making your workflows more reusable. Learn to import values and deliver the best user experience for your workflows while enhancing accuracy.
- Optimization Strategies in FME Flow: Explore the creation and strategic deployment of parameters in FME Flow, including the use of deployment and geometry parameters, to maximize workflow efficiency.
- Pro Tips for Success: Gain insights on parameterizing connections and leveraging new features like Conditional Visibility for clarity and simplicity.
We’ll wrap up with a glimpse into future webinars, followed by a Q&A session to address your specific questions surrounding this topic.
Don’t miss this opportunity to elevate your FME expertise and drive your projects to new heights of efficiency.
UiPath Test Automation using UiPath Test Suite series, part 4 – DianaGray10
Welcome to UiPath Test Automation using UiPath Test Suite series part 4. In this session, we will cover Test Manager overview along with SAP heatmap.
The UiPath Test Manager overview with SAP heatmap webinar offers a concise yet comprehensive exploration of the role of a Test Manager within SAP environments, coupled with the utilization of heatmaps for effective testing strategies.
Participants will gain insights into the responsibilities, challenges, and best practices associated with test management in SAP projects. Additionally, the webinar delves into the significance of heatmaps as a visual aid for identifying testing priorities, areas of risk, and resource allocation within SAP landscapes. Through this session, attendees can expect to enhance their understanding of test management principles while learning practical approaches to optimize testing processes in SAP environments using heatmap visualization techniques.
What will you get from this session?
1. Insights into SAP testing best practices
2. Heatmap utilization for testing
3. Optimization of testing processes
4. Demo
Topics covered:
Execution from the test manager
Orchestrator execution result
Defect reporting
SAP heatmap example with demo
Speaker:
Deepak Rai, Automation Practice Lead, Boundaryless Group and UiPath MVP
Kubernetes & AI - Beauty and the Beast!?! @KCD Istanbul 2024 – Tobias Schneck
As AI technology pushes into IT, I have been wondering, as an “infrastructure container Kubernetes guy”, how does this fancy AI technology get managed from an infrastructure operations view? Is it possible to apply our lovely cloud native principles as well? What benefits could the two technologies bring to each other?
Let me take these questions and provide you a short journey through existing deployment models and use cases for AI software. Using practical examples, we discuss what cloud/on-premise strategy we may need in order to apply it to our own infrastructure and get it to work from an enterprise perspective. I want to give an overview of infrastructure requirements and technologies, and of what could be beneficial or limiting for your AI use cases in an enterprise environment. An interactive demo will give you some insights into the approaches I have already got working for real.
DevOps and Testing slides at DASA Connect – Kari Kakkonen
Slides by me and Rik Marselis at the 30.5.2024 DASA Connect conference. We discuss what testing is, then what agile testing is, and finally what testing in DevOps is. Afterwards we had a lovely workshop with the participants, trying to find different ways to think about quality and testing in different parts of the DevOps infinity loop.
Epistemic Interaction - tuning interfaces to provide information for AI support – Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
Slack (or Teams) Automation for Bonterra Impact Management (fka Social Soluti... – Jeffrey Haguewood
Sidekick Solutions uses Bonterra Impact Management (fka Social Solutions Apricot) and automation solutions to integrate data for business workflows.
We believe integration and automation are essential to user experience and the promise of efficient work through technology. Automation is the critical ingredient to realizing that full vision. We develop integration products and services for Bonterra Case Management software to support the deployment of automations for a variety of use cases.
This video focuses on the notifications, alerts, and approval requests using Slack for Bonterra Impact Management. The solutions covered in this webinar can also be deployed for Microsoft Teams.
Interested in deploying notification automations for Bonterra Impact Management? Contact us at sales@sidekicksolutionsllc.com to discuss next steps.
Dev Dives: Train smarter, not harder – active learning and UiPath LLMs for do... – UiPathCommunity
💥 Speed, accuracy, and scaling – discover the superpowers of GenAI in action with UiPath Document Understanding and Communications Mining™:
See how to accelerate model training and optimize model performance with active learning
Learn about the latest enhancements to out-of-the-box document processing – with little to no training required
Get an exclusive demo of the new family of UiPath LLMs – GenAI models specialized for processing different types of documents and messages
This is a hands-on session specifically designed for automation developers and AI enthusiasts seeking to enhance their knowledge in leveraging the latest intelligent document processing capabilities offered by UiPath.
Speakers:
👨🏫 Andras Palfi, Senior Product Manager, UiPath
👩🏫 Lenka Dulovicova, Product Program Manager, UiPath
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... – DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
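Since PowSyBl exposes its functionality to Python developers through a binding (pypowsybl), a minimal sketch of the kind of workflow such a notebook walks through might look like the following; treat it as an illustrative assumption rather than the webinar's actual notebook material.

```python
# Minimal pypowsybl sketch: load a bundled sample network and run an
# AC power flow. The IEEE 14-bus factory network and the exact printed
# columns are illustrative choices, not webinar content.
import pypowsybl as pp

# Create one of the bundled sample networks (IEEE 14-bus test case).
network = pp.network.create_ieee14()

# Run an AC power flow; one result is returned per synchronous component.
results = pp.loadflow.run_ac(network)
for result in results:
    print(result.status, result.iteration_count)

# Inspect computed bus voltages as a pandas DataFrame.
print(network.get_buses()[["v_mag", "v_angle"]].head())
```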
State of ICS and IoT Cyber Threat Landscape Report 2024 preview – Prayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio's cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors and newer malware, including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Securing your Kubernetes cluster: a step-by-step guide to success! – KatiaHIMEUR1
Today, after several years of existence, an extremely active community and an ultra-dynamic ecosystem, Kubernetes has established itself as the de facto standard in container orchestration. Thanks to a wide range of managed services, it has never been so easy to set up a ready-to-use Kubernetes cluster.
However, this ease of use means that the subject of security in Kubernetes is often left for later, or even neglected. This exposes companies to significant risks.
In this talk, I'll show you step-by-step how to secure your Kubernetes cluster for greater peace of mind and reliability.
2. Topics
• Auto-Scaling Using Amazon EC2 and Scalr
• Nginx and Memcached on EC2, a 400% boost!
• NASDAQ exchange re-play on AWS
• Persistent Django on Amazon EC2 and EBS
• Taking Massive Distributed Computing to the Common Man - Hadoop on Amazon EC2/S3
9. Scalr overview
• By using Scalr, you can create a server farm that uses prebuilt AMIs for load balancing, web servers, and databases. You also can customize a generic AMI, which you can use to host your actual application.
• Scalr monitors the health of the entire server farm, ensuring that instances stay running and that load averages stay below a configurable threshold. If an instance crashes, another one of the proper type will be launched and added to the load balancer.
10. Scalr (2)
• Scalr is an open source, fully redundant, self-curing, and self-scaling hosting environment that uses Amazon EC2.
• Scalr allows network administrators to create virtual server farms, using prebuilt components. Scalr uses four Amazon Machine Images (AMIs) for load balancing, databases, application server, and a generic base image.
• Administrators can preconfigure one machine and, when the load warrants, bring online additional machines with the same image, to handle the increased requests.
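To make the monitor-and-replace behaviour concrete, here is a minimal Python sketch of the kind of loop Scalr automates, written against the boto 2 EC2 API. The region, AMI ID, role tag, farm size, and polling interval are illustrative assumptions, not Scalr's actual implementation.

```python
# Illustrative sketch (not Scalr's actual code) of the monitor-and-replace
# loop described above, using the boto 2 EC2 API. Region, AMI ID, and the
# "role" tag are hypothetical placeholders.
import time
import boto.ec2

REGION = "us-east-1"        # assumed region
WEB_AMI = "ami-00000000"    # placeholder AMI for the web-server role
MIN_INSTANCES = 2           # desired farm size for this role

conn = boto.ec2.connect_to_region(REGION)

def running_instances(role="web"):
    """Return running instances tagged with the given role."""
    reservations = conn.get_all_instances(
        filters={"tag:role": role, "instance-state-name": "running"})
    return [i for r in reservations for i in r.instances]

def launch_replacement(role="web"):
    """Launch a fresh instance of the proper type, as Scalr would."""
    reservation = conn.run_instances(WEB_AMI, instance_type="m1.small")
    instance = reservation.instances[0]
    instance.add_tag("role", role)
    return instance

while True:
    missing = MIN_INSTANCES - len(running_instances())
    for _ in range(max(0, missing)):   # farm fell below capacity
        launch_replacement()           # bring replacements online
    time.sleep(60)                     # poll once a minute
```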
13. Originally developed by Igor Sysoev for rambler.ru (second largest Russian web-site), it is a high-performance HTTP server / reverse proxy known for its stability, performance, and ease of use. The great track record, a lot of great modules, and an active development community have rightfully earned it a steady uptick of users.
14. memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load.
“Memcached, the darling of every web-developer, is capable of turning almost any application into a speed-demon. Benchmarking one of my own Rails applications resulted in ~850 req/s on commodity, non-optimized hardware - more than enough in the case of this application. However, what if we took Mongrel out of the equation? Nginx, by default, comes prepackaged with the Memcached module, which allows us to bypass the Mongrel (from rubyforge) servers and talk to Memcached directly. Same hardware, and a quick test later: ~3,550 req/s, or almost a 400% improvement!”
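The quoted trick works because nginx's memcached module can answer a GET straight from the cache, provided the application has already stored the fully rendered page under a key nginx knows how to compute (typically the request URI). Below is a hedged app-side sketch in Python; the library choice (python-memcached), key scheme, and expiry are assumptions for illustration.

```python
# Sketch of the app-side half of the nginx + memcached trick quoted above:
# store the fully rendered page under the request URI, so that nginx's
# memcached module can serve later hits without touching the app server.
# Library and key scheme are illustrative assumptions (python-memcached;
# key = raw URI, matching "set $memcached_key $uri;" in nginx.conf).
import memcache

mc = memcache.Client(["127.0.0.1:11211"])

def render_product_page(product_id):
    # Placeholder for the expensive template rendering / DB work.
    return "<html><body>Product %d</body></html>" % product_id

def handle_request(uri, product_id):
    html = render_product_page(product_id)
    # Cache for 5 minutes; nginx will look this key up on the next GET.
    mc.set(uri, html, time=300)
    return html

if __name__ == "__main__":
    print(handle_request("/products/42", 42))
```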
22. Credit:
Thomas Brox Røst, visiting researcher, Decision Systems Group, Harvard
Persistent Django on Amazon EC2 and EBS - The easy way
thomas.broxrost.com
tinyurl.com/6b48g9
23. Now that Amazon’s Elastic Block Store (EBS) is publicly available, running a complete Django installation on Amazon Web Services (AWS) is easier than ever.
---
EBS provides persistent storage, which means that the Django database is kept safe even after the Django EC2 instances terminate.
24. To set up Django with a persistent PostgreSQL database on AWS:
Set up an AWS account
Download and install the Elasticfox Firefox extension
Add your AWS credentials to Firefox
Create a new EC2 security group (see the sketch below)
By default, EC2 instances are an introverted lot: they prefer keeping to themselves and don’t expose any of their ports to the outside world. We will be running a web application on port 8000, so port 8000 has to be opened. (Normally we would be opening port 80, but since I will only be using the Django development web server, port 8000 is preferable.) SSH access is also essential, so port 22 should be opened as well. To make this happen we must create a new security group where these ports are opened.
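As referenced above, the security-group step can be sketched with the boto 2 EC2 API of the same era; the group name, region, and open-to-the-world CIDR are illustrative assumptions (in practice you would restrict SSH to your own address).

```python
# Sketch of the "create a new EC2 security group" step above, using the
# boto 2 EC2 API. Group name, region, and CIDR ranges are assumptions.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

sg = conn.create_security_group("django-dev", "Django dev server access")
sg.authorize("tcp", 22, 22, "0.0.0.0/0")      # SSH access
sg.authorize("tcp", 8000, 8000, "0.0.0.0/0")  # Django development server
```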
25. Set up a key pair
Launch an EC2 instance
Connect to your new instance (ssh using PuTTY)
- Install Subversion
- Install, initialize and launch PostgreSQL
- Modify the PostgreSQL config to avoid username/password problems
- Restart PostgreSQL to enable the new security policy
- Set up a database for Django
- Install Django (checkout from SVN)
- Install psycopg2 (for database access from Python)
Set up a Django project
Test the installation
Launch the dev server
Create a Django app
Create and mount an EBS volume (see the sketch after this list)
Mount the filesystem
Move the database to persistent storage (with the server stopped)
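The "create and mount an EBS volume" step referenced above can be sketched with the boto 2 EC2 API; the volume size, availability zone, instance ID, and device name below are illustrative assumptions, and the on-instance steps (filesystem, mount, relocating the PostgreSQL data directory) are summarized in comments since they run over SSH rather than through the API.

```python
# Sketch of the "create and attach an EBS volume" step above, using the
# boto 2 EC2 API. Size, availability zone, instance ID, and device name
# are illustrative assumptions.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

volume = conn.create_volume(10, "us-east-1a")        # 10 GB persistent volume
conn.attach_volume(volume.id, "i-00000000", "/dev/sdh")

# On the instance itself the remaining steps are shell work, roughly:
#   mkfs.ext3 /dev/sdh               # create a filesystem (first time only)
#   mkdir /vol && mount /dev/sdh /vol
#   /etc/init.d/postgresql stop      # stop the server before moving data
#   mv /var/lib/postgresql /vol/     # relocate the data directory
#   ln -s /vol/postgresql /var/lib/postgresql
#   /etc/init.d/postgresql start     # database now survives instance loss
```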
41. Hadoop
• Parallel Computing platform
– Distributed FileSystem (HDFS)
– Parallel Processing model (Map/Reduce)
– Express Computation in any language
– Job execution for Map/Reduce jobs (scheduling + localization + retries/speculation)
• Open-Source
– Most popular Apache project!
– Highly Extensible Java Stack (@ expense of Efficiency)
– Develop/Test on EC2!
• Ride the commodity curve:
– Cheap (but reliable) shared nothing storage
– Data Local computing (don’t need high speed networks)
– Highly Scalable (@ expense of Efficiency)
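Since the deck stresses that Map/Reduce computations can be expressed in any language, here is the canonical word-count example as a Hadoop Streaming mapper and reducer in Python; the single-file layout and invocation are assumptions, shown only to make the map/reduce contract concrete.

```python
# Canonical word-count mapper and reducer for Hadoop Streaming,
# illustrating the "Express Computation in any language" point above.
# Assumed invocation: pass this file as both -mapper ("map" mode) and
# -reducer (default mode) to the hadoop-streaming jar.
import sys

def mapper():
    # Emit (word, 1) for every word on stdin.
    for line in sys.stdin:
        for word in line.split():
            print("%s\t1" % word)

def reducer():
    # Input arrives sorted by key; sum the counts per word.
    current_word, count = None, 0
    for line in sys.stdin:
        word, value = line.rstrip("\n").split("\t")
        if word != current_word:
            if current_word is not None:
                print("%s\t%d" % (current_word, count))
            current_word, count = word, 0
        count += int(value)
    if current_word is not None:
        print("%s\t%d" % (current_word, count))

if __name__ == "__main__":
    mapper() if sys.argv[1:] == ["map"] else reducer()
```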
47. Why HIVE?
• Large installed base of SQL users
– i.e. map-reduce is for ultra-geeks
– much, much easier to write an SQL query
• Analytics SQL queries translate really well to map-reduce
• Files are an insufficient data management abstraction
– Tables, Schemas, Partitions, Indices
49. Hive Query Language
• Basic SQL
– From clause subquery
– ANSI JOIN (equi-join only)
– Multi-table Insert
– Multi group-by
– Sampling
– Objects traversal
• Extensibility
– Pluggable Map-reduce scripts using TRANSFORM
50. Data Warehousing at Facebook
(Scribe is a server for aggregating log data streamed in real time from a large number of servers. It is designed to be scalable, extensible without client-side modification, and robust to failure of the network or any specific machine.)
[Architecture diagram: Web Servers → Scribe Servers → Filers → Hive on Hadoop Cluster → Oracle RAC / Federated MySQL]
51. Hadoop Usage @ Facebook
• Data warehouse running Hive
• 600 machines, 4800 cores
• 3200 jobs per day
• 50+ engineers have used Hadoop
• Data statistics:
– Total Data: ~2.5PB
– Net Data added/day: ~15TB
• 6TB of uncompressed source logs
• 4TB of uncompressed dimension data reloaded daily
– Compression Factor ~5x (gzip, more with bzip)
• Usage statistics:
– 3200 jobs/day with 800K tasks (map-reduce tasks)/day
– 55TB of compressed data scanned daily
– 15TB of compressed output data written to HDFS
– 80 MM compute minutes/day
52. Hadoop Job types @ Facebook
• Production jobs: load data, compute statistics, detect spam, etc
• Long experiments: machine learning, etc
• Small ad-hoc queries: Hive jobs, sampling
• GOAL: Provide fast response times for small jobs and guaranteed service levels for production jobs
53. Usage patterns in Yahoo
• ETL
– Put large data sources (e.g. log files) onto the Hadoop File System
– Perform aggregations, transformations, normalizations on the data
– Load into RDBMS / data mart
• Reporting and Analytics
– Run canned and ad-hoc queries over large data
– Run analytics and data mining operations on large data
– Produce reports for end-user consumption or loading into data mart
54. Usage patterns in Yahoo
• Data Processing Pipelines
– Multi-step pipelines for data processing
– Coordination, scheduling, data collection and publishing of feeds
– SLA carrying, regularly scheduled jobs
• Machine Learning & Graph Algorithms
– Traverse large graphs and data sets, building models and classifiers
– Implement machine learning algorithms over massive data sets
• General Back end processing
– Implement significant portions of back-end, batch oriented processing on the grid
– General computation framework
– Simplify back-end architecture
55. What is Hadoop Pig
Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs.
http://www.cloudera.com/hadoop-training-pig-introduction
58. Thanks for the kind sponsorship to the AWS LONDON USER GROUP from [sponsor logo]
Speaker notes:
- 200 bytes/transaction
- Milk – assuming each transaction is for 1 gallon
- Who needs another programming language (PL/SQL)
- Gotchas later on (about networking trends)
- Anyone can rent a computer!!!! (UC Berkeley)
- UC Berkeley EC2 example
- Point out that now we know how HDFS works – we can run maps close to data
- Nomenclature: Core switch and Top of Rack
- Simple map-reduce is easy – but it can get complicated very quickly.
- Multi-table inserts and multi group-bys allow us to reduce the number of scans required. Poor man’s alternative to MQO.