Twitter provides a platform for user-generated content in the form of short messages called tweets. It handles a massive volume of data, with over 230 million tweets and 2 billion search queries per day. Twitter has developed a customized search and indexing system to handle this scale. It uses a modular system that is scalable, cost-effective, and allows for incremental development. The system includes components for crawling Twitter data, preprocessing and aggregating tweets, building an inverted index, and distributing the index across server machines for low-latency search.
2. TWITTER
User-generated content
Short messages of 140 characters, called Tweets
Informal, free-form language
Diverse topics
Images, videos and links
Spam
Very high volume → information overload

“When you've got 5 minutes to fill, Twitter is a great way to fill 35 minutes”
@mattcutts
3. TWITTER STATS
230 million Tweets per day
2 billion search queries per day
< 10 s indexing latency
50 ms average query response time
1 billion registered users
143,199 Tweets per second (peak)
5. WHAT TO SEARCH IN TWITTER?
Tweets
Images (Tweets that have images)
Users
News (Tweets that have links)
6. SEARCHING FOR “IPAD” ON TWITTER
More than 50 tweets mentioning “iPad” posted within 1 minute
7. CUSTOMIZED IR FOR TWITTER
Features of Twitter’s IR system:
§ Modularity
§ Scalability
§ Cost effectiveness
§ Simple interface
§ Incremental development
8. CUSTOMIZED IR FOR TWITTER
The system consists of four main parts:
§ A batched data aggregation and preprocessing pipeline
§ An inverted index builder
§ Earlybird shards
§ Earlybird roots
9. CRAWLING TWITTER
HoseBird Client (hbc) API

// builder is an hbc ClientBuilder configured with hosts and auth elsewhere
Client hosebirdClient = builder.build();
StatusesFilterEndpoint endpoint = new StatusesFilterEndpoint();
// Optional: set up some followings and track terms
List<Long> followings = Lists.newArrayList(1234L, 566788L);
List<String> terms = Lists.newArrayList("twitter", "api");
endpoint.followings(followings);
endpoint.trackTerms(terms);
10. INDEXING TWITTER
On November 18, 2014, Twitter Inc. announced that Twitter now indexes every public Tweet since 2006.
§ Temporal sharding: The Tweet corpus was first divided into multiple time tiers.
§ Hash partitioning: Within each time tier, data was divided into partitions based on a hash function.
§ Earlybird: Within each hash partition, data was further divided into chunks called segments. Segments were grouped together based on how many could fit on each Earlybird machine.
§ Replicas: Each Earlybird machine is replicated to increase serving capacity and resilience.
11. DATA AGGREGATION
§ Engagement aggregator: Counts the number of engagements for each Tweet in a
given day. These engagement counts are used later as an input in scoring each Tweet.
§ Aggregation: Joins multiple data sources together based on Tweet ID.
§ Ingestion: Performs different types of preprocessing — language identification,
tokenization, text feature extraction, URL resolution and more.
§ Scorer: Computes a score based on features extracted during Ingestion. For the
smaller historical indices, this score determined which Tweets were selected into the
index.
§ Partitioner: Divides the data into smaller chunks through our hashing algorithm. The
final output is stored into HDFS.
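A minimal sketch of the Aggregation and Partitioner steps, assuming toy in-memory data and a simple modulo hash (all names here are hypothetical, not Twitter's code):

```python
# Hypothetical sketch of the join-then-partition steps: engagement counts
# and Tweet records are joined on Tweet ID, then hashed into partitions.

tweets = {101: "new ipad announced", 102: "hello world"}
engagements = {101: 57, 102: 3}  # illustrative daily engagement counts

def aggregate(tweets, engagements):
    """Join the two data sources on Tweet ID."""
    return {tid: {"text": text, "engagements": engagements.get(tid, 0)}
            for tid, text in tweets.items()}

def partition(records, num_partitions=4):
    """Divide joined records into chunks by hashing the Tweet ID."""
    buckets = [dict() for _ in range(num_partitions)]
    for tid, rec in records.items():
        buckets[tid % num_partitions][tid] = rec
    return buckets

joined = aggregate(tweets, engagements)
buckets = partition(joined)
print(buckets[1])  # → {101: {'text': 'new ipad announced', 'engagements': 57}}
```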
13. INVERTED INDEX
§ Segment partitioner: Groups multiple batches of preprocessed daily Tweet data from
the same partition into bundles. We call these bundles “segments.”
§ Segment indexer: Inverts each Tweet in a segment, builds an inverted index and
stores the inverted index into HDFS.
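A minimal inverted-index sketch along these lines, with whitespace tokenization standing in for Twitter's real Ingestion-stage tokenizer:

```python
from collections import defaultdict

# Minimal sketch of the segment indexer: invert each Tweet in a segment
# into term -> sorted posting list of Tweet IDs (tokenization simplified).

def build_inverted_index(segment):
    index = defaultdict(list)
    for tweet_id, text in segment:
        for term in set(text.lower().split()):
            index[term].append(tweet_id)
    return {term: sorted(ids) for term, ids in index.items()}

segment = [(1, "iPad launch day"), (2, "waiting for the iPad"), (3, "launch party")]
index = build_inverted_index(segment)
print(index["ipad"])    # → [1, 2]
print(index["launch"])  # → [1, 3]
```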
15. SEARCH PROCESS
Earlybird shards:
The inverted index builders produced hundreds of inverted index segments. These segments were then distributed to machines called Earlybirds. Since each Earlybird machine could only serve a small portion of the full Tweet corpus, a two-dimensional sharding scheme was introduced to distribute index segments onto the serving Earlybirds:
§ Multiple time tiers
§ Hash partitioning
Each Earlybird machine is replicated to increase serving capacity and resilience.
Earlybird roots:
The roots perform a two-level scatter-gather, merging search results and term-statistics histograms.
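The two-level scatter-gather can be sketched as follows, assuming each shard returns hits as `(score, tweet_id)` pairs sorted by descending score (the merging of term-statistics histograms is omitted):

```python
import heapq

# Sketch of a two-level scatter/gather: intermediate roots fan a query
# out to their Earlybird shards and merge the sorted hit lists, then the
# top-level root merges the intermediate results again.

def gather(result_lists, k=3):
    """Merge already-sorted result lists, keeping the top-k by score."""
    merged = heapq.merge(*result_lists, key=lambda hit: -hit[0])
    return list(merged)[:k]

# Level 1: each intermediate root merges hits from its shards.
root_a = gather([[(0.9, 11), (0.4, 12)], [(0.7, 21)]])
root_b = gather([[(0.8, 31)], [(0.6, 41), (0.5, 42)]])

# Level 2: the top root merges the intermediate results.
top = gather([root_a, root_b])
print(top)  # → [(0.9, 11), (0.8, 31), (0.7, 21)]
```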
18. RANKING
§ Different types of content are searched separately
§ Uniscores: used as a means to blend different content types into the search results
§ Score unification: individual content is assigned a “raw” score, which is then converted into a uniscore
§ Burst: used to filter out content types with low or no burst; also used to boost the score of corresponding content types, as a feature for a multi-class classifier that predicts the most likely content type for a query, and in additional components of the ranking system
19. RANKING
19
Search ranker chose News1 followed by Tweet1 so far and is presented with three candidatesTweet2,
User Group, and News2 to pick the content after Tweet1.
News2 has the highest uniscore but search ranker picks Tweet2, instead of News2 as we penalize change
in type between consecutive content by decreasing the score of News2 from 0.65 to 0.55, for instance
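The type-change penalty in this example can be sketched as below; the 0.10 penalty matches the 0.65 to 0.55 illustration above, and Tweet2's uniscore of 0.60 is a hypothetical value:

```python
# Sketch of the type-change penalty: a candidate whose content type
# differs from the previously picked item has its uniscore reduced by a
# fixed penalty. The penalty size here is illustrative, not Twitter's.

TYPE_CHANGE_PENALTY = 0.10

def adjusted_score(candidate, previous_type):
    kind, uniscore = candidate
    if kind != previous_type:
        uniscore -= TYPE_CHANGE_PENALTY
    return uniscore

def pick_next(candidates, previous_type):
    """Choose the candidate with the highest penalty-adjusted uniscore."""
    return max(candidates, key=lambda c: adjusted_score(c, previous_type))

# Previous pick was a Tweet; News2's 0.65 drops below Tweet2's 0.60.
candidates = [("tweet", 0.60), ("user", 0.40), ("news", 0.65)]
print(pick_next(candidates, previous_type="tweet"))  # → ('tweet', 0.6)
```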
20. RANKING
A query for “photo” shows three sequences of Tweet counts over eight 15-minute buckets, from bucket 1 (2 hours ago) to bucket 8 (most recent).
Normalized image and news counts are matched to one of n = 5 states: one average, two above, and two below. The matched-state curves show a more stable quantization of the original sequence, which has the effect of removing small noisy peaks.
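A sketch of this n = 5 quantization, with illustrative state boundaries (the real thresholds are not given here):

```python
# Sketch of the n=5 state quantization: a normalized count is snapped to
# one of five levels (two below average, average, two above), which
# smooths out small noisy peaks. Thresholds are illustrative assumptions.

STATES = [-2, -1, 0, 1, 2]           # 2 below, average, 2 above
THRESHOLDS = [-1.5, -0.5, 0.5, 1.5]  # boundaries between states

def quantize(value):
    """Map a normalized count to one of the five states."""
    for state, upper in zip(STATES, THRESHOLDS):
        if value < upper:
            return state
    return STATES[-1]

sequence = [0.1, 0.4, 1.7, 1.9, 0.2, -0.6, -0.4, 0.3]
print([quantize(v) for v in sequence])  # → [0, 0, 2, 2, 0, -1, 0, 0]
```

Small fluctuations around the average all map to state 0, while only a sustained spike reaches the upper states, which is the smoothing effect described above.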
21. REFERENCES
§ Anirudh Todi, “TSAR, a TimeSeries AggregatoR”, https://blog.twitter.com/2014/tsar-a-timeseries-aggregator
§ Youngin Shin, “New Twitter search results”, https://blog.twitter.com/2013/new-twitter-search-results
§ Yi Zhuang, “Building a complete Tweet index”, https://blog.twitter.com/2014/building-a-complete-tweet-index
§ J. Kleinberg, “Bursty and Hierarchical Structure in Streams”, Proc. 8th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining, 2002
§ Brendan O'Connor, Michel Krieger, and David Ahn, “TweetMotif: Exploratory Search and Topic Summarization for Twitter”, Proc. of ICWSM, 2010