The document discusses the monitor object pattern, which provides a thread-safe interface for shared objects. It allows multiple threads to safely use a shared passive object by encapsulating it within a monitor class that implements locking and notification, so only one thread at a time can access the shared state and inconsistencies are prevented. The pattern simplifies concurrency control and lets the monitor schedule method execution, but it can complicate extensibility and lead to the inheritance anomaly because synchronization is tightly coupled to the object's methods. Examples provided are a thread-safe queue and a connection pool.
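As a concrete illustration of the thread-safe queue example, here is a minimal Kotlin sketch of a monitor-style bounded queue; the class and method names are illustrative, not taken from the document.

```kotlin
import java.util.concurrent.locks.ReentrantLock
import kotlin.concurrent.withLock

// Monitor-object sketch: the queue's state is only reachable through methods that
// acquire the monitor lock, and condition variables schedule which thread proceeds.
class MonitorQueue<T>(private val capacity: Int) {
    private val lock = ReentrantLock()
    private val notFull = lock.newCondition()
    private val notEmpty = lock.newCondition()
    private val items = ArrayDeque<T>()

    fun put(item: T) = lock.withLock {
        while (items.size == capacity) notFull.await()   // wait until there is room
        items.addLast(item)
        notEmpty.signal()                                 // wake one waiting consumer
    }

    fun take(): T = lock.withLock {
        while (items.isEmpty()) notEmpty.await()          // wait until an item exists
        val item = items.removeFirst()
        notFull.signal()                                  // wake one waiting producer
        item
    }
}
```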
This document discusses the core ideas and technologies used in the Android stack created by Juno, including immutability, concise code, modularity, testability, and the use of Kotlin and RxJava. The stack emphasizes safety through immutable and final classes, easy concurrency, and null safety. It aims for concise code through properties, type inference, and stream-like APIs. Modularity is achieved through clean architecture principles, and testability comes from separation of concerns and isolating the business logic. Kotlin and RxJava integration works well overall, though some challenges remain around testing and null safety.
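A small Kotlin sketch of the ideas named above (immutable value types, null safety, concise properties); the Driver type and its fields are hypothetical, not Juno's actual code.

```kotlin
// Illustrative only: read-only properties make instances immutable, and the
// nullable type makes "no rating yet" explicit in the type system.
data class Driver(
    val id: String,
    val rating: Double?
)

fun ratingLabel(driver: Driver): String =
    driver.rating?.let { "%.1f".format(it) } ?: "unrated"   // safe-call + elvis instead of null checks
```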
Alex Lang from the University of Oxford presented on categorical quantum computing. He studies category theory and quantum computing. His work develops a graphical language, the ZX-calculus, that represents quantum programs and circuits using red and green dots connected by wires. The rewrite rules of the ZX-calculus make it possible to determine whether two programs are equivalent and to reason about how programs execute. The approach is simple yet universal for representing quantum computations.
This document discusses the challenges of developing highly scalable data-centric apps and how Google App Engine addresses them. It notes issues like scalability, security, replication and maintenance. Google App Engine solves these problems through techniques like distribution, replication, load balancing and using BigTable and GFS. It also provides details on how App Engine implements entities and attributes, key field types, and how duplicate keys can overwrite data. The document concludes with instructions on creating configuration files and using queries with App Engine.
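A minimal sketch of the duplicate-key behaviour described above, assuming the classic low-level App Engine Datastore API for Java; the kind and property names are made up for illustration.

```kotlin
import com.google.appengine.api.datastore.DatastoreServiceFactory
import com.google.appengine.api.datastore.Entity
import com.google.appengine.api.datastore.Query

fun datastoreDemo() {
    val datastore = DatastoreServiceFactory.getDatastoreService()

    val first = Entity("Greeting", "hello")      // kind "Greeting", key name "hello"
    first.setProperty("content", "Hi")
    datastore.put(first)

    val second = Entity("Greeting", "hello")     // same kind + key name: this put replaces the entity above
    second.setProperty("content", "Hello again")
    datastore.put(second)

    // Query all entities of the kind; only the second version of "hello" remains.
    datastore.prepare(Query("Greeting")).asIterable().forEach {
        println("${it.key} -> ${it.getProperty("content")}")
    }
}
```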
This document discusses server modeling with MySQL, focusing on scalability and consistency issues. It covers:
- The easy problem: the single web server becomes the bottleneck, which can be solved by simply adding more web servers.
- The harder problem: the database layer becomes the bottleneck; adding more web servers does not help, so the database must be partitioned or scaled with other techniques.
- Common solutions such as master-slave replication, where one database acts as the master and updates are replicated to slave databases that balance read traffic. This gives high read throughput but weak consistency (a minimal read/write-splitting sketch follows this list).
- More advanced solutions such as two-phase commit and MySQL's semi-synchronous replication, which provide stronger consistency between master and slave databases during writes.
- MySQL Cluster, which stores data partitioned across multiple data nodes.
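Following up on the master-slave item above, here is a minimal Kotlin/JDBC sketch of read/write splitting: writes go to the master, reads are balanced across replicas. The hostnames and the round-robin policy are illustrative assumptions, not from the document.

```kotlin
import java.sql.Connection
import java.sql.DriverManager

class ReplicatedDataSource(
    private val masterUrl: String,
    private val replicaUrls: List<String>
) {
    private var next = 0

    // All writes go to the master so there is a single source of truth.
    fun writeConnection(): Connection = DriverManager.getConnection(masterUrl)

    // Reads round-robin over replicas; they may be slightly stale under async replication.
    fun readConnection(): Connection {
        val url = replicaUrls[next % replicaUrls.size]
        next += 1
        return DriverManager.getConnection(url)
    }
}

// Usage (placeholder hosts):
// val ds = ReplicatedDataSource(
//     "jdbc:mysql://master:3306/app",
//     listOf("jdbc:mysql://replica1:3306/app", "jdbc:mysql://replica2:3306/app")
// )
```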
The document discusses consistent hashing, a technique for distributing data across multiple servers. Each server and each data item is assigned a hash value, and a data item is stored on the first server whose hash value comes after the item's hash value. When servers are added or removed, only a fraction of the data needs to be redistributed. The key ideas are hashing servers and keys into the same space and treating that space as a circular ring when determining data placement.
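A short Kotlin sketch of a consistent-hash ring along the lines described above; the hash function and virtual-node count are illustrative choices, not from the document.

```kotlin
import java.util.TreeMap

// Servers and keys share one hash space; a key belongs to the first node clockwise
// from its hash, wrapping around the ring. Virtual nodes smooth the distribution.
class HashRing(nodes: List<String>, private val vnodes: Int = 16) {
    private val ring = TreeMap<Int, String>()

    init {
        nodes.forEach { add(it) }
    }

    private fun hash(s: String): Int = s.hashCode()   // stand-in for a real hash such as MD5 or murmur

    fun add(node: String) {
        repeat(vnodes) { i -> ring[hash("$node#$i")] = node }
    }

    fun remove(node: String) {
        repeat(vnodes) { i -> ring.remove(hash("$node#$i")) }
    }

    // First node whose position follows the key's hash, wrapping to the start if needed.
    fun nodeFor(key: String): String {
        val entry = ring.ceilingEntry(hash(key)) ?: ring.firstEntry()
        return entry.value
    }
}
```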
Integration between Filebeat and Logstash, by DaeMyung Kang
Filebeat sends log files to Logstash. Several cases are described for integrating the two (a minimal configuration sketch follows the list):
1) A simple configuration where one log file is sent from Filebeat to Logstash and output to one file.
2) Another simple configuration where multiple log files are sent from Filebeat to Logstash using a wildcard, and output to one file.
3) An advanced configuration where multiple log files are sent from Filebeat to Logstash, and Logstash outputs each file to a separate file based on the original file name using filtering.
4) A more advanced configuration where log files are sent from Filebeat to Logstash, Logstash parses the timestamp and uses it as the output
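A minimal configuration sketch for the simplest case above (one log file shipped to Logstash and written to one output file); paths, ports, and hostnames are placeholders.

```
# filebeat.yml (illustrative): ship one log file to Logstash
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app/app.log
output.logstash:
  hosts: ["localhost:5044"]

# logstash.conf (illustrative): receive from Beats and write everything to one file
input {
  beats {
    port => 5044
  }
}
output {
  file {
    path => "/var/log/collected/app.out"
  }
}
```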
This document discusses Kafka timestamps and offsets. It explains that, by default, Kafka assigns each message the timestamp set by the sending client. Timestamps are stored in the timeindex file, which is searched with binary search when fetching logs by timestamp. A log segment typically rolls when the segment size exceeds the maximum, when the segment's age exceeds the maximum, or when its indexes become full. If a message is appended with a timestamp older than the last entry in the timeindex, no new index entry is written for it.
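A short Kotlin sketch of looking up offsets by timestamp with the standard Kafka consumer API, which the broker answers using the time index; the broker address, topic, group id, and time range are placeholders.

```kotlin
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import java.util.Properties

fun main() {
    val props = Properties().apply {
        put("bootstrap.servers", "localhost:9092")
        put("group.id", "timestamp-demo")
        put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
        put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    }
    KafkaConsumer<String, String>(props).use { consumer ->
        val tp = TopicPartition("logs", 0)
        val oneHourAgo = System.currentTimeMillis() - 60 * 60 * 1000
        // Ask the broker for the earliest offset whose timestamp is >= the given one.
        val offsets = consumer.offsetsForTimes(mapOf(tp to oneHourAgo))
        offsets[tp]?.let { println("offset=${it.offset()} timestamp=${it.timestamp()}") }
    }
}
```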
This document discusses how Kafka handles timestamps and offsets. It explains that Kafka maintains offset-based and time-based indexes to allow fetching log data by offset or by timestamp. As new log records are appended, the indexes are updated with the largest offset and timestamp seen so far. If a record arrives with a timestamp older than the largest timestamp already in the time index, Kafka still appends the record but does not add a new time index entry for it.
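An illustrative in-memory sketch of that behaviour (not Kafka's actual classes): entries are appended only when the timestamp advances, and lookups use binary search.

```kotlin
// Mirrors the described behaviour of a time index: append-only when the timestamp
// is larger than the last entry; lookup returns the offset of the first entry
// whose timestamp is at or after the target.
class TimeIndex {
    private val timestamps = mutableListOf<Long>()
    private val offsets = mutableListOf<Long>()

    fun maybeAppend(timestamp: Long, offset: Long) {
        if (timestamps.isEmpty() || timestamp > timestamps.last()) {
            timestamps.add(timestamp)
            offsets.add(offset)
        }
        // Older timestamps are ignored here; the record itself is still in the log.
    }

    fun lookup(target: Long): Long? {
        var lo = 0
        var hi = timestamps.size - 1
        var result: Long? = null
        while (lo <= hi) {
            val mid = (lo + hi) / 2
            if (timestamps[mid] >= target) { result = offsets[mid]; hi = mid - 1 } else lo = mid + 1
        }
        return result
    }
}
```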
This document discusses Redis access control and the Redis ACL protocol version 1 (RCP1). It gives background on the security issues that arise when Redis and Memcached servers are exposed publicly without authentication. RCP1 aims to address the limitations of the existing requirepass authentication by defining user permissions through command groups and implementing access control with bit arrays. The presenter then demonstrates RCP1.
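An illustrative Kotlin sketch of the bit-array idea: each command maps to a bit position, a command group is a set of commands, and a user's permissions are one bit set. The command names and groups here are examples, not the actual RCP1 definitions.

```kotlin
import java.util.BitSet

// Hypothetical command table: each command name gets a fixed bit index.
object Commands {
    val index: Map<String, Int> = listOf("get", "set", "del", "flushall", "config")
        .withIndex()
        .associate { (i, name) -> name to i }
}

class UserAcl {
    private val allowed = BitSet(Commands.index.size)

    // Granting a command group just sets one bit per command in the group.
    fun allowGroup(commands: List<String>) {
        commands.forEach { allowed.set(Commands.index.getValue(it)) }
    }

    // Checking a permission is a single bit lookup; unknown commands are denied.
    fun canRun(command: String): Boolean =
        Commands.index[command]?.let { allowed.get(it) } ?: false
}

fun main() {
    val readOnlyUser = UserAcl()
    readOnlyUser.allowGroup(listOf("get"))        // an example "read" command group
    println(readOnlyUser.canRun("get"))           // true
    println(readOnlyUser.canRun("flushall"))      // false: admin commands stay blocked
}
```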