The document discusses techniques for delivering server-side events to the browser: short polling, long polling, Server-Sent Events, and WebSockets. Code examples show implementations using WebSockets, EventSource, and long polling backed by Redis pub/sub, along with fallback strategies for when the main push infrastructure is unavailable.
JS Fest 2018. Martin Chaov. SSE vs WebSockets vs Long PollingJSFestUA
If you have a huge amount of data to deliver quickly, you might have tried using web sockets to do so. However, sockets are hard to maintain and scale, not to mention multiplex. In this presentation I compare three methods of delivering data to the front-end. Server-Sent Events give you the ability to deliver your content with less overhead on the infrastructural side. In this talk I will cover SSE, Long-Polling, and WebSockets with their pros and cons, including a technical demo. We will also touch on connectionless push and available optimizations for the mobile network.
8. Server-Side Events
• Inform users about stuff happening while they are
using the site
• Edits made to the current resource by other people
• Chat Messages
• Mobile Devices interacting with the Account
9. Additional Constraints
• Must not lose events
• Events must be unique
• Must work with shared sessions
• Separate channels per user
• Must work* even when hand-written daemons are down
• Must work* in development without massaging daemons
10. Not losing events
• Race condition between event happening and infrastructure coming up on page load
• Need to persist events
• Using a database
• Using a sequence (auto increment ID) to identify last sent event
• Falling back to timestamps if not available (initial page load)
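The id/timestamp fallback above can be sketched as a small client-side helper. This is a hypothetical illustration (names like makeResumeCursor are not from the deck), not the deck's code:

```javascript
// Tracks the resume point for event sync: prefer the sequence id of the
// last event we handled; before any event has arrived (initial page load),
// fall back to the page-load timestamp, as the slide describes.
function makeResumeCursor(pageLoadTimestamp) {
  var lastId = null;
  return {
    // Call this for every event we successfully process.
    note: function (event) { lastId = event.id; },
    // Parameters for the next "events since..." request.
    params: function () {
      return lastId !== null
        ? { since_id: lastId }
        : { since_ts: pageLoadTimestamp };
    }
  };
}
```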
13. Short Polling
• Are we there yet?
• Are we there yet?
• Are we there yet?
• And now?
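A minimal sketch of that "are we there yet?" loop, assuming a hypothetical fetchSince(lastEventId) callback that resolves to an array of new events:

```javascript
// Short polling: ask at a fixed interval whether anything new happened.
// Most requests come back empty, which is exactly the problem.
function makeShortPoller(fetchSince, onEvent) {
  var lastEventId = 0;
  // In a browser you would drive this with setInterval(tick, 2000).
  return function tick() {
    return fetchSince(lastEventId).then(function (events) {
      events.forEach(function (evt) {
        onEvent(evt);
        lastEventId = evt.id; // remember how far we got
      });
      return lastEventId;
    });
  };
}
```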
14. Long Polling
• Send a Query to the Server
• Have the server only* reply when an event is available
• Keep the connection open otherwise
• Response means: event has happened
• Have the client reconnect immediately
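The steps above can be sketched as a small client loop. This is a hedged illustration: fetchEvents(lastEventId) stands in for the request the server holds open until an event is available, and rounds bounds the loop only so the sketch terminates:

```javascript
// Long-polling loop: send a query, let the server answer only when an
// event is available, handle the response, then reconnect immediately.
async function longPoll(fetchEvents, onEvent, rounds) {
  let lastEventId = 0;
  for (let i = 0; i < rounds; i++) {
    // The server holds this call open until events exist.
    const events = await fetchEvents(lastEventId);
    for (const evt of events) {
      onEvent(evt);
      lastEventId = evt.id; // a response means: event has happened
    }
    // Loop continues: the client reconnects immediately.
  }
  return lastEventId;
}
```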
19. index.html
<script>
(function(){
    var channel = new EventChannel();
    var log_ws = $('#websockets');
    $(channel).bind('cheese_created', function(e){
        log_ws.prepend($('<li>').text(
            e.pieces + ' pieces of ' + e.cheese_type
        ));
    });
})();
</script>
23. publish.js
• Creates between 5 and 120 pieces of random Swiss cheese
• Publishes an event about this
• We’re using redis as our Pub/Sub mechanism, but you could use other solutions too

var cheese_types = ['Emmentaler', 'Appenzeller', 'Gruyère', 'Vacherin', 'Sprinz'];

function create_cheese(){
    return {
        pieces: Math.floor(Math.random() * 115) + 5,
        cheese_type: cheese_types[Math.floor(
            Math.random() * cheese_types.length
        )]
    };
}

var cheese_delivery = create_cheese();
publish(cheese_delivery);
25. Server
• Do not try this at home
• Use a library. You might know of socket.io – me personally, I used ws.
• Our code: only 32 lines.
26. This is it
var WebSocketServer = require('ws').Server;
var redis = require('redis');

var wss = new WebSocketServer({port: 8080});
wss.on('connection', function(ws) {
    var client = redis.createClient(6379, 'localhost');
    ws.on('close', function(){
        client.end();
    });
    client.select(2, function(err, result){
        if (err) {
            console.log("Failed to set redis database");
            return;
        }
        client.subscribe('channels:cheese');
        client.on('message', function(chn, message){
            ws.send(message);
        });
    });
});
27. Actually, this is the meat
client.subscribe('channels:cheese');
client.on('message', function(chn, message){
    ws.send(message);
});
28. And here’s the client
(function(window){
    window.EventChannelWs = function(){
        var socket = new WebSocket("ws://localhost:8080/");
        var self = this;
        socket.onmessage = function(evt){
            var event_info = JSON.parse(evt.data);
            var jq_event = jQuery.Event(event_info.type, event_info.data);
            $(self).trigger(jq_event);
        };
    };
})(window);
32. Sample was very simple
• No synchronisation with server for initial event
• No fallback when the web socket server is down
• No reverse proxy involved
• No channel separation
35. Powering Your 39 Lines
• 6K lines of JavaScript code
• Plus 3.3K lines of C code
• Plus 508 lines of C++ code
• Which is the body that you actually run (excluding tests and benchmarks)
• Some of which is redundant because of NPM
36. WebSockets are a bloody mess™
• RFC6455 is 71 pages long
• Adding a lot of bit twiddling to intentionally break
proxy servers
• Proxies that work might only actually work*
• Many deployments require a special port to run over
39. Client
var cheese_channel = new EventSource(url);
var log_source = $('#eventsource');
cheese_channel.addEventListener('cheese_created', function(e){
    var data = JSON.parse(e.data);
    log_source.prepend($('<li>').text(
        data.pieces + ' pieces of ' + data.cheese_type
    ));
});
43. Server
• Keeps the connection open
• Sends blank-line separated groups of key/value pairs as events happen
• Can tell the client how long to wait when reconnecting
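That "blank-line separated groups of key/value pairs" format is simple enough to render by hand. A sketch of what the server writes per event (renderSseEvent is a hypothetical helper, not the deck's code):

```javascript
// Render one Server-Sent Event in the EventSource wire format:
// optional retry/id/event fields, one or more data: lines, and a
// terminating blank line that ends the group.
function renderSseEvent(fields) {
  var out = '';
  if (fields.retry !== undefined) out += 'retry: ' + fields.retry + '\n'; // reconnect delay (ms)
  if (fields.id !== undefined) out += 'id: ' + fields.id + '\n';          // resume point for the client
  if (fields.event !== undefined) out += 'event: ' + fields.event + '\n';
  String(fields.data).split('\n').forEach(function (line) {
    out += 'data: ' + line + '\n'; // multi-line payloads become several data: fields
  });
  return out + '\n'; // the blank line terminates the event
}
```

The retry field is how the server "can tell the client how long to wait when reconnecting"; the id field feeds the Last-Event-ID header the browser sends on reconnect.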
44. Disillusioning
• Bound to the 6-connections per host rule
• Still needs manual synchronising if you don’t want to lose events
• Browser support is as always
47. I like it
• Works even with IE6 (god forbid you have to do this)
• Works fine with proxies
• On both ends
• Works fine over HTTP
• Needs some help due to the connection limit
• Works even when your infrastructure is down*
48. Production code
• The following code samples form the basis of the initial demo
• It’s production code
• No support issues caused by this
• Runs fine in a developer-hostile environment
50. Synchronising using the database
events_since_id = (channel, id, cb) ->
    q = """
        select * from events
        where channel_id = $1 and id > $2
        order by id asc
    """
    query q, [channel, id], cb

events_since_time = (channel, ts, cb) ->
    q = """
        select * from events o
        where channel_id = $1
        and ts > (SELECT TIMESTAMP WITH TIME ZONE 'epoch'
            + $2 * INTERVAL '1 second'
        )
        order by id asc
    """
    query q, [channel, ts], cb
51. The meat
handle_subscription = (c, message)->
    fetch_events channel, last_event_id, (err, evts)->
        return http_error 500, 'Failed to get event data' if err
        abort_processing = write res, evts, true
        last_event_id = evts[evts.length-1].id if (evts and evts.length > 0)
        if abort_processing
            unsubscribe channel, handle_subscription
            clear_waiting()
            res.end()

fetch_events channel, last_event_id, (err, evts)->
    return http_error res, 500, 'Failed to get event data: ' + err if err
    last_event_id = evts[evts.length-1].id if (evts and evts.length > 0)
    if waiting() or (evts and evts.length > 0)
        abort_processing = write(res, evts, not waiting())
        if waiting() or abort_processing
            unsubscribe channel, handle_subscription
            res.end()
    set_waiting()
    subscribe channel, handle_subscription
52.
if waiting() or (evts and evts.length > 0)
    abort_processing = write(res, evts, not waiting())
    if waiting() or abort_processing
        unsubscribe channel, handle_subscription
        res.end()
• If events are pending
• Or if there’s already a connection waiting for the same channel
• Then return the event data immediately
• And tell the client when to reconnect
• The abort_processing mess is because of support for both EventSource and long-polling
57. handle_subscription = (c, message)->
fetch_events channel, last_event_id, (err, evts)->
return http_error 500, 'Failed to get event data' if err
abort_processing = write res, evts, true
last_event_id = evts[evts.length-1].id if (evts and evts.length > 0)
if abort_processing
unsubscribe channel, handle_subscription
clear_waiting()
res.end()
set_waiting()
subscribe channel, handle_subscription
Waiting
LOL - Boolean parameter!!!
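The waiting path can be sketched with an in-memory pub/sub stand-in (the real daemon uses Redis pub/sub; `Channel` and `waitForEvents` are assumed names for this sketch). Each published message triggers a re-fetch of events since `last_event_id`, and the handler unsubscribes itself once it has something to deliver.

```javascript
// Tiny in-memory stand-in for a pub/sub channel.
class Channel {
  constructor() { this.subscribers = new Set(); }
  subscribe(fn) { this.subscribers.add(fn); }
  unsubscribe(fn) { this.subscribers.delete(fn); }
  // Iterate over a copy so handlers may unsubscribe while publishing.
  publish(msg) { for (const fn of [...this.subscribers]) fn(msg); }
}

// Subscribe, and on each message re-fetch events newer than lastEventId.
// One delivery per long-poll request: unsubscribe before responding.
function waitForEvents(channel, fetchEvents, lastEventId, onEvents) {
  const handleSubscription = () => {
    const evts = fetchEvents(lastEventId);
    if (evts.length > 0) {
      lastEventId = evts[evts.length - 1].id;
      channel.unsubscribe(handleSubscription);
      onEvents(evts);
    }
  };
  channel.subscribe(handleSubscription);
}
```

Re-fetching from the store on every notification (rather than trusting the pub/sub payload) is what makes `last_event_id` catch-up possible.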
63. Fallback
• Our frontend code connects to /e.php
• Our reverse proxy redirects that to the node
daemon
• If that daemon is down, or there’s no reverse proxy at all,
there’s an actual honest-to-god /e.php …
• …which follows the exact same interface but is
always* short-polling
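The fallback's behaviour can be sketched as a handler that speaks the same interface but never holds the connection open. This is a hypothetical JavaScript sketch of what the PHP fallback does (the `shortPollResponse` helper and the 0/10-second reconnect hints are assumptions; only the `x-ps-reconnect-in` header itself appears in the client code).

```javascript
// Short-polling fallback: always answer immediately with whatever is
// pending and a reconnect hint, never wait for new events.
function shortPollResponse(fetchEvents, lastEventId) {
  const evts = fetchEvents(lastEventId); // assumed synchronous for the sketch
  return {
    status: 200,
    // Events delivered: client may reconnect right away to drain the rest.
    // Nothing pending: back off so short polling stays cheap.
    headers: { 'x-ps-reconnect-in': evts.length > 0 ? '0' : '10' },
    body: JSON.stringify(evts),
  };
}
```

Because the client only looks at the response body and the reconnect header, it cannot tell whether it is talking to the daemon or the fallback.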
64. Client is more complicated
poll: =>
url = "#{@endpoint}/#{@channel}/#{@wait_id}"
$.ajax url,
cache: false,
dataType: 'json',
headers:
'Last-Event-Id': @last_event_id
success: (data, s, xhr) =>
return unless @enabled
@fireAll data
reconnect_in = parseInt xhr.getResponseHeader('x-ps-reconnect-in'), 10
reconnect_in = 10 unless reconnect_in >= 0
setTimeout @poll, reconnect_in*1000 if @enabled
error: (xhr, textStatus, error) =>
return unless @enabled
# 504 means nginx gave up waiting. This is totally to be
# expected and we can just treat it as an invitation to
# reconnect immediately. All other cases are likely bad, so
# we remove a bit of load by waiting a really long time
# 12002 is the IE-proprietary way to report a WinInet timeout
# if it was registry-hacked to a low ReadTimeout.
# This isn't a server error, so we can just reconnect.
rc = if (xhr.status in [504, 12002]) || (textStatus == 'timeout') then 0 else 10000
setTimeout @poll, rc if @enabled
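The reconnect-delay rules buried in those callbacks can be pulled out into two small functions. A JavaScript sketch with assumed helper names (the slides inline this logic in the `$.ajax` callbacks instead):

```javascript
// Success path: honour the server's x-ps-reconnect-in header, falling
// back to 10 seconds when the header is missing, malformed, or negative.
function successDelayMs(headerValue) {
  let reconnectIn = parseInt(headerValue, 10);
  if (!(reconnectIn >= 0)) reconnectIn = 10; // NaN or negative -> default
  return reconnectIn * 1000;
}

// Error path: 504 (nginx gave up waiting) and 12002 (IE/WinInet timeout)
// are expected long-poll timeouts, so reconnect immediately; anything
// else is likely a real problem, so back off 10 seconds to shed load.
function errorDelayMs(status, textStatus) {
  const expectedTimeout =
    status === 504 || status === 12002 || textStatus === 'timeout';
  return expectedTimeout ? 0 : 10000;
}
```

The `!(reconnectIn >= 0)` comparison is deliberate: it is the one-liner way to catch both `NaN` (from a missing header) and negative values.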
70. So. Why a daemon?
• Evented architecture lends itself well to many open
connections never really using CPU
• You do not want to long-poll with forking
architectures
• Unless you have unlimited RAM
83. • If your clients use browsers (and IE10+)
• and if you have a good reverse proxy
• and if you can use SSL
• then use WebSockets
• Otherwise use long polling
• Also, only use one - don’t mix - not worth the effort
• EventSource, frankly, sucks
84. Thank you!
• @pilif on twitter
• https://github.com/pilif/server-side-events
Also: We are looking for a front-end designer with CSS
skills and a backend developer. If you are interested or
know somebody, come talk to me