19. Sharing data models and full in-memory state between the client and the server is:
    1. Fun, convenient
    2. Hard to secure, not scalable
    3. Great for prototyping
20. You can use redis as shared memory between processes/languages (you just have to agree on how to use it)
21. Thoonk is a contract with redis
    github.com/andyet/thoonk.js
    github.com/andyet/
27. How it works: initial state transfer
    1. User authenticates; we retrieve and emit the team state of the teams they’re on, and subscribe them to updates for those feeds.
29. How it stays in sync
    1. Client makes RPC calls based on user interactions.
30. 2. Server validates the RPC call and puts it into a thoonk (redis) job queue.
31. 3. Workers take jobs; the necessary models are pulled and inflated into Capsule models. Permissions are verified by the inflated model. Resulting objects are written back to thoonk feeds.
32. 4. Subscribed thoonk clients (on any node process connected to the same redis db) transmit change events to the browser.
33. 5. Changes are applied to the corresponding models in the client, and the UI responds.
I’m Henrik Joreteg. I’m one of the owners at &yet.

We love building realtime web apps, which is why we decided to put on this conference.

I’ve written a lot of javascript on several apps we’ve shipped in the last year or so. I’m gonna talk a bit about what we’ve learned.
The three apps I’m gonna talk about are:
Here’s what it looks like: the queue of people waiting on the left, internal chat on the right, and active chats in windows in the middle.
Observable models are absolutely a necessity for building complex client-side apps, especially when you expect things to happen without user interaction.

JS is great at events; use an evented model system, such as Backbone.

Your HTML should just respond to changes in your models, not the other way around.
People talk a lot about loading pages as quickly as possible on the web.

This certainly isn’t quite as big of a deal with a single-page app, where you load it once and then keep it there.

However, having to auth the browser session, create the page, and then re-authenticate with XMPP is a bit on the slow side.
Also, for any custom pubsub stuff, you end up writing a plugin layer to parse the XML just to get it to a state where your app has the info it needs.

There are other solutions where you do a bit more of that logic on the server, and you can start with the info you need in a form that’s easier to deal with.
The second case is Recon Dynamics.
Very complex single-page app, with lots of items on the page.

A live asset-tracking system, like something out of a spy movie.

They have this amazing, incredibly accurate hardware with great battery life.

With the app you can draw geofences and get various types of alerts. Really cool stuff.

They are currently launched and live in Boise.
One of the biggest challenges with this app was the fact that not everything was live.

So we have a job system that is processing incoming data and dumping it into a normal relational database.

Mixed sources of state proved a bit challenging.

So we use kind of a traditional ajax approach to get *most* of the data, but as a user filters down the list of assets they care about, we then subscribe to the live updates for those assets.

It works, but despite a unified model system, having updates from multiple data sources certainly complicated things.
When you’ve got a dataset bigger than you want to ship to the client all at once, you need a way to selectively retrieve and subscribe to updates for the items you care about.
A browser can handle a pretty significant amount of data, but it doesn’t make sense to send a ton of stuff if it’s not being used. Also, a lot of the things we care about as being “live” just become a historical log.

Those logs don’t need to be “live” forever.

Having a means of retrieving the history is good.

Analyze your data and choose what you’re willing to ship to the browser at once, what actually needs to be “live”, and what just needs to be retrievable somehow.
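A sketch of the “subscribe only to what the user filtered down to” idea. `applyFilter` and the subscribe/unsubscribe callbacks are hypothetical names for illustration, not part of any real API:

```javascript
// Track which asset ids we currently hold live subscriptions for.
const subscriptions = new Set();

// Reconcile live subscriptions against the user's current filter:
// unsubscribe anything that left the filter, subscribe anything new.
function applyFilter(filteredIds, subscribe, unsubscribe) {
  const wanted = new Set(filteredIds);
  for (const id of [...subscriptions]) {
    if (!wanted.has(id)) {
      unsubscribe(id);
      subscriptions.delete(id);
    }
  }
  for (const id of wanted) {
    if (!subscriptions.has(id)) {
      subscribe(id);
      subscriptions.add(id);
    }
  }
}

// Record subscribe/unsubscribe calls so the effect is visible.
const log = [];
applyFilter(['a', 'b'], id => log.push('+' + id), id => log.push('-' + id));
applyFilter(['b', 'c'], id => log.push('+' + id), id => log.push('-' + id));
```

The rest of the data stays behind a plain request/response API; only the filtered set pays the cost of being live.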
The third app is &!
I wrote a little tool called Capsule.js that lets you mirror the state of things on the client and the server.

Essentially, this lets you quickly prototype a realtime app without really having to build a solid, scalable server infrastructure.

BUT... while it’s not very secure or scalable, it can buy you some agility while you’re still figuring out what models you actually need.
This is a big one...

You can use Redis as shared memory between processes or platforms.

If you interact with Redis according to a defined set of rules, you can essentially use it as your go-between for different processes.

Unlike building something that just stores models in memory on the server, Redis can scale quite well.
For that, we use Thoonk.

Thoonk is essentially a contract for how a library should interact with redis. It was written by Nathan Fritz and Lance Stout from our team.

There are currently implementations for node and Python.
We use thoonk like an evented database. Essentially, what you would normally put into a table in a relational database becomes a “feed” in thoonk.

But unlike a database table, you can subscribe to a feed and be notified of changes.
This is the real magic...

Changes to thoonk, *FROM ANYWHERE*, will get pushed all the way out to the browser.

We can write a worker in C that every 20 minutes runs a job that pulls the active users on a team and publishes their status to the group. Or a bot that plops in the weather every morning.
Each collection of models has a corresponding feed in Thoonk.
Thoonk also has a job queue system.

So, as we’re building the API for &!, rather than just telling other services to modify things in thoonk, we actually have a job system that handles any application logic and performs any related operations.

For example, if you delete a task from andbang that’s currently marked as the task you’re working on, we need to update the user model so it doesn’t show the user as still working on a task.

So, anything we build on top of &! will just add jobs to the job system. Thoonk handles the rest.
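The job-queue idea, including that delete-a-task example, can be sketched like this. The job names and model shapes are hypothetical, and the queue here is an in-memory array rather than thoonk’s redis-backed queue:

```javascript
// Callers only enqueue jobs; the worker owns all application logic.
const users = { henrik: { workingOn: 't1' } };
const tasks = { t1: { title: 'ship it' } };
const queue = [];

function enqueue(job) {
  queue.push(job);
}

// Worker loop: drain the queue, applying related side effects in one place.
function work() {
  while (queue.length) {
    const job = queue.shift();
    if (job.type === 'deleteTask') {
      delete tasks[job.taskId];
      // Related logic lives here too: clear anyone working on that task.
      for (const user of Object.values(users)) {
        if (user.workingOn === job.taskId) user.workingOn = null;
      }
    }
  }
}

enqueue({ type: 'deleteTask', taskId: 't1' });
work();
```

Because every consumer goes through the queue, no caller needs to know about the follow-on edits; the worker enforces them consistently.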
We still use Capsule models on the server as a way to encapsulate and re-use model-level behavior.

For example, determining whether a user is allowed to edit a model. We run the same permission check to know whether to display the form that lets them edit something in the interface of the app as we do to actually determine whether they’re allowed to do it when a job comes through the job system.

We grab the model’s attributes from our thoonk feed, inflate our model, and run that permission check.

That’s a simple example where you don’t gain that much, but as your app grows it’s really nice to be able to keep all that logic in one place.
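A minimal sketch of sharing a permission check through an inflated model. The class, attributes, and `canEdit` method are illustrative, not the real Capsule API:

```javascript
// Inflate plain feed attributes into a model with behavior attached.
class TaskModel {
  constructor(attrs) {
    this.attrs = attrs; // raw attributes as stored in the feed
  }
  // The one place the permission rule lives, shared by client and worker.
  canEdit(userId) {
    return this.attrs.ownerId === userId ||
           this.attrs.collaborators.includes(userId);
  }
}

// Attributes as they'd come out of a feed (hypothetical shape).
const raw = { id: 't1', ownerId: 'henrik', collaborators: ['fritzy'] };
const task = new TaskModel(raw);

const showEditForm = task.canEdit('fritzy');   // client: show the form?
const allowJob = task.canEdit('someoneElse');  // worker: apply the job?
```

Both sides inflate from the same attributes, so the rule can’t drift between what the UI offers and what the server enforces.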
Capsule imports the nested data structure it receives, and the views react and draw the app.
As the user clicks, navigates, and modifies data, RPC calls for those actions are sent to the server.
The server validates the RPC call, tacks on the currently logged-in user, and adds it to the corresponding job queue.
Workers take jobs and retrieve any models they need from other feeds, check permissions, or do other conditional logic by inflating models and running methods on them.

Modified objects are written back to thoonk feeds.
From there, the pub-sub system takes over.

Changes are propagated by redis to any subscribed thoonk clients; in our case, on whatever node process got it.
Those changes are shipped to the client with socket.io, where they’re applied and the UI reacts.

It may sound a bit arduous, but it works, and it’s quick. We can also add more worker processes as our scaling needs increase.

So, now for the fun stuff...