Seastar is an open-source C++ framework for building highly scalable, asynchronous distributed applications. It uses a shared-nothing architecture, with no locks and no thread context switches, to achieve linear scaling across cores. Applications built on Seastar can handle millions of connections and I/O operations in parallel. Its asynchronous programming model is based on promises and futures, with zero-copy networking and disk I/O for high performance.
2. SeaStar Technology
● New technology; runs on physical machines, VMs, Linux/OSv
● Multi-million IOPS, fully scalable
● Perfect building block for a database, filesystem, or cache
● Shared-nothing, fully asynchronous model
● Open source
5. Problems with today's programming model
+ Single-core performance (frequency, IPC) is no longer growing
+ Core counts grow, but they are hard to utilize; applications don't scale
+ Locks have costs even without contention
+ Data is allocated on one core, then copied and used on others
+ Software can't keep up with recent hardware (SSDs, line rate at 10 Gbps, NUMA, etc.)
[Diagram: traditional stack. Application threads run on top of the kernel's TCP/IP stack and scheduler, with per-thread queues, NIC queues, and shared memory.]
6. SeaStar Framework
Linear scaling with core count
+ One engine runs on each core
+ Shared-nothing per-core design
+ Fits the existing shared-nothing distributed application model
+ Full kernel bypass; supports zero-copy
+ No threads, no context switches, no locks
+ Instead, asynchronous lambda invocation
[Diagram: SeaStar's sharded stack, repeated once per core. Each core runs its own application shard, userspace TCP/IP stack, and task scheduler over a dedicated DPDK NIC queue, with SMP queues for cross-core messaging; the kernel isn't involved.]
7. SeaStar Framework Comparison
Traditional stack: lock contention, cache contention, NUMA unfriendly.
SeaStar's sharded stack: no contention, linear scaling, NUMA friendly.
[Diagram: the traditional stack (threads over the kernel's TCP/IP stack, scheduler, and shared memory) side by side with SeaStar's per-core sharded stacks (userspace TCP/IP, task scheduler, and a DPDK NIC queue per core, with SMP queues between cores; the kernel isn't involved).]
8. SeaStar handles millions of connections in parallel!
SeaStar's sharded stack: each CPU runs a scheduler over lightweight promise/task pairs. A promise is a pointer to an eventually computed value; a task is a pointer to a lambda function. Nothing is shared, and millions of parallel events can be in flight.
Traditional stack: each CPU runs a scheduler over threads. A thread is a function pointer, and each thread's stack is a byte array from 64 KB to megabytes. Context-switch cost is high, and large stacks pollute the caches.
[Diagram: per-CPU promise/task queues on the sharded stack vs. per-CPU threads with stacks on the traditional stack.]
11. F-P-C defined: Future
A future is the result of a computation that may not be available yet:
■ A data buffer from the network
■ A timer expiration
■ Completion of a disk write
■ A computation that requires the values of one or more other futures
12. F-P-C defined: Promise
A promise is an object or function that provides you with a future, with the expectation that it will eventually fulfil that future.
13. Basic future/promise
future<int> get();  // promises an int will be produced eventually
future<> put(int);  // promises to store an int

void f() {
    get().then([] (int value) {
        put(value + 1).then([] {
            std::cout << "value stored successfully\n";
        });
    });
}
14. Chaining
future<int> get();  // promises an int will be produced eventually
future<> put(int);  // promises to store an int

void f() {
    get().then([] (int value) {
        return put(value + 1);
    }).then([] {
        std::cout << "value stored successfully\n";
    });
}
16. Zero copy friendly (2)
future<size_t> connected_socket::write(temporary_buffer);
■ The future becomes ready when the TCP window allows sending more data (usually immediately)
■ The temporary_buffer is discarded after the data is ACKed
■ Discarding can call delete[] or decrement a reference count