© 2019 SPLUNK INC.
Apache Pulsar @splunk
Nov 2020
Karthik Ramasamy
Splunk
Karthik Ramasamy
Senior Director of Engineering
@karthikz
streaming @splunk | ex-CEO of @streamlio | co-creator of @heronstreaming | ex @Twitter | Ph.D
During the course of this presentation, we may make forward-looking statements
regarding future events or plans of the company. We caution you that such statements
reflect our current expectations and estimates based on factors currently known to us
and that actual events or results may differ materially. The forward-looking statements
made in this presentation are being made as of the time and date of its live
presentation. If reviewed after its live presentation, it may not contain current or
accurate information. We do not assume any obligation to update any forward-looking
statements made herein.
In addition, any information about our roadmap outlines our general product direction
and is subject to change at any time without notice. It is for informational purposes only,
and shall not be incorporated into any contract or other commitment. Splunk undertakes
no obligation either to develop the features or functionalities described or to include any
such feature or functionality in a future release.
Splunk, Splunk>, Data-to-Everything, D2E and Turn Data Into Doing are trademarks and registered trademarks of Splunk Inc. in the
United States and other countries. All other brand names, product names or trademarks belong to their respective owners. © 2020
Splunk Inc. All rights reserved
Forward-Looking Statements
Agenda 1) Introduction to Splunk
2) Streaming system requirements
3) How Pulsar satisfies the requirements
4) Apache Pulsar at Splunk
5) Questions?
The Data-to-Everything Platform
[Diagram: the platform bridges Data Lakes, Master Data Management, ETL, Point Data Management Solutions, Data Silos, and Business Processes, serving IT, Security, and DevOps]
Core of Emerging Use Cases
Messaging / Streaming Systems sit at the core of:
✦ Streaming data transformation
✦ Data distribution
✦ Real-time analytics
✦ Real-time monitoring and notifications
✦ IoT analytics
✦ Event-driven workflows
✦ Interactive applications
✦ Log processing and analytics
Streaming System Requirements
Scalability · Durability · Fault Tolerance · High Availability · Sharing & Isolation · Messaging Models · Client Languages · Persistence · Type Safety · Deployment in k8s
Streaming System Requirements
Operability · TCO · Observability · Disaster Recovery · Ecosystem · Adoption · Community · Licensing
Requirement #1 - Scalability
✦ Traffic can vary wildly while the system is in production
✦ The system needs to scale up with no effect on publish/consume throughput and latency
✦ Support for linear increase/decrease in publish/consume throughput as nodes are added or removed
✦ Automatic spreading of load to new machines as nodes are added
✦ Scalability across different dimensions - serving and storage
Scalability
[Diagram: producers and consumers connect to a messaging layer of brokers; event storage is a layer of bookies; function processing runs on workers]
✦ Independent layers for processing, serving and storage
✦ Messaging and processing built on Apache Pulsar
✦ Storage built on Apache BookKeeper
Requirement #2 - Durability
✦ Splunk applications have different durability requirements
✦ Persistent Durability - No data loss in the presence of node failures or entire cluster failure - e.g., security & compliance
✦ Replicated Durability - No data loss in the presence of limited node failures - e.g., machine logs
✦ Transient Durability - Data loss acceptable in the presence of failures - e.g., metrics data
Durability
[Diagram: a producer writes through a broker to three bookies; each bookie appends to its journal with an fsync]
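On the storage side, the fsync-on-journal behavior shown above maps to BookKeeper's bookie configuration. A minimal sketch (directory paths are placeholders):

```properties
# bookkeeper.conf (bookie) - journal settings behind the durability slide.
# journalSyncData=true forces an fsync of the journal to disk before a
# write is acknowledged - this backs the "persistent durability" class;
# relaxing it trades durability for latency.
journalDirectory=/mnt/journal
journalSyncData=true
# Ledger (long-term segment) storage lives on separate disks
ledgerDirectories=/mnt/ledgers
```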
Requirement #3 - Fault Tolerance
✦ Ability of the system to function under component failures
✦ Ideally without manual intervention, up to a certain degree of failure
Pulsar Fault Tolerance
[Diagram: a serving layer of brokers over a storage layer of bookies; each bookie stores an interleaved set of segments (Segment 1, Segment 2, ..., Segment n)]
✦ Broker Failure
✦ Topic is reassigned to an available broker based on load
✦ The new broker can reconstruct the previous state consistently
✦ No data needs to be copied
✦ Bookie Failure
✦ Immediate switch to a new node
✦ Background process copies segments to other bookies
to maintain replication factor
Requirement #4 - High Availability
✦ The system should continue to function in the cloud or on-prem under the following conditions, if applicable
✦ When two nodes/instances fail
✦ When an availability zone or a rack fails
Pulsar High Availability
[Diagram: brokers (serving) and bookies (storage) spread across Zone A, Zone B, and Zone C, with segments replicated across the zones]
✦ Node Failures
✦ Broker failures
✦ Bookie failures
✦ Handled similarly to the respective component failures
✦ Zone/Rack Failures
✦ Bookies provide rack awareness
✦ Brokers replicate data to different racks/zones
✦ When a zone/rack fails, data remains available in other zones
Requirement #5 - Sharing and Isolation
✦ System should have the capabilities to
✦ Share many applications on the same cluster for cost and manageability purposes
✦ Isolate different applications on their own machines in the same cluster when needed
Sharing and Isolation
[Diagram: one Apache Pulsar cluster shared by multiple tenants - e.g., Product Safety (ETL topics: Account History, User Clustering; Fraud Detection topic: Risk Classification), Marketing Campaigns (ETL topics: Budgeted Spend, Demographic Classification, Location Resolution), and Data Serving (Microservice topic: Customer Authentication) - with per-tenant storage footprints of 10 TB, 7 TB, and 5 TB]
✦ Software isolation
Storage quotas, flow control, back pressure, rate limiting
✦ Hardware isolation
Constrain some tenants on a subset of brokers/bookies
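The software-isolation knobs above are set per namespace with `pulsar-admin`. A hedged sketch (tenant and namespace names are hypothetical; check flag spellings against your Pulsar version):

```shell
# Storage quota: hold producers once a namespace's backlog reaches 10 GB
bin/pulsar-admin namespaces set-backlog-quota product-safety/etl \
  --limit 10G --policy producer_request_hold

# Rate limiting: cap dispatch to consumers in this namespace
bin/pulsar-admin namespaces set-dispatch-rate product-safety/etl \
  --msg-dispatch-rate 10000 --dispatch-rate-period 1
```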
Requirement #6 - Client Languages
✦ Java, Python, Go, C++, and C clients are officially supported by the project
Requirement #7 - Multiple Messaging Models
✦ Splunk applications require different consuming models
✦ Collect once and deliver once - e.g., process an S3 file and ingest it into an index
✦ Receive data once and deliver many times - e.g., multiple pipelines sharing the same data for different types of processing
✦ Avoid two systems, if possible, from a cost and operations perspective
✦ Avoid any additional infra-level code, if possible, that emulates one semantic model on top of another system
Pulsar Messaging Models
✦ Messaging (streaming): Exclusive Subscription, Failover Subscription
✦ Queuing: Shared Subscription, Key Shared Subscription
Native support avoids running two systems and the extra infrastructure code that requires maintenance
Messaging Models - Streaming
[Diagram: Producers 1 and 2 publish M0-M4 to a Pulsar topic/partition; Subscription A (Exclusive) delivers every message to Consumer 1, and Consumer 2's connection attempt is rejected]
Messaging Models - Streaming
[Diagram: Producers 1 and 2 publish M0-M4 to a Pulsar topic/partition; Subscription B (Failover) delivers to Consumer 1, with Consumer 2 taking over in case of failure in Consumer 1]
Messaging Models - Queuing
[Diagram: Producers 1 and 2 publish M0-M4 to a Pulsar topic/partition; Subscription C (Shared) distributes the messages across Consumers 1-3; traffic is equally distributed across consumers]
Messaging Models - Queuing
[Diagram: Producers 1 and 2 publish messages keyed K1-K3 to a Pulsar topic/partition; Subscription D (Key Shared) distributes traffic across Consumers 1-3 based on key]
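The Key Shared behavior above can be sketched as a toy routing function. This is an illustration only (Pulsar's real implementation uses hash ranges over consumers), but it shows the invariant that matters: messages with the same key always land on the same consumer, preserving per-key order while spreading load.

```python
# Toy model of Key Shared routing (illustration, not Pulsar's algorithm):
# a stable hash of the key picks the consumer, so the same key always
# maps to the same consumer.

def route(key: str, consumers: list) -> str:
    h = sum(key.encode())  # simplistic stand-in for a real hash function
    return consumers[h % len(consumers)]

consumers = ["consumer-1", "consumer-2", "consumer-3"]
messages = [("K1", "M0"), ("K2", "M1"), ("K3", "M2"), ("K1", "M3"), ("K3", "M4")]

assignments = {msg: route(key, consumers) for key, msg in messages}
# Messages sharing a key are delivered to the same consumer:
assert assignments["M0"] == assignments["M3"]  # both keyed K1
```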
Selective vs Cumulative Acknowledgements
[Diagram: a stream M0-M14 consumed on a subscription. Selective: Ack(M7) and Ack(M12) acknowledge only those individual messages. Cumulative: Ack(M12) acknowledges all messages up to and including M12]
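The two acknowledgement modes can be modeled in a few lines. A toy sketch, not the client API: selective ack removes individual message IDs from the unacked set, while cumulative ack clears everything up to and including the given ID.

```python
# Toy model of selective vs cumulative acknowledgement on a subscription.

class Subscription:
    def __init__(self, last_id: int):
        self.unacked = set(range(last_id + 1))  # message IDs M0..M<last_id>

    def ack(self, msg_id: int):
        # Selective: acknowledge one message only
        self.unacked.discard(msg_id)

    def ack_cumulative(self, msg_id: int):
        # Cumulative: acknowledge everything up to and including msg_id
        self.unacked -= set(range(msg_id + 1))

sel = Subscription(14)
sel.ack(7); sel.ack(12)        # only M7 and M12 are acked; M6 etc. remain
cum = Subscription(14)
cum.ack_cumulative(12)         # M0..M12 are all acked; only M13, M14 remain
```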
Requirement #8 - Persistence
[Diagram: producers publish to a topic on hot storage; older segments are offloaded to cold storage; consumers read transparently from both tiers]
✦ Offload cold data to lower-cost storage (e.g.
cloud storage, HDFS)
✦ Manual or automatic (configurable threshold)
✦ Transparent to publishers and consumers
✦ Allows near-infinite event storage at low cost - e.g., for compliance and security
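Offload to cold storage is driven per namespace, or triggered manually per topic, via `pulsar-admin`. A hedged sketch with hypothetical tenant/namespace/topic names:

```shell
# Automatic: offload topics in this namespace once their data on
# bookies exceeds the configured threshold
bin/pulsar-admin namespaces set-offload-threshold my-tenant/my-ns --size 10G

# Manual: offload one topic's data beyond 10 GB right now
bin/pulsar-admin topics offload --size-threshold 10G \
  persistent://my-tenant/my-ns/my-topic
```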
Requirement #9 - Type Safety
✦ Splunk applications are varied
✦ One class requires a fixed schema
✦ Another class requires a fixed schema with evolution
✦ A third class requires flexibility - no schema, or schema handled at the application level
✦ Avoid bringing in another system for schema management
✦ Support for multiple different schema types
Pulsar Schema Registry
✦ Provides type safety to applications built on top of Pulsar
✦ Server side - the system enforces type safety and ensures that producers and consumers remain in sync
✦ The schema registry enables clients to upload data schemas on a per-topic basis
✦ Schemas dictate which data types are recognized as valid for that topic
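To make "fixed schema with evolution" concrete, here is a toy compatibility check in the spirit of what a schema registry enforces. This is not Pulsar's actual algorithm, just an illustration of one common rule: a new version may add a field only if it carries a default, and may not change an existing field's type.

```python
# Toy backward-compatibility check (illustration only).
# Schemas are dicts mapping field name -> {"type": ..., "default": ...?}.

def backward_compatible(old: dict, new: dict) -> bool:
    for name, spec in new.items():
        if name not in old and "default" not in spec:
            return False  # new required field breaks existing producers
        if name in old and old[name]["type"] != spec["type"]:
            return False  # changed type breaks existing consumers
    return True

v1 = {"user": {"type": "string"}}
v2 = {"user": {"type": "string"}, "region": {"type": "string", "default": ""}}
v3 = {"user": {"type": "string"}, "region": {"type": "string"}}

assert backward_compatible(v1, v2)      # added field with default: accepted
assert not backward_compatible(v1, v3)  # added required field: rejected
```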
Requirement #10 - Ease of Deployment in k8s
✦ Splunk uses k8s for orchestration
✦ The system should be easily deployable in k8s
✦ The surface area of the system exposed outside k8s should be minimal - a single endpoint backed by a load balancer
✦ Should be able to segregate the nodes receiving external traffic
✦ Should be flexible enough to deploy from CI/CD pipelines for testing and development
Pulsar Deployment in k8s
[Diagram: two layouts. In both, a load balancer (LB) fronts a set of proxies, which route to brokers backed by bookies storing segments. Aggregated Deployment runs the components together; Segregated Deployment separates the externally facing proxies from the rest]
Requirement #11 - Operability
✦ The system should stay online and continue to serve production traffic in the following scenarios
✦ OS upgrades
✦ Security patches
✦ Disk swapping
✦ Software upgrades
✦ Self-adjusting components
✦ Bookies turn themselves read-only when 90% of the disk is full
✦ A load manager balances traffic across brokers
Requirement #12 - Disaster Recovery
✦ Critical enterprise data flows through Splunk products
✦ Customers expect continuous availability in cloud / on-premise deployments
✦ Required to handle data center failures seamlessly
✦ Pulsar provides both
✦ Asynchronous Replication
✦ Synchronous Replication
Disaster Recovery - Async Replication
✦ Two independent clusters, in a primary/standby or primary/primary configuration
✦ Configured tenants and namespaces replicate to the standby
✦ Data published to the primary is asynchronously replicated to the standby
✦ Producers and consumers are restarted in the second datacenter upon primary failure
✦ With replicated subscriptions, consumers start close to where they left off
[Diagram: Datacenter A hosts the primary Pulsar cluster with active producers and consumers; Datacenter B hosts the standby cluster with standby producers and consumers; Pulsar replication flows between the clusters, each backed by its own ZooKeeper]
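Enabling async replication for a namespace is an admin-level operation. A hedged sketch (cluster, tenant, and namespace names are hypothetical; both clusters must already be registered with each other):

```shell
# Allow the tenant to span both datacenters' clusters
bin/pulsar-admin tenants create my-tenant --allowed-clusters dc-a,dc-b

# Replicate every topic in this namespace between dc-a and dc-b
bin/pulsar-admin namespaces set-clusters my-tenant/replicated \
  --clusters dc-a,dc-b
```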
Requirement #13 - Performance & TCO
✦ Splunk application requirements vary widely
✦ real-time (< 10 ms)
✦ near real-time (< a few mins)
✦ high throughput (ability to handle multiple PB/day in a single cluster)
✦ We conducted a detailed performance study comparing Pulsar with Kafka
Performance Experiments
Settings
AWS - i3.8xlarge
32 vCPU
244 GB of RAM
4 x 1,900 GB NVMe exposed as bonded RAID0
10 Gbps full duplex
7 Gbps dedicated EBS
Settings
20 - i3.8xlarge instances in two tainted groups
Pulsar tainted group - 15 instances, for running Pulsar/Kafka
Pulsar client tainted group - 5 instances, for producing/consuming traffic
Settings
Apache Pulsar: message size 1 KiB, batch size 128 KiB, max delay 1 ms
Apache Kafka: message size 1 KiB, batch size 128 KiB, linger time 1 ms
Open Messaging Benchmark
• Designed to measure performance of distributed messaging systems
• Supports various “drivers” (Kafka, Pulsar, RocketMQ, RabbitMQ, ActiveMQ Artemis, NATS, NSQ)
• Automated deployment in EC2
• Workloads configured through a YAML file
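A workload file looks roughly like this. Field names follow the Open Messaging Benchmark workload format, and the values are chosen to mirror the settings above; treat the exact field spellings as an assumption to verify against the benchmark repository.

```yaml
name: 1-topic-16-partitions-1kb
topics: 1
partitionsPerTopic: 16
messageSize: 1024          # 1 KiB messages, as in the settings slides
subscriptionsPerTopic: 1
consumerPerSubscription: 1
producersPerTopic: 1
producerRate: 50000        # target publish rate in msgs/s
consumerBacklogSizeGB: 0
testDurationMinutes: 15
```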
Open Messaging Benchmark
The coordinator takes the workload definition, propagates it to multiple workers, and collects and reports stats
Publish Latency - 1 GiB/s in - 1 GiB/s out
• Pulsar latency is consistently lower
• Varies 5-140x
[Chart: 50th/99th percentile publish latency for Pulsar EBS - With Journal, Pulsar EBS - No Journal, Pulsar NVMe - No Journal, Kafka EBS, Kafka NVMe]
Publish Latency - 1 GiB/s in - 3 GiB/s out
[Chart: 50th/99th percentile publish latency for Pulsar EBS - With Journal, Pulsar EBS - No Journal, Pulsar NVMe - No Journal, Kafka EBS, Kafka NVMe]
• Pulsar latency is consistently lower
• Varies 5-150x
• Pulsar EBS - With Journal, which guarantees durability, still has lower latency than Kafka without durability
Publish Latency - 3 GiB/s in - 3 GiB/s out
[Chart: 50th/99th percentile publish latency for Pulsar EBS - No Journal, Pulsar NVMe - No Journal, Kafka EBS, Kafka NVMe]
• Pulsar latency is consistently lower
• Varies 5-90x
Pulsar provides consistently
5x-50x lower latency
Brokers/Bookies Used - 1 GiB/s in - 1 GiB/s out
• Pulsar uses 20-40% fewer brokers + bookies than Kafka
• Due to higher bandwidth utilization
• Pulsar EBS - With Journal requires 30% more brokers + bookies for durability
[Chart: brokers and bookies used per configuration]
Brokers/Bookies Used - 1 GiB/s in - 3 GiB/s out
[Chart: brokers and bookies used per configuration]
• Pulsar uses 20-50% fewer brokers + bookies than Kafka
• Pulsar requires more brokers than in the 1 GiB/s out case due to the additional bandwidth required for 3 GiB/s out
• Pulsar EBS - With Journal requires just 7% more brokers + bookies for durability
Disk Write Bandwidth - 1 GiB/s in - 1 GiB/s out
[Chart: disk write bandwidth per VM for each configuration]
• Pulsar provides as much as 850 MB/s per
VM instance in NVMe
• Pulsar in EBS - No Journal provides 530
MB/s out of 875 MB/s available
• Kafka uses only 350 MB/s per VM
instance independent of EBS or NVMe
• Pulsar EBS - With Journal provides 250
MB/s since the data is written twice -
effectively utilizing 500 MB/s of EBS disk
write bandwidth
Disk Write Bandwidth - 1 GiB/s in - 3 GiB/s out
[Chart: disk write bandwidth per VM for each configuration]
• Pulsar provides as much as 850 MB/s per
VM instance in NVMe
• Pulsar in EBS - No Journal provides 510
MB/s out of 875 MB/s available
• Kafka uses only 310 MB/s per VM
instance independent of EBS or NVMe
• Pulsar EBS - With Journal provides 250
MB/s since the data is written twice -
effectively utilizing 500 MB/s of EBS disk
write bandwidth
Pulsar uses 20-30% fewer brokers + bookies since it exploits available disk bandwidth
CPU Usage - 1 GiB/s in - 1 GiB/s out
[Chart: CPU cores used per configuration - Pulsar configurations roughly 24-29 cores, Kafka 39.6-56 cores]
• Pulsar consumes 40-60% fewer cores than Kafka
• Kafka uses more CPU due to CRC32 computation and Scala overhead
CPU Usage - 1 GiB/s in - 3 GiB/s out
[Chart: CPU cores used per configuration - Pulsar configurations roughly 29-34 cores, Kafka 56.6-75.6 cores]
• Pulsar consumes 40-60% fewer cores than Kafka
• Pulsar CPU usage is more or less the same for 1 GiB/s out and 3 GiB/s out
NIC Usage - 1 GiB/s in - 1 GiB/s out
[Chart: NIC usage (in/out) per configuration]
• NIC usage is pretty much the same in both Kafka and Pulsar
NIC Usage - 1 GiB/s in - 3 GiB/s out
[Chart: NIC usage (in/out) per configuration]
• NIC usage is pretty much the same in both Kafka and Pulsar
Pulsar uses 50-60% fewer CPU cores with complete control of memory
VMs Needed - 1 GiB/s in - 1 GiB/s out
[Chart: i3.8xlarge VMs required per configuration]
• Pulsar uses 30-60% fewer VMs than Kafka
• This is due to Pulsar's effective use of bandwidth per VM
VMs Needed - 1 GiB/s in - 3 GiB/s out
[Chart: i3.8xlarge VMs required per configuration]
• Pulsar uses 30-60% fewer VMs than Kafka
• This is due to Pulsar's effective use of bandwidth per VM
VMs Needed - 3 GiB/s in - 3 GiB/s out
[Chart: i3.8xlarge VMs required per configuration, including additional VMs]
• Pulsar still uses 25-50% fewer VMs than Kafka
• Kafka was able to sustain only 2.3 GiB/s in and 2.3 GiB/s out in this case
• Pulsar EBS - With Journal requires 30% more VMs for durability and no data loss
Pulsar uses 25-50% fewer VMs for a given throughput; with 30% more VMs, Pulsar also supports durability
Single Partition Throughput
[Chart: throughput of a single partition per configuration - Pulsar: 277.4-304.9 MB/s; Kafka: 54.7-82.1 MB/s]
• Pulsar partition is not limited by a single
disk I/O - takes advantage of storage
striping in BookKeeper
Single Partition Latency
[Chart: 50th/99th percentile single-partition latency per configuration]
• Pulsar latency is consistently lower than Kafka's
• Varies around 5x-100x
Pulsar is 1.5-2x lower in capex cost, with a 5-50x improvement in latency, and 2-3x lower in opex due to its layered architecture
Performance
✦ Pulsar consistently provides 5x-50x lower latency
✦ Pulsar uses 20-30% fewer brokers + bookies as it efficiently exploits available disk bandwidth
✦ Pulsar uses 50-60% fewer CPU cores with complete control of memory
✦ Pulsar's single-partition throughput is 5x higher, with 5x-50x lower latency
Requirement #14 - Observability
✦ When in production, we need visibility into the overall health of the system and its components
✦ The system should expose detailed, relevant metrics
✦ It should be easy to debug and troubleshoot
Pulsar Observability
✦ System overview metrics
✦ Messaging metrics
✦ Topic metrics
✦ Function metrics
✦ Broker metrics
✦ Bookie metrics
✦ Proxy metrics
✦ JVM metrics
✦ Log metrics
✦ Zookeeper metrics
✦ Container metrics
✦ Host metrics
Requirement #15 - Ecosystem
It is growing!
Requirement #16 - Adoption
Over 600 companies and growing!
Requirement #17 - Community
✦ 320 contributors
✦ 30 committers
✦ 600+ companies
✦ 6.7K GitHub stars
Requirement #18 - Licensing
✦ Apache License 2.0
✦ Affiliated with vendor-neutral institutions - Apache/CNCF
✦ Avoid vendor-controlled components where possible - a vendor could change the license later
Apache Pulsar vs Apache Kafka
✦ Multi-tenancy - a single cluster can support many tenants and use cases
✦ Seamless cluster expansion - expand the cluster without any downtime
✦ High throughput & low latency - can reach 1.8M messages/s in a single partition with publish latency of 5 ms at the 99th percentile
✦ Durability - data replicated and synced to disk
✦ Geo-replication - out-of-the-box support for geographically distributed applications
✦ Unified messaging model - supports both topic & queue semantics in a single model
✦ Tiered storage - hot/warm data for real-time access and cold event data in cheaper storage
✦ Pulsar Functions - flexible lightweight compute
✦ Highly scalable - can support millions of topics, making data modeling easier
✦ Licensing - Apache 2.0, no vendor-specific licensing
✦ Multiprotocol handlers - support for AMQP, MQTT, and Kafka
✦ OSS - several core features of Pulsar are in Apache, as compared to Kafka
Apache Pulsar at Splunk
✦ Apache Pulsar runs as a service in production, processing several billion messages/day
✦ Apache Pulsar is integrated as the message bus in Splunk DSP 1.1.0, our core streaming product
✦ Apache Pulsar is being introduced in other initiatives as well
Splunk DSP
A real-time stream processing solution that collects, processes, and delivers data to Splunk and other destinations in milliseconds
[Diagram: the Splunk Data Stream Processor turns raw data into high-value information (filter, enhance, aggregate, format, normalize, transform, detect data patterns or conditions), protects sensitive data (mask sensitive data), and distributes data to Splunk or other destinations such as data warehouses, public clouds, and message buses]
DSP Architecture
[Diagram: data sources - forwarders (S2S), HEC, REST clients, and batch - feed into Apache Pulsar; a stream processing engine consumes from Pulsar and delivers to Splunk indexers and external systems]
Apache Pulsar is at the core of DSP
Closing Remarks
Future Work
✦ Auto-partitioning
✦ Pluggable metadata store
✦ Enhancing the state store
Current Work
✦ Improved Go client
✦ Support for batch connectors
✦ Pulsar k8s operator
✦ Critical bug fixes
Splunk is committed to advancing Apache Pulsar - as it is used by our core products and cloud services
Visit our booth for a demo of DSP!
We are hiring!
Thank You
© 2019 SPLUNK INC.

Apache Pulsar @Splunk

  • 1.
    © 2019 SPLUNKINC. Apache Pulsar @splunk Nov 2020 Karthik Ramasamy Splunk
  • 2.
    © 2020 SPLUNKINC. Karthik Ramasamy Senior Director of Engineering @karthikz streaming @splunk | ex-CEO of @streamlio | co-creator of @heronstreaming | ex @Twitter | Ph.D
  • 3.
    During the courseof this presentation, we may make forward-looking statements regarding future events or plans of the company. We caution you that such statements reflect our current expectations and estimates based on factors currently known to us and that actual events or results may differ materially. The forward-looking statements made in the this presentation are being made as of the time and date of its live presentation. If reviewed after its live presentation, it may not contain current or accurate information. We do not assume any obligation to update any forward- looking statements made herein. In addition, any information about our roadmap outlines our general product direction and is subject to change at any time without notice. It is for informational purposes only, and shall not be incorporated into any contract or other commitment. Splunk undertakes no obligation either to develop the features or functionalities described or to include any such feature or functionality in a future release. Splunk, Splunk>, Data-to-Everything, D2E and Turn Data Into Doing are trademarks and registered trademarks of Splunk Inc. in the United States and other countries. All other brand names, product names or trademarks belong to their respective owners. © 2020 Splunk Inc. All rights reserved Forward- Looking Statements © 2020 SPLUNK INC.
  • 4.
    © 2019 SPLUNKINC. Agenda 1) Introduction to Splunk 2) Streaming system requirements 3) How Pulsar satisfies the requirements? 4) Apache Pulsar at Splunk 5) Questions?
  • 5.
    © 2020 SPLUNKINC. Data
 LakesMaster Data Management ETL Point Data Management 
 Solutions Data
 Silos Business Processes The 
 Data-to-Everything Platform IT Security DevOps
  • 6.
    © 2019 SPLUNKINC. Core of Emerging Use Cases Streaming data transformation Data distribution Real-time analytics Real-time monitoring and notifications IoT analytics ! Event-driven workflows Messaging / Streaming Systems Interactive applications Log processing and analytics
  • 7.
    © 2020 SPLUNKINC. Streaming System Requirements DurabilityScalability Fault Tolerance High Availability Sharing & Isolation Messaging Models Client Languages Persistence Type Safety Deployment in k8s
  • 8.
    © 2020 SPLUNKINC. Streaming System Requirements AdoptionEcosystem Community Licensing Disaster Recovery Operability TCO Observability
  • 9.
    © 2019 SPLUNKINC. Requirement #1 - Scalability ✦ Traffic can wildly vary while the system in production ✦ System need to scale up with no effect to publish/consume throughput and latency ✦ Support for linear increase/decrease in publish/consume throughput as new nodes are added ✦ Automatic spreading out load to new machines as new nodes are added ✦ Scalability across different dimensions - serving and storage
  • 10.
    © 2019 SPLUNKINC. Scalability Consumer Producer Producer Producer Consumer Consumer Consumer Messaging Broker Broker Broker Bookie Bookie Bookie Bookie Bookie Event storage Function Processing WorkerWorker ✦ Independent layers for processing, serving and storage ✦ Messaging and processing built on Apache Pulsar ✦ Storage built on Apache BookKeeper
  • 11.
    © 2019 SPLUNKINC. Requirement #2 - Durability ✦ Splunk applications have different types of durability ✦ Persistent Durability - No data loss in the presence of nodes failures or entire cluster failure - e.g security & compliance ✦ Replicated Durability - No data loss in the presence of limited nodes failures - e.g, machine logs ✦ Transient Durability - Data loss in the presence of failures - e.g metrics data
  • 12.
    © 2019 SPLUNKINC. Durability Bookie Bookie BookieBrokerProducer Journal Journal Journal fsync fsync fsync
  • 13.
    © 2019 SPLUNKINC. Requirement #3 - Fault Tolerance ✦ Ability of the system to function under component failures ✦ Ideally without any manual intervention up to a certain degree
  • 14.
    © 2019 SPLUNKINC. Pulsar Fault Tolerance Segment 1 Segment 2 Segment n .
.
. Segment 2 Segment 3 Segment n .
.
. Segment 3 Segment 1 Segment n .
.
. Segment 1 Segment 2 Segment n .
.
. Storage Broker Serving Broker Broker ✦ Broker Failure ✦ Topic reassigned to available broker based on load ✦ Can construct the previous state consistently ✦ No data needs to be copied ✦ Bookie Failure ✦ Immediate switch to a new node ✦ Background process copies segments to other bookies to maintain replication factor
  • 15.
    © 2019 SPLUNKINC. Requirement #4 - High Availability ✦ System should continue to function in the cloud or on-prem in following conditions, if applicable ✦ When two nodes/instances fail ✦ When an availability zone or a rack fails
  • 16.
    © 2019 SPLUNKINC. Pulsar High Availability Segment 1 Segment 2 Segment n .
.
. Segment 2 Segment 3 Segment n .
.
. Segment 3 Segment 1 Segment n .
.
. Storage Broker Serving Broker Broker ✦ Node Failures ✦ Broker failures ✦ Bookie failures ✦ Handled similar to respective component failures ✦ Zone/Rack Failures ✦ Bookies provide rack awareness ✦ Broker replicate data to different racks/zones ✦ In the presence of zone/rack failure, data is available in other zones Zone A Zone B Zone C
  • 17.
    © 2019 SPLUNKINC. Requirement #5 - Sharing and Isolation ✦ System should have the capabilities to ✦ Share many applications on the same cluster for cost and manageability purposes ✦ Isolate different applications on their own machines in the same cluster when needed
  • 18.
    © 2019 SPLUNKINC. Sharing and Isolation Apache Pulsar Cluster Product Safety ETL Fraud Detection Topic-1 Account History Topic-2 User Clustering Topic-1 Risk Classification MarketingCampaigns ETL Topic-1 Budgeted Spend Topic-2 Demographic Classification Topic-1 Location Resolution Data Serving Microservice Topic-1 Customer Authentication 10 TB 7 TB 5 TB ✦ Software isolation Storage quotas, flow control, back pressure, rate limiting ✦ Hardware isolation Constrain some tenants on a subset of brokers/bookies
  • 19.
    © 2019 SPLUNKINC. Requirement #6 - Client Languages Apache Pulsar Cluster Java Python Go C++ C Officially supported by the project
  • 20.
    © 2019 SPLUNKINC. Requirement #7 - Multiple Messaging Models ✦ Splunk applications require different consuming models ✦ Collect once and deliver once capability (e.g) process S3 file and ingest into index ✦ Receive data once and deliver many times (e.g) multiple pipelines sharing same data for different types of processing ✦ Avoid two systems, if possible - from cost and operations perspective ✦ Avoid any additional infra-level code, if possible, that emulates one semantics on top of another system
  • 21.
    © 2020 SPLUNKINC. Pulsar Messaging Models • Shared Subscription • Key Shared Subscription Messaging Queuing • Exclusive Subscription • Failover Subscription Native support avoids two systems and extra infrastructure code that requires maintenance
  • 22.
    © 2019 SPLUNKINC. Messaging Models - Streaming Pulsar topic/ partition Producer 2 Producer 1 Consumer 1 Consumer 2 Subscription A M4 M3 M2 M1 M0 M4 M3 M2 M1 M0 X Exclusive
  • 23.
    © 2019 SPLUNKINC. Messaging Models - Streaming Pulsar topic/ partition Producer 2 Producer 1 Consumer 1 Consumer 2 Subscription B M4 M3 M2 M1 M0 M4 M3 M2 M1 M0 Failover In case of failure in consumer 1
  • 24.
    © 2019 SPLUNKINC. Messaging Models - Queuing Pulsar topic/ partition Producer 2 Producer 1 Consumer 2 Consumer 3 Subscription C M4 M3 M2 M1 M0 Shared Traffic is equally distributed across consumers Consumer 1 M4M3 M2M1M0
  • 25.
    © 2019 SPLUNKINC. Messaging Models - Queuing Pulsar topic/ partition Producer 2 Producer 1 Consumer 2 Consumer 3 Subscription D K3 K1 K3 K2 K1 Key Shared Traffic is distributed across consumers based on key Consumer 1 K3K1 K3K2K1
  • 26.
    © 2019 SPLUNKINC. Selective vs Cumulative Acknowledgements M0 M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12 M13 M14 M0 M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12 M13 M14 Cumulative Ack (M12) Ack (M7) Ack (M12)
  • 27.
    © 2019 SPLUNKINC. Requirement #8 - Persistence Producer Producer Producer Consumer Consumer Cold storage Hot storage Topic ✦ Offload cold data to lower-cost storage (e.g. cloud storage, HDFS) ✦ Manual or automatic (configurable threshold) ✦ Transparent to publishers and consumers ✦ Allows near-infinite event storage at low cost (e.g) compliance and security
  • 28.
    © 2019 SPLUNKINC. Requirement #9 - Type Safety ✦ Splunk applications are varied ✦ One class requires fixed schema ✦ Another class requires fixed schema with evolution ✦ Other class requires flexibility for no schema or handled at the application level ✦ Avoid bringing another system for schema management ✦ Support for multiple different types -
  • 29.
    © 2019 SPLUNKINC. Pulsar Schema Registry ✦ Provides type safety to applications built on top of Pulsar ✦ Server side - system enforces type safety and ensures that producers and consumers remain synced ✦ Schema registry enables clients to upload data schemas on a topic basis. ✦ Schemas dictate which data types are recognized as valid for that topic
  • 30.
Requirement #10 - Ease of Deployment in k8s
✦ Splunk uses k8s for orchestration
✦ System should be easily deployable in k8s
✦ Surface area exposed outside k8s should be minimal - a single endpoint backed by a load balancer
✦ Should be able to segregate the nodes receiving external traffic
✦ Should be flexible enough to deploy from CI/CD pipelines for testing and development
Pulsar Deployment in k8s
[Diagram - two patterns. Aggregated Deployment: a load balancer fronts Pulsar proxies, which route to brokers; segments are striped across the storage nodes. Segregated Deployment: the same layers, with the proxy tier placed on separate nodes so that only they receive external traffic.]
Requirement #11 - Operability
✦ System should stay online and continue to serve production traffic through:
✦ OS upgrades
✦ Security patches
✦ Disk swapping
✦ Software upgrades
✦ Self-adjusting components
✦ Bookies turn themselves read-only when 90% of disk is full
✦ Load manager balances traffic across brokers
Requirement #12 - Disaster Recovery
✦ Critical enterprise data flows through Splunk products
✦ Customers expect continuous availability in cloud / on-premise
✦ Required to handle data center failures seamlessly
✦ Pulsar provides both:
✦ Asynchronous replication
✦ Synchronous replication
Disaster Recovery - Async Replication
✦ Two independent clusters, in primary/standby or primary/primary configuration
✦ Configured tenants and namespaces replicate to the standby
✦ Data published to the primary is asynchronously replicated to the standby
✦ Producers and consumers are restarted in the second datacenter upon primary failure
✦ With replicated subscriptions, consumers restart close to where they left off
[Diagram: active producers/consumers on the primary Pulsar cluster in Datacenter A; standby producers/consumers on the standby cluster in Datacenter B; Pulsar replication runs between the clusters, each with its own ZooKeeper]
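Async geo-replication is configured per namespace through the admin API. A sketch of the admin commands involved (cluster, tenant, and namespace names are placeholders; check `pulsar-admin` help for the exact flags in your version):

```shell
# Make each cluster aware of its peer (run once per direction)
bin/pulsar-admin clusters create cluster-b \
  --url http://broker-b.example.com:8080 \
  --broker-url pulsar://broker-b.example.com:6650

# Allow the tenant to span both clusters
bin/pulsar-admin tenants create my-tenant \
  --allowed-clusters cluster-a,cluster-b

# Replicate the namespace between the two clusters
bin/pulsar-admin namespaces set-clusters my-tenant/my-ns \
  --clusters cluster-a,cluster-b
```

From that point on, anything published to topics in `my-tenant/my-ns` on one cluster is asynchronously replicated to the other.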
Requirement #13 - Performance & TCO
✦ Splunk application requirements vary widely:
✦ real-time (< 10 ms)
✦ near real-time (< a few minutes)
✦ high throughput (ability to handle multiple PB/day in a single cluster)
✦ Conducted a detailed performance study comparing Pulsar with Kafka
Performance Experiments
Settings
AWS i3.8xlarge: 32 vCPU, 244 GB of RAM, 4 x 1,900 GB NVMe exposed as bonded RAID0, 10 Gbps full-duplex networking, 7 Gbps dedicated EBS bandwidth
Settings
20 i3.8xlarge instances in two tainted groups:
Pulsar tainted group - 15 instances, for running Pulsar/Kafka
Pulsar client tainted group - 5 instances, for producing/consuming traffic
Settings
Apache Pulsar: message size 1 KiB, batch size 128 KiB, max publish delay 1 ms
Apache Kafka: message size 1 KiB, batch size 128 KiB, linger time 1 ms
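Both systems were tuned to comparable batching behavior. Roughly equivalent producer settings (Kafka names follow its producer config, Pulsar names follow the Java client's producer builder shown property-style; treat this as a sketch rather than exact tuning):

```properties
# Apache Kafka producer
batch.size=131072     # 128 KiB batches
linger.ms=1           # flush a partial batch after 1 ms

# Apache Pulsar producer (Java ProducerBuilder options, property-style)
batchingMaxBytes=131072             # 128 KiB batches
batchingMaxPublishDelayMicros=1000  # 1 ms max publish delay
```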
OpenMessaging Benchmark
• Designed to measure the performance of distributed messaging systems
• Supports various "drivers" (Kafka, Pulsar, RocketMQ, RabbitMQ, ActiveMQ Artemis, NATS, NSQ)
• Automated deployment in EC2
• Workloads configured through a YAML file
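Workloads are plain YAML files. An illustrative workload in the shape used by the OpenMessaging Benchmark repo (field names may differ slightly across versions; the values here are made up for illustration):

```yaml
name: 1kb-one-topic
topics: 1
partitionsPerTopic: 16
messageSize: 1024
subscriptionsPerTopic: 1
consumerPerSubscription: 1
producersPerTopic: 1
producerRate: 50000
testDurationMinutes: 15
```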
OpenMessaging Benchmark
The coordinator takes the workload definition, propagates it to multiple workers, and collects and reports statistics
Publish Latency - 1 GiB/s in, 1 GiB/s out
[Bar chart: publish latency at the 50th and 99th percentile for Pulsar EBS with journal, Pulsar EBS no journal, Pulsar NVMe no journal, Kafka EBS, and Kafka NVMe]
• Pulsar latency is consistently lower
• Varies 5-140x
Publish Latency - 1 GiB/s in, 3 GiB/s out
[Bar chart: 50th/99th percentile publish latency for the same five configurations]
• Pulsar latency is consistently lower
• Varies 5-150x
• Pulsar EBS with journal, which guarantees durability, still has lower latency than Kafka without durability
Publish Latency - 3 GiB/s in, 3 GiB/s out
[Bar chart: 50th/99th percentile publish latency for Pulsar EBS no journal, Pulsar NVMe no journal, Kafka EBS, and Kafka NVMe]
• Pulsar latency is consistently lower
• Varies 5-90x
Pulsar provides consistently 5x-50x lower latency
Brokers/Bookies Used - 1 GiB/s in, 1 GiB/s out
[Bar chart: brokers + bookies per configuration]
• Pulsar uses 20-40% fewer brokers + bookies than Kafka
• Due to higher bandwidth utilization
• Pulsar EBS with journal requires 30% more brokers + bookies for durability
Brokers/Bookies Used - 1 GiB/s in, 3 GiB/s out
[Bar chart: brokers + bookies per configuration]
• Pulsar uses 20-50% fewer brokers + bookies than Kafka
• Pulsar requires more brokers than in the 1 GiB/s out case due to the additional bandwidth needed for 3 GiB/s out
• Pulsar EBS with journal requires just 7% more brokers + bookies for durability
Disk Write Bandwidth - 1 GiB/s in, 1 GiB/s out
[Bar chart, write bandwidth per VM: Pulsar EBS with journal 250 MB/s, Pulsar EBS no journal 530 MB/s, Pulsar NVMe no journal 850 MB/s, Kafka EBS 350 MB/s, Kafka NVMe 350 MB/s]
• Pulsar drives as much as 850 MB/s per VM instance on NVMe
• Pulsar on EBS without journal drives 530 MB/s out of the 875 MB/s available
• Kafka uses only 350 MB/s per VM instance, regardless of EBS or NVMe
• Pulsar EBS with journal delivers 250 MB/s since the data is written twice - effectively using 500 MB/s of EBS disk write bandwidth
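The broker/bookie counts in these results follow largely from per-VM disk bandwidth. A back-of-the-envelope sizing helper (illustrative only: it ignores replication, CPU, and network limits, and the bandwidth figures are the ones reported on this slide):

```python
import math

def vms_for_write_bw(aggregate_mb_s, per_vm_mb_s, write_amplification=1):
    """Storage VMs needed to absorb a target write rate, given the
    usable write bandwidth of one VM. A journal that writes every
    entry twice doubles the amplification factor."""
    return math.ceil(aggregate_mb_s * write_amplification / per_vm_mb_s)

# 1 GiB/s of ingest ~= 1024 MB/s of ledger writes
kafka_ebs      = vms_for_write_bw(1024, 350)      # ~350 MB/s usable per VM
pulsar_nvme    = vms_for_write_bw(1024, 850)      # ~850 MB/s usable per VM
pulsar_journal = vms_for_write_bw(1024, 500, 2)   # journal writes data twice
```

The bandwidth term alone already shows why a system that exploits more of each disk's write bandwidth needs fewer machines for the same ingest rate.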
Disk Write Bandwidth - 1 GiB/s in, 3 GiB/s out
[Bar chart, write bandwidth per VM: Pulsar EBS with journal 250 MB/s, Pulsar EBS no journal 510 MB/s, Pulsar NVMe no journal 850 MB/s, Kafka EBS 310 MB/s, Kafka NVMe 310 MB/s]
• Pulsar drives as much as 850 MB/s per VM instance on NVMe
• Pulsar on EBS without journal drives 510 MB/s out of the 875 MB/s available
• Kafka uses only 310 MB/s per VM instance, regardless of EBS or NVMe
• Pulsar EBS with journal delivers 250 MB/s since the data is written twice - effectively using 500 MB/s of EBS disk write bandwidth
Pulsar uses 20-30% fewer brokers + bookies since it exploits the available disk bandwidth
CPU Usage - 1 GiB/s in, 1 GiB/s out
[Bar chart, cores used per configuration: the Pulsar configurations use roughly 24-29 cores vs roughly 40-56 for Kafka]
• Pulsar consumes 40-60% fewer cores than Kafka
• Kafka uses more CPU due to CRC32 computation and Scala overhead
CPU Usage - 1 GiB/s in, 3 GiB/s out
[Bar chart, cores used per configuration: the Pulsar configurations use roughly 29-34 cores vs roughly 57-76 for Kafka]
• Pulsar consumes 40-60% fewer cores than Kafka
• Pulsar CPU usage is more or less the same for 1 GiB/s out and 3 GiB/s out
NIC Usage - 1 GiB/s in, 1 GiB/s out
[Bar chart, NIC in/out per configuration: all values fall between roughly 24 and 26]
• NIC usage is essentially the same for Kafka and Pulsar
NIC Usage - 1 GiB/s in, 3 GiB/s out
[Bar chart, NIC in/out per configuration: inbound roughly 24-26, outbound roughly 41-42]
• NIC usage is essentially the same for Kafka and Pulsar
Pulsar uses 50-60% fewer CPU cores, with complete control of memory
VMs Needed - 1 GiB/s in, 1 GiB/s out
[Bar chart: i3.8xlarge VMs required per configuration]
• Pulsar uses 30-60% fewer VMs than Kafka
• This is due to Pulsar's effective use of the bandwidth available per VM
VMs Needed - 1 GiB/s in, 3 GiB/s out
[Bar chart: i3.8xlarge VMs required per configuration]
• Pulsar uses 30-60% fewer VMs than Kafka
• This is due to Pulsar's effective use of the bandwidth available per VM
VMs Needed - 3 GiB/s in, 3 GiB/s out
[Bar chart: VMs plus additional VMs per configuration]
• Pulsar still uses 25-50% fewer VMs than Kafka
• Kafka was able to sustain only 2.3 GiB/s in and 2.3 GiB/s out in this case
• Pulsar EBS with journal requires 30% more VMs for durability and no data loss
Pulsar uses 25-50% fewer VMs for a given throughput. With an additional 30% more VMs, Pulsar supports durability
Single Partition Throughput
[Bar chart, single-partition throughput: the Pulsar configurations sustain roughly 277-305 MB/s vs roughly 55-82 MB/s for Kafka]
• A Pulsar partition is not limited by a single disk's I/O - it takes advantage of storage striping in BookKeeper
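BookKeeper stripes a single partition's ledger entries across an ensemble of bookies, so one partition is never bound to one disk. A minimal round-robin striping sketch (simplified; real BookKeeper distinguishes ensemble size, write quorum, and ack quorum):

```python
def stripe_entries(num_entries, ensemble, write_quorum):
    """Assign each ledger entry to `write_quorum` bookies, rotating
    round-robin through the ensemble (BookKeeper-style striping)."""
    n = len(ensemble)
    placement = {}
    for e in range(num_entries):
        placement[e] = [ensemble[(e + i) % n] for i in range(write_quorum)]
    return placement

p = stripe_entries(6, ["bookie-1", "bookie-2", "bookie-3"], 2)
# consecutive entries land on rotating pairs of bookies,
# so all three disks serve the same partition in parallel
```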
Single Partition Latency
[Bar chart: 50th/99th percentile single-partition latency per configuration]
• Pulsar latency is consistently lower than Kafka's
• Varies around 5x-100x
Pulsar is 1.5-2x lower in capex, with a 5-50x improvement in latency, and 2-3x lower in opex due to its layered architecture
Performance
✦ Pulsar provides consistently 5x-50x lower latency
✦ Pulsar uses 20-30% fewer brokers + bookies, as it efficiently exploits the available disk bandwidth
✦ Pulsar uses 50-60% fewer CPU cores, with complete control of memory
✦ Pulsar single-partition throughput is 5x higher, with 5x-50x lower latency
Requirement #14 - Observability
✦ In production, we need visibility into the overall health of the system and its components
✦ System should expose detailed, relevant metrics
✦ Should be easy to debug and troubleshoot
Pulsar Observability
✦ System overview metrics
✦ Messaging metrics
✦ Topic metrics
✦ Function metrics
✦ Broker metrics
✦ Bookie metrics
✦ Proxy metrics
✦ JVM metrics
✦ Log metrics
✦ ZooKeeper metrics
✦ Container metrics
✦ Host metrics
Requirement #15 - Ecosystem
It is growing!
Requirement #16 - Adoption
Over 600 companies and growing!
Requirement #17 - Community
320 contributors | 30 committers | 600+ companies | 6.7K GitHub stars
Requirement #18 - Licensing
✦ Apache License 2.0
✦ Affiliated with vendor-neutral institutions - Apache/CNCF
✦ Avoid vendor-controlled components where possible
✦ A vendor could change the license later
Apache Pulsar vs Apache Kafka
Multi-tenancy: a single cluster can support many tenants and use cases
Seamless cluster expansion: expand the cluster without any downtime
High throughput & low latency: can reach 1.8M messages/s in a single partition, with publish latency of 5 ms at the 99th percentile
Durability: data replicated and synced to disk
Geo-replication: out-of-the-box support for geographically distributed applications
Unified messaging model: supports both topic and queue semantics in a single model
Tiered storage: hot/warm data for real-time access, cold event data in cheaper storage
Pulsar Functions: flexible, lightweight compute
Highly scalable: can support millions of topics, which makes data modeling easier
Licensing: Apache 2.0 - no vendor-specific licensing
Multiprotocol handlers: support for AMQP, MQTT, and Kafka
OSS: several of Pulsar's core features live in the Apache project itself, as compared to Kafka
Apache Pulsar at Splunk
✦ Apache Pulsar runs as a service in production, processing several billion messages/day
✦ Apache Pulsar is integrated as the message bus in Splunk DSP 1.1.0, our core streaming product
✦ Apache Pulsar is being introduced in other initiatives as well
Splunk DSP
A real-time stream processing solution that collects, processes, and delivers data to Splunk and other destinations in milliseconds
[Diagram - Splunk Data Stream Processor: Transform (detect data patterns or conditions, aggregate, format, normalize, filter, enhance) to turn raw data into high-value information; Protect sensitive data (masking); Distribute data to Splunk or other destinations (data warehouse, public cloud, message bus)]
DSP Architecture
[Diagram: data sources (HEC, S2S, batch, REST client, forwarders) feed Apache Pulsar, which feeds the stream processing engine; output goes to Splunk indexers and external systems]
Apache Pulsar is at the core of DSP
Closing Remarks
Current work:
✦ Improved Go client
✦ Support for batch connectors
✦ Pulsar k8s operator
✦ Critical bug fixes
Future work:
✦ Auto-partitioning
✦ Pluggable metadata store
✦ Enhancing the state store
Splunk is committed to advancing Apache Pulsar, as it is used by our core products and cloud services.
Visit our booth for a demo of DSP! We are hiring!
Thank You