Tony Lee & Nelson Wang, Splunk
Modern microservice-oriented software architectures evangelize the principles of infrastructure-as-code and declarative directives to manage and run applications. At Splunk, we wanted to marry these ideals with our majestic monolith, Splunk Enterprise, to simplify the use of our product through containerization. Rather than rearchitecting the entire product from the ground up, which can be a costly investment, we focused on incorporating a flexible configuration management layer on top of the core application. This layer lets Splunk in Docker behave like a true microservice, greatly reducing the friction of migrating toward more container-native software.
We not only concentrated on making our open-source Docker image initiative user-friendly and production-ready; we also wanted to integrate it seamlessly back into our internal engineering process. Join us for this session as we discuss migrating a traditional application into a microservice ecosystem, developing a containerization strategy for both external customer usage and internal development, and our internal container platform at scale.
15. Usability
• Add file/directory monitoring
• Open a TCP/UDP socket and listen for input
• Enable receiving of syslog data
• Set up an HTTP Event Collector token
• Forward data to another service
• Set up the administrator password
• Create user accounts for access control
• Install an enterprise license
• Connect to an existing license master
• Set up indexes
• Touch configuration files
• Install apps
• etc.
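Many of the tasks above can be expressed declaratively at container launch. A minimal Compose sketch, assuming the environment-variable interface of the `splunk/splunk` image (names such as `SPLUNK_PASSWORD` and `SPLUNK_HEC_TOKEN` are assumptions here, not a definitive reference):

```yaml
# docker-compose.yml (sketch; variable names are assumptions based on the
# splunk/splunk image's environment interface)
version: "3.6"
services:
  splunk:
    image: splunk/splunk:latest
    environment:
      - SPLUNK_START_ARGS=--accept-license   # accept the license on first boot
      - SPLUNK_PASSWORD=helloworld           # set the admin password
      - SPLUNK_HEC_TOKEN=abcd1234            # provision an HTTP Event Collector token
    ports:
      - "8000:8000"   # Splunk Web
      - "8088:8088"   # HTTP Event Collector
```

The point is that setup steps which once required manual post-install work become declarative inputs to the container.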
16. Micro-Orchestration Layer
• Asynchronous, dynamic provisioning performed by Ansible
• Eliminates reliance on any third-party service registrar/catalog
• Parallelized setup removes ordered dependencies and results in faster time-to-value

[Diagram: Dockerfile → entrypoint.sh]
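The entrypoint hands the container's environment to Ansible, which renders it into a declarative defaults file and provisions the instance asynchronously. A minimal sketch of such a defaults file, with illustrative key names (not the exact schema of the splunk-ansible defaults):

```yaml
# default.yml (illustrative sketch; key names are assumptions)
splunk:
  password: helloworld
  hec:
    enable: true
    token: abcd1234
  idxc:
    search_factor: 2        # indexer cluster search factor
    replication_factor: 3   # indexer cluster replication factor
```

Because Ansible derives everything from this rendered state, no external service registrar or catalog is needed to coordinate setup.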
19. Fault Tolerance
• Intermittent service interruption is expected within the configuration layer
• Health checks and probes assist in lifecycle management
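In Kubernetes, for example, the interruptions expected during provisioning can be absorbed with probes. A minimal sketch (the port and timing values here are assumptions, tuned to allow the Ansible provisioning phase to complete):

```yaml
# Pod spec fragment (sketch): let the orchestrator gate traffic and restarts
livenessProbe:
  httpGet:
    path: /
    port: 8000              # Splunk Web
  initialDelaySeconds: 300  # allow time for provisioning to finish
  periodSeconds: 30
readinessProbe:
  httpGet:
    path: /
    port: 8000
  initialDelaySeconds: 10
  periodSeconds: 10
```

The readiness probe keeps traffic away from a container that is still configuring itself, while the liveness probe avoids restarting it prematurely.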
20. Distribution of State
• Orchestration engine administers the pods/containers
• Application context is shared amongst cluster members via environment variables

[Diagram: Splunk deployment — Search Head, Cluster Master, Indexer 1, and Indexer 2 containers]
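Sharing application context through environment variables might look like the following Compose fragment (variable names such as `SPLUNK_ROLE` and `SPLUNK_CLUSTER_MASTER_URL` are assumptions based on the image's environment interface):

```yaml
# Compose fragment (sketch): each member learns the topology from its environment
indexer1:
  image: splunk/splunk:latest
  environment:
    - SPLUNK_ROLE=splunk_indexer
    - SPLUNK_CLUSTER_MASTER_URL=clustermaster   # hostname of the cluster master container
    - SPLUNK_PASSWORD=helloworld
searchhead:
  image: splunk/splunk:latest
  environment:
    - SPLUNK_ROLE=splunk_search_head
    - SPLUNK_CLUSTER_MASTER_URL=clustermaster
    - SPLUNK_INDEXER_URL=indexer1,indexer2
    - SPLUNK_PASSWORD=helloworld
```

No member holds privileged state about the others; the orchestrator injects the same topology description into every container.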
26. Scalability
• Dynamic scaling of specific roles
• How do I add a new indexer?

[Diagram: Splunk deployment — Search Head 1, Cluster Master, Indexer 1, Indexer 2, and Indexer 3 containers]
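Because each container self-orchestrates from its environment, adding an indexer reduces to raising a replica count. A Swarm-mode Compose sketch (service and hostname values are assumptions):

```yaml
# Compose fragment (sketch): scale indexers declaratively under Swarm mode
indexer:
  image: splunk/splunk:latest
  deploy:
    replicas: 3   # raised from 2; the new replica joins the cluster on boot
  environment:
    - SPLUNK_ROLE=splunk_indexer
    - SPLUNK_CLUSTER_MASTER_URL=clustermaster
    - SPLUNK_PASSWORD=helloworld
```

The same effect can be had imperatively, e.g. `docker service scale <stack>_indexer=3`; either way the new container provisions itself and registers with the cluster master.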
29. External Usage
• Customers have been waiting to run Splunk on Docker!
• Adoption amongst many members of the open-source community
• Example manifests for both Swarm and Kubernetes in our GitHub repo
31. Internal Usage
Open-source images are fed back into our internal deployment provisioning service (orca) used across the company.

Support
• Cheap, reproducible environments
• Support any size topology, any version

Sales
• Rapid, reliable demo instances on-demand
• Support for local scenarios for the field

Dev/QA
• Disposability around testing changes
• Accelerates iteration cycles in CI/CD
32. How?
orca leverages Docker Enterprise and Universal Control Plane (UCP).

[Diagram: a Docker Enterprise Edition swarm-mode cluster of worker nodes, operated by the Ops team; Dev/QA/Sales and Support teams consume it via Swarm/K8s]
35. Image CI/CD
• Support for multiple platforms
• Integrated security scanning
• Functional tests for various configurations and topologies
• Version control and custom dependencies for individual teams
36. Pipeline
[Diagram: GitHub triggers parallel builds on an IBM s390x agent and a Linux agent ($ build splunk-debian); a Linux agent then pushes the image ($ push splunk-debian) to DockerHub, feeding both public/open-source and internal channels]
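The pipeline above can be sketched as a generic CI configuration. The stage and job names here are hypothetical, not our actual pipeline definition:

```yaml
# CI pipeline sketch (hypothetical job names): build per platform, then publish
stages:
  - build
  - push

build-linux:
  stage: build
  tags: [linux]
  script:
    - build splunk-debian

build-s390x:
  stage: build
  tags: [ibm-s390x]
  script:
    - build splunk-debian

push-dockerhub:
  stage: push
  tags: [linux]
  script:
    - push splunk-debian   # feeds public/open-source and internal channels
```

Fanning the build out per platform keeps the multi-architecture images in lockstep before a single push stage publishes them.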
37. The Future
• Deployment frameworks (Helm, operators)
• Find ways to make Splunk more accessible and reduce time-to-value
• Always consider the future of the monolith!
38. Summary
• Find ways to bridge the gap while migrating monolithic software to a microservice architecture
• Plan for the future
• Dynamic self-orchestration made our software offering more nimble