4. Industry Leading Platform For Machine Data
Machine Data: Any Location, Type, Volume
Answer Any Question: custom dashboards, report and analyze, monitor and alert, developer platform, ad hoc search
Platform Support (Apps / API / SDKs)
Enterprise Scalability
Universal Indexing
Data sources: online services, web services, servers, security, GPS location, storage, desktops, networks, packaged applications, custom applications, messaging, telecoms, online shopping cart, web clickstreams, databases, energy meters, call detail records, smartphones and devices, RFID
Locations: on-premises, private cloud, public cloud
5. Industry Leading Platform For Machine Data
Machine Data: Any Location, Type, Volume
Answer Any Question: custom dashboards, report and analyze, monitor and alert, developer platform, ad hoc search
Platform Support (Apps / API / SDKs)
Enterprise Scalability
Universal Indexing
Any amount, any location, any source: schema-on-the-fly, universal indexing, no back-end RDBMS, no need to filter data
7. Simple Steps to Deploy Splunk Enterprise
Four steps: 1. Download, 2. Install, 3. Forward Data, 4. Search
Data sources: databases, networks, servers, virtual machines, smartphones and devices, custom applications, security, web server, sensors
8. Product Roles
Searching and Reporting (Search Head)
Indexing and Search Services (Indexer)
Data Collection and Forwarding (Forwarder)
Data Governor (Cluster Master)
Distributed Management (Deployment Server)
Data sources: databases, networks, servers, virtual machines, smartphones and devices, custom applications, security, web server, sensors
9. Scales to Hundreds of TBs/Day
Enterprise-Class Scale, Resilience and Interoperability
Send data from thousands of servers using any combination of Splunk Forwarders
Auto load-balanced forwarding to Splunk Indexers
Offload search load to Splunk Search Heads
10. Simple Steps to Deploy Splunk Cloud
Three steps: 1. Sign Up, 2. Forward Data, 3. Search
Data sources: databases, networks, servers, virtual machines, smartphones and devices, custom applications, security, web server, sensors
11. Visibility Across Datacenters
Distributed search unifies the view across locations
Role-based access controls how far a given user's search will span
Locations: New York, Tokyo, London, Cloud
12. Ingests Data From Heterogeneous Data Sources
Agent-Less and Agent Approach for Flexibility and Optimization
Syslog (TCP/UDP): syslog hosts and network devices
Event logs, performance data, Active Directory: Windows hosts
Local file monitoring (Splunk Forwarder): Unix, Linux and Windows hosts, virtual hosts, mainframes
Mounted file systems: *nix hosts
Scripted or modular inputs: shell scripts, API subscriptions, performance data
Wire data: Splunk App for Stream
DevOps/IoT: HTTP Event Collector
13. Forwards Events to Third-Party Systems
Destinations: service desk, event console, SIEM
Formats: raw or formatted
14. Delivers Mission-Critical Availability
Replication and clustering across datacenters (e.g., Portland and New York)
• Data replication – maintain searchability even if servers go down
• Multi-site capable – maintain searchability even if a site goes down
• Search Affinity – optimizes searches by fetching from the closest/fastest location
15. Integrates with Third-Party Business Tools
STEP 1: Business user (analyst) communicates data requirements to the Splunk admin
STEP 2: Splunk admin authors saved searches in Splunk Enterprise, making the searches available to the ODBC driver (a SQL-to-SPL translation layer)
STEP 3: Business user uses the tool to access saved searches and retrieve data from Splunk Enterprise
17. Turn Machine Data Into Operational Intelligence
Answer Any Question: custom dashboards, report and analyze, monitor and alert, developer platform, ad hoc search
Platform Support (Apps / API / SDKs)
Enterprise Scalability
Universal Indexing
18. Search All Your Machine Data
• Real-time and historical data on-premises, in the cloud or both
• Over 140 commands including anomaly detection and machine learning
• Search all your data, get results right away, schema-on-the-fly
Indexing pipeline (diagram): data enters via monitor, TCP/UDP and scripted inputs; the parsing pipeline handles source and event typing, character set normalization, line breaking, timestamp identification and regex transforms; the indexing pipeline writes raw data and index files to the Splunk index, with a real-time buffer feeding the real-time search process.
20. Extract Fields Anytime
Simple field extraction:
• Highlight-to-extract multiple fields at once
• Apply keyword search filters
• Specify required text in extractions
• View diverse and rare events
• Validate extracted values with field stats
21. Enrich Raw Data to Make It More Meaningful
Create additional fields from the raw data with a lookup to an external data source (LDAP, AD, watch lists, CRM/ERP, CMDB). Data goes in; insight comes out.
22. Actionable Alerting
• Create alerts based on any search
• Customize content and format of email alerts
• Trigger a script
• Custom Alert Actions – allow packaged integration with third-party applications, enable custom workflows, and let developers build, package and publish alert actions
23. Dynamic Reporting
Chart on any search, choose a visualization, save as a report.
• Visually represent the results of a search
• Run on an ad hoc basis or save the report to view later
• Share it with others on the team or a different group
• Add reports to a new or existing dashboard
24. Custom Visualizations
Visualize any data:
• Open framework to create or customize any visual
• Visuals shared via the Splunkbase library
• Available for any use: search, dashboards, reports
• Visuals for IT, security, IoT and business analytics
25. Define Relationships in Machine Data
Data Model – a hierarchical object view of underlying data; add constraints to filter out events.
• Describes how underlying machine data is represented and accessed
• Defines meaningful relationships in the data
• Enables a single authoritative view of underlying raw data
26. Transparent Acceleration
High Performance Analytics Store – check to enable acceleration of a data model over a chosen time window.
● Automatically collected – handles timing issues, backfill…
● Automatically maintained – uses the acceleration window
● Stored on the indexers – peer to the buckets
● Fault tolerant collection
27. Event Sampling
Sample random events:
• Powerful search option provides unbiased sample results
• Useful to quickly determine dataset characteristics
• Speeds large-scale data investigation and discovery
28. Easy-to-Use Analytics (Pivot)
Select fields from a data model and a time window; all chart types are available in the chart toolbox; save the report to share.
● Drag-and-drop interface enables any user to analyze data
● Create complex queries and reports without learning search language
● Click to visualize any chart type; reports dynamically update when fields change
29. Combine Reports to Create Dashboards
Use the built-in dashboard editor, or embed the reports into external sites like a wiki.
31. Inside Universal Indexing
Accurate searching and trending by time across all data
Automatic event boundary identification
Automatic timestamp normalization
34. How the Data is Stored and Aged in Splunk
Index buckets: HOT → WARM → COLD → FROZEN
Hot – Newest buckets of data that are still open for write
Warm – Recent data but closed for writing (read only)
Cold – Oldest data, commonly on cheaper, slower storage
Frozen – No longer searchable, commonly archived or deleted data
Optional TSIDX Reduction
35. Extend Storage with HDFS or AWS S3
Drive Down Costs by Archiving Historical Data to Commodity Hardware
• Archive historical (warm, cold, frozen) data to Hadoop clusters or S3
• Unified search across all data in real time
• Also analyze archived data using Hadoop tools
37. Powerful Developer Platform
REST API
Build Splunk Apps: Web Framework (Simple XML, JavaScript, HTML5)
Extend and Integrate Splunk: SDKs (Java, JavaScript, Python, Ruby, C#, PHP), Data Models, Search Extensibility, Modular Inputs
38. Accelerate Your Deployment
Apps – Leverage packaged searches and dashboards already built on top of Splunk
Education – Focused training programs online or in a classroom
Professional Services – Harness the knowledge and speed of the experts
Cloud – No need to wait for infrastructure; use Splunk AMIs or Splunk Cloud
39. Summary
● Real-Time Architecture
● Schema-on-the-fly
● Massive Scalability
● Easy Reporting and Analytics
● Platform for All Machine Data
As a company our mission is to make machine data accessible, usable and valuable to everyone. This overarching mission is what drives our company and product priorities.
Splunk is the platform for machine data: it digests all machine data and allows users to quickly analyze their data and rapidly obtain insight. The platform was designed around the premise of being able to consume any machine data even if the format changes. A relational database cannot effectively support constantly changing underlying schemas. Splunk solves this by creating a schema on the fly…
Splunk Cloud is only available in the U.S. and Canada.
It only takes minutes to download and install Splunk on the platform of your choice. Once Splunk has been downloaded and installed, the next step is to forward data to the Splunk instance. At that point all data is searchable from a single place! Since Splunk stores a copy of the raw data, searches won’t affect the end devices. Having a central place to search your data not only simplifies things, it also decreases risk since a user doesn’t have to log into the end devices.
The software can be installed on a single small instance, such as a laptop, or installed on multiple servers to scale as needed. When installed on multiple servers the functions can be split up to meet any performance, security, or availability requirements.
These are the five logical roles; a Splunk instance can serve one or more of them.
The search head is what most users interact with. It is the web server and app-interpreting engine that provides the primary, web-based user interface. Since most of the data interpretation happens as needed at search time, the role of the search head is to translate user and app requests into actionable searches for its indexer(s) and display the results. The Splunk web UI is highly customizable, either through our own view and app system, or by embedding Splunk searches in your own web apps via our API. Additional search heads can be deployed to scale with user or search load.
The core of the Splunk infrastructure is indexing. An indexer does two things – it accepts and processes new data, adding it to the index and compressing it on disk. The indexer also services search requests, looking through the data it has via its indices and returning the appropriate results to the searcher over a secure compressed communication channel. Indexers scale out almost limitlessly and with almost no degradation in overall performance, allowing Splunk to scale from single-instance small deployments to truly massive Big Data challenges.
The Splunk forwarder is an optional component that can be installed to forward data from servers, desktops, mainframes, and even ARM-based devices. There are two types of forwarders: the full Splunk distribution or a dedicated “Universal Forwarder”. The full Splunk distribution can be configured to filter data before transmitting, execute scripts locally, or run SplunkWeb. This gives you several options depending on the footprint size your endpoints can tolerate. The universal forwarder is an ultra-lightweight agent designed to collect data in the smallest possible footprint. Both flavors of forwarder come with automatic load balancing, SSL encryption and data compression, and the ability to route data to multiple Splunk instances or third-party systems.
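As a sketch, forwarder-side load balancing and compression are driven by outputs.conf; the host names below are placeholders, not a prescribed layout:

```ini
# outputs.conf on a forwarder (host names are examples)
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
# Events are automatically load-balanced across the listed indexers
server = idx1.example.com:9997, idx2.example.com:9997
compressed = true
```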
The Cluster Master coordinates which indexers have copies of which buckets to ensure we have met the proper number of replication and searchable copies of each bucket. All clustered Indexers check in with the Master to alert them of their status, and the status of each of their replicated indexes and buckets. We will talk more about buckets later.
And at the bottom there is the Deployment Server, which can be used to manage your distributed Splunk environment. The deployment server helps you synchronize the configuration of your search heads during distributed searching, as well as your forwarders, to centrally manage your distributed data collection. Of course, Splunk has a simple flat-file configuration system, so feel free to use your own config management tools if you're more comfortable with what you already have.
By allowing Splunk Enterprise to be split into multiple roles, any portion of Splunk can be scaled as needed.
Customers are using Splunk to index hundreds of TB a day and search over petabytes of data. Splunk can take a single search and query as many indexers as are needed to complete the job, allowing you to use inexpensive commodity hardware in massively parallel clusters.
Besides achieving massive scale, splitting the roles enables users to meet location and data segmentation requirements.
If you're looking for all the benefits of Splunk Enterprise with all the benefits of software-as-a-service, then look no further. Splunk Cloud is backed by a 100% uptime SLA, scales to over 10TB/day, and offers a highly secure environment. It makes life easy so you can go home early. Steps to deploy are even simpler: all you need to do is sign up, forward your data, and search!
Splunk Cloud delivers all the features of award-winning Splunk Enterprise, as a cloud-based service. The platform provides access to Splunk Enterprise Security and the Splunk App for AWS and enables centralized visibility across cloud, hybrid and on-premises environments.
Instant: Instant trial and instant conversion from POC to production.
Secure: Completed SOC2 Type 2 Attestation*. Dedicated cloud environments for each customer.
Reliable: 100% uptime SLA. All the features of Splunk Enterprise, including apps, APIs, SDKs. 10TB+/day scalability and up to 10x bursting over licensed data volumes**.
Hybrid: Centralized visibility across Splunk Cloud (SaaS) and Splunk Enterprise (software) deployments.
Searches can be distributed from a single search head to any number of indexers. These indexers can all be local for massive parallelization for Big Data problems, or spread across a global enterprise to help you keep data wherever makes the most sense for your network, availability, and security requirements.
Splunk Enterprise can be deployed on premise, in the cloud, or a combination of both.
There is also an Amazon Machine Image available or if you don’t want to host or administer Splunk, it can be managed as a service by our experts using “Splunk Cloud”.
Getting data into Splunk is designed to be as flexible and easy as possible. Because the indexing engine is so flexible and doesn’t generally require configuration for most machine data, all that remains is how to collect and ship the data to your Splunk. There are many options.
When Splunk is running locally as an indexer or lightweight forwarder, you have additional options and greater control. Splunk can directly monitor hundreds or thousands of local files, index them and detect changes. Additionally, many customers use our out-of-the-box scripts and tools to generate data – common examples include performance polling scripts on *nix hosts, API calls and more.
Splunk isn’t the only technology that can benefit from collecting machine data, so let Splunk help send the data to those systems that need it. For those systems that want a direct tap into the raw data, Splunk can forward all or a subset of data in real time via TCP as raw text or RFC-compliant syslog. This can be done on the forwarder or centrally via the indexer without incrementing your daily indexing volume. Separately, Splunk can schedule sophisticated correlation searches and configure them to open tickets or insert events into SIEMs or operation event consoles. This allows you to summarize, mash-up and transform the data with the full power of the search language and import data into these other systems in a controlled fashion, even if they don’t natively support all the data types Splunk does.
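For instance, routing a copy of selected data to a SIEM as syslog can be sketched with props, transforms and outputs stanzas; the sourcetype, group name and host below are illustrative:

```ini
# props.conf – apply the routing transform to one sourcetype
[cisco:asa]
TRANSFORMS-routing = route_to_siem

# transforms.conf – tag matching events for syslog output
[route_to_siem]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = siem_group

# outputs.conf – the third-party destination
[syslog:siem_group]
server = siem.example.com:514
```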
Splunk’s clustering technology allows you to choose how many raw copies and searchable copies of your data you would like to keep. It also allows you to choose which indexers you want to store the copies on. This capability allows servers or even datacenters to go down without losing the ability to access the data.
In addition, the search affinity capability allows users to fetch data from the closest or fastest location where there is a copy of the data which can not only save the time it takes to do a search but bandwidth by eliminating the need to use the WAN when there is a local copy.
Splunk ODBC Driver lets you interact with, manipulate and visualize machine data stored in Splunk Enterprise using existing business software tools, such as Microsoft Excel or Tableau Desktop. This flexibility gives you the features available in Excel or Tableau Desktop as well as the advanced analytics capabilities of Splunk Enterprise.
Splunk Administrators need to create saved searches once. Business users then use a tool they are already familiar with to access those saved searches. Time savings and increased productivity are benefits everyone experiences.
Searching and reporting allows you to turn machine data into operational intelligence.
After the data has been indexed, users can:
Search and Investigate using the Splunk Processing Language or Pivot
Add knowledge with lookups and data models
Monitor and Alert based on their needs
And of course, build reports and analyze the data.
Let’s take a look at each of these.
Allows you to search all your data in one place in real time. The search interface operates very similarly to a search on any web search engine. Any user can become powerful very quickly with preexisting knowledge of using a search engine; however, Splunk has created over 100 commands (135 published) to make analyzing the data quicker and easier.
<This is a great time to start a demo and show the search language if giving a demo>
The schema on the fly approach is a key differentiator with Splunk.
Applying a schema at the last possible moment allows for the greatest flexibility when asking questions of your data. Splunk Enterprise will automatically extract the values from the fields in events. If the data source is updated and new fields are added, or the format of the events changes, Splunk does not need to re-index the data.
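Conceptually, search-time extraction works like applying a pattern when the search runs, not when the data is stored. A minimal sketch (the events and field names are made up; Splunk's actual extraction engine is far more sophisticated):

```python
import re

# Hypothetical raw events; fields are pulled out at search time,
# so a format change never forces a re-index.
events = [
    "Mon Mar 19 20:16:27 2018 action=purchase status=503 userid=ab1234",
    "Mon Mar 19 20:16:44 2018 action=purchase status=200 userid=cd5678",
]

KV = re.compile(r"(\w+)=(\S+)")

def extract_fields(event):
    """Schema on the fly: derive key=value fields from raw text."""
    return dict(KV.findall(event))

# A 'search' is then just a filter over the extracted fields.
failed = [e for e in events if extract_fields(e).get("status") == "503"]
```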
If Splunk doesn’t automatically extract the fields you desire you can simply select the fields and add them for future searches.
Sometimes the raw event doesn’t contain useful enough information and it needs to be enriched….
The data, for example, may have a userid but you want to search on a name. Splunk’s lookup capability can enrich the raw data by adding additional fields at search time. Some common use cases include event and error code description fields. Think “Page not Found” instead of “404”. Enriching your data can lead to entirely new insight.
In the example shown, Splunk took the userid and looked up the name and role of the user from an HR database. Similarly, it determined the location of the failed login attempt by correlating the IP address. Even though these fields don’t exist in the raw data, Splunk allows you to search or pivot on them at any time.
You can also mask data. For example, you may want social security numbers to be replaced with all X’s for regular users but not masked for others. Removing data can also be useful, such as filtering PII, before writing it to an index in Splunk.
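The two ideas above – enriching via a lookup and masking sensitive values – can be sketched as follows. The lookup table, names and SSN pattern are illustrative only; in practice an external source such as LDAP or a CRM plays this role:

```python
import re

# Hypothetical HR lookup table keyed by userid.
hr_lookup = {"ab1234": {"name": "Alice Barnes", "role": "admin"}}

def enrich(event_fields):
    """Add lookup fields at search time; the raw data stays untouched."""
    extra = hr_lookup.get(event_fields.get("userid"), {})
    return {**event_fields, **extra}

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask_ssn(raw_event):
    """Replace SSNs with X's, e.g. before indexing or for restricted roles."""
    return SSN.sub("XXX-XX-XXXX", raw_event)
```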
Alerts are triggered when certain conditions are met by the results of the search upon which it is based. Alerts can be based on both historical and real-time searches.
When an alert is triggered, it performs an alert action. This action can be the sending of the alert information to a designated set of email addresses, or the posting of the alert information to an RSS feed. Alerts can also be set up to run a custom script when they are triggered.
You can base these alerts on a wide range of threshold and trend-based scenarios, including empty shopping carts, brute force firewall attacks, and server system errors.
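The trigger logic reduces to evaluating a condition over search results and firing an action when it is met. A conceptual sketch only – Splunk evaluates alerts server-side, and the function and field names here are illustrative, not Splunk's API:

```python
def evaluate_alert(results, trigger_threshold=0):
    """Fire an alert action when a search returns more results than the threshold."""
    if len(results) > trigger_threshold:
        # In Splunk the action could be email, an RSS post, or a custom script.
        return {"action": "email", "result_count": len(results)}
    return None  # condition not met, no alert action
```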
Release 6.4 delivers an array of new pre-built visualizations, a visualization developer framework, and an open library to make it simple for customers to access, develop and share interactive visualizations
More than a dozen pre-built visualizations help customers analyze and interact with data sets commonly found in IT, security, and machine learning analysis
A new developer framework allows customers and partners to easily create or customize any visualization to suit their needs
Splunkbase now contains a growing library of visualizations provided by Splunk, our partners and our community
Doubles the visualizations in Splunk today and creates an open environment for the unlimited creation and sharing of new visualizations
Once a visual is imported from SplunkBase it is treated the same as any native Splunk feature, and is available for general use in the Visualizations dropdown.
Data Models are created using the Data Model Builder and are usually designed and implemented by users who understand the format and semantics of their indexed data, and who are familiar with the Splunk Search Processing Language (SPL). They define meaningful relationships in the data.
Unlike data models in the traditional structured world, Splunk Data Models focus on machine data and data mashups between machine data and structured data. Splunk software is founded on the ability to flexibly search and analyze highly diverse machine data employing late-binding or search-time techniques for schematization (“schema-on-the-fly”). And Data Models are no exception. They define relationships in the underlying data, while leaving the raw machine data intact, and map these relationships at search time. They are therefore highly flexible and designed to enable users to rapidly iterate.
Security is also a key consideration and data models are fully permissionable in Splunk 6.
Data Models are accelerated using the High Performance Analytics Store, new in Splunk 6. The High Performance Analytics Store represents a breakthrough innovation from Splunk that dramatically accelerates analytical operations across massive data sets by up to 1000x over Splunk 5.
The Analytics Store contains a separate store of pre-extracted values derived from the underlying Splunk index. This data is organized in columns for rapid retrieval and powers dramatic improvements in the performance of analytical operations. Once created, the Analytics Store is used seamlessly by Data Models and in turn the Pivot interface.
For users more comfortable with the Splunk Search Processing Language (SPL), The Analytics Store can also be used directly in the search language.
The Splunk Analytics Store is different from traditional Columnar databases – it is based on the Splunk lexicon and optimized for data retrieval (versus updates) by the Splunk Data Model or directly from the Splunk Search Processing Language.
With the Analytics Store, Splunk Enterprise now uniquely optimizes data retrieval for both rare term searches and now analytical operations all in the same software platform.
The new Pivot interface, combined with Data Models and the Analytics Store, makes it dramatically easier for non-technical and technical users alike to analyze and visualize data in Splunk, and represents an important step towards Splunk’s mission of making machine data accessible, usable and valuable to everyone.
Sometimes you don't need accelerated data; rather, you just need a sample over time, for example to see a trend.
Event Sampling makes it faster to characterize very large datasets and focus your investigations. It is an integrated option of Search, offering a dropdown menu to control sampling: 1 per 10, 100, 1,000, 10,000 or a custom value.
This can make searches exponentially faster.
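The idea behind a 1-per-N sample can be sketched in a few lines; this is an illustration of unbiased sampling, not Splunk's internal implementation:

```python
import random

def sample_events(events, ratio=100, seed=None):
    """Keep roughly 1 event per `ratio`, chosen uniformly at random,
    so the sample reflects the dataset without scanning-order bias."""
    rng = random.Random(seed)
    return [e for e in events if rng.randrange(ratio) == 0]
```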
The Pivot interface enables non-technical and technical users alike to quickly generate sophisticated charts, visualizations and dashboards using simple drag and drop and without learning the Search Processing Language (SPL). Users can access different chart types from the Splunk toolbox to easily visualize their data different ways.
Queries using the Pivot interface are powered by underlying “data models” which define the relationships in Machine Data.
After reports have been created either via search or pivot they can be combined to create a dashboard.
Reports can even be embedded into other webpages such as a wiki or SharePoint. This allows you to share reports with users who may not even have a Splunk account.
If the native dashboarding capabilities aren’t enough, the Splunk Web Framework, REST API, and SDKs can be leveraged to meet the needs of almost any imagination.
To build an index on the data, Splunk Enterprise will first take the raw data, compress it, and put it in “buckets”. It will also perform a number of actions, including:
Configuring character set encoding.
Identifying line termination using linebreaking rules. While many events are short and only take up a line or two, others can be long.
Identifying timestamps or creating them if they don't exist. At the same time that it processes timestamps, Splunk identifies event boundaries.
Extracting a set of default fields for each event, including host, source, and sourcetype.
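Timestamp-driven event breaking can be sketched as follows; the timestamp pattern and sample lines are made up, and Splunk's real linebreaking rules are configurable and far richer:

```python
import re

# Illustrative syslog-style timestamp, e.g. "Mar 19 20:16:27"
TS = re.compile(r"^\w{3} \d{2} \d{2}:\d{2}:\d{2}")

def break_events(lines):
    """A new event starts at each timestamp; lines without one
    (e.g. stack-trace continuations) are merged into the prior event."""
    events = []
    for line in lines:
        if TS.match(line) or not events:
            events.append(line)
        else:
            events[-1] += "\n" + line
    return events
```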
When running a search, the search head will fetch the events from disk (possibly from multiple indexers), then sort and summarize them, and format the final results as requested before displaying them to the user. Splunk Enterprise search is incredibly powerful because it….
A sourcetype is simply a default field that identifies the type of the event and carries information about it, such as where an event begins and ends, where the timestamp is, or which timestamp to use if the data has multiple.
It's important that you assign the right source type to your data. That way, the indexed version of the data will look the way you expect it to, with appropriate timestamps and event breaks. This will make it a lot easier to search your data later on.
For the most part, it's pretty easy to assign the right source type to your data. Splunk Enterprise comes with a large number of predefined source types. When consuming data, Splunk Enterprise will usually select the correct source type automatically. Sometimes, though, Splunk Enterprise needs your help. In such a case, the preview capability, as shown in the images here, can be used to set or create a custom sourcetype.
As the data ages it will transition between hot, warm, and cold buckets. Each index can be configured to move the data based on amount of disk used or the age of the data. For example, there may be a mandate to keep security data for 7 years, but only the last year is ever really searched. The data older than a year can be sent to cheaper disks (or even archived on tape).
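The aging policy described above maps to per-index settings in indexes.conf; the index name, paths and retention figure below are examples for the 7-year scenario, not recommended values:

```ini
# indexes.conf – example retention policy for a hypothetical security index
[security]
homePath   = $SPLUNK_DB/security/db               # hot and warm buckets
coldPath   = /mnt/slow_storage/security/colddb    # cheaper, slower disks
thawedPath = $SPLUNK_DB/security/thaweddb
frozenTimePeriodInSecs = 220752000                # roll to frozen after ~7 years
coldToFrozenDir = /mnt/archive/security           # archive instead of deleting
```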
TSIDX Reduction (Optional) – For more information see Slide 42 in the Appendix.
In 6.4 Users have the ability to reduce Splunk performance optimization data (TSIDX) files – yielding a smaller footprint.
40-80% reduction in data footprint
No functionality loss
Limited performance tradeoff
Splunk Enterprise customers can drive down their TCO by archiving historical data to Hadoop on commodity hardware.
Store old data cheaper in Hadoop commodity batch storage instead of SANs
Archive buckets to Hadoop (HDFS) instead of freezing buckets or throwing data away
BUILD SPLUNK APPS
The Splunk Web Framework makes building a Splunk app look and feel like building any modern web application.
The Simple Dashboard Editor makes it easy to BUILD interactive dashboards and user workflows as well as add custom styling, behavior and visualizations. Simple XML is ideal for fast, lightweight app customization and building. Simple XML development requires minimal coding knowledge and is well-suited for Splunk power users in IT to get fast visualization and analytics from their machine data. Simple XML also lets the developer “escape” to HTML with one click to do more powerful customization and integration with JavaScript.
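As a small illustration, a one-panel Simple XML dashboard looks roughly like this; the index, sourcetype and search are placeholders:

```xml
<dashboard>
  <label>Web Errors (example)</label>
  <row>
    <panel>
      <chart>
        <title>5xx responses by host</title>
        <search>
          <query>index=web sourcetype=access_combined status>=500 | timechart count by host</query>
          <earliest>-24h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">line</option>
      </chart>
    </panel>
  </row>
</dashboard>
```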
Developers looking for more advanced functionality and capabilities can build Splunk apps from the ground up using popular, standards-based web technologies: JavaScript and HTML5. The Splunk Web Framework lets developers quickly create Splunk apps by using prebuilt components, styles, templates, and reusable samples as well as supporting the development of custom logic, interactions, components, and UI. Developers can choose to program their Splunk app using Simple XML, JavaScript or HTML5 (or any combination thereof).
EXTEND AND INTEGRATE SPLUNK
Splunk Enterprise is a robust, fully-integrated platform that enables developers to INTEGRATE data and functionality from Splunk software into applications across the organization using Software Development Kits (SDKs) for Java, JavaScript, C#, Python, PHP and Ruby. These SDKs make it easier to code to the open REST API that sits on top of the Splunk Engine. With almost 200 endpoints, the REST API lets developers do programmatically what any end user can do in the UI and more. The Splunk SDKs include documentation, code samples, resources and tools to make it faster and more efficient to program against the Splunk REST API using constructs and syntax familiar to developers experienced with Java, Python, JavaScript, PHP, Ruby and C#. Developers can easily manage HTTP access, authentication and namespaces in just a few lines of code.
Developers can use the Splunk SDKs to:
- Run real-time searches and retrieve Splunk data from line-of-business systems like Customer Service applications
- Integrate data and visualizations (charts, tables) from Splunk into BI tools and reporting dashboards
- Build mobile applications with real-time KPI dashboards and alerts powered by Splunk
- Log directly to Splunk from remote devices and applications via TCP, UDP and HTTP
- Build customer-facing dashboards in your applications powered by user-specific data in Splunk
- Manage a Splunk instance, including adding and removing users as well as creating data inputs from an application outside of Splunk
- Programmatically extract data from Splunk for long-term data warehousing
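To make the REST flow concrete, here is a minimal sketch (not using the official SDKs) that builds the form body for a search-job request and parses a results payload. The endpoint shape follows the public REST API's search/jobs endpoint, but the response here is canned so the snippet stays self-contained:

```python
import json
from urllib.parse import urlencode

def search_job_body(spl, earliest="-24h", latest="now"):
    """Form-encoded body for POST /services/search/jobs."""
    return urlencode({
        "search": f"search {spl}",  # ad hoc searches must start with 'search'
        "earliest_time": earliest,
        "latest_time": latest,
        "output_mode": "json",
    })

def field_values(results_json, field):
    """Pull one field out of a JSON results payload ({'results': [...]})."""
    return [row.get(field) for row in json.loads(results_json)["results"]]

# Canned response standing in for GET .../search/jobs/<sid>/results
canned = json.dumps({"results": [{"host": "web01"}, {"host": "web02"}]})
```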
Developers can EXTEND the power of Splunk software with programmatic control over search commands, data sources and data enrichment.
Splunk Enterprise offers search extensibility through:
- Custom Search Commands - developers can add a custom search script (in Python) to Splunk to create their own search commands. To build a search that runs recursively, developers need to make calls directly to the REST API
- Scripted Lookups: developers can programmatically script lookups via Python.
- Scripted Alerts: can trigger a shell script or batch file (we provide guidance for Python and PERL).
- Search Macros: make chunks of a search reusable in multiple places, including saved and ad hoc searches.
Splunk also provides developers with other mechanisms to extend the power of the platform.
- Data Models: allow developers to abstract away the search language syntax, making Splunk queries (and thus, functionality) more manageable and portable/shareable.
- Modular Inputs: allow developers to extend Splunk to programmatically manage custom data input functionality via REST.
The reports and dashboards can be bundled into “Splunk Apps” which can be downloaded and shared. Hundreds of apps are available at apps.splunk.com, the large majority of which are free.
Multiple education options are available:
Virtual: Instructor Led Public Classes
Virtual: Instructor Led Dedicated Classes
Classroom: Instructor Led Public Classes
Classroom: Dedicated Onsite
eLearning: Self-Paced Learning
Custom Designed Solutions
Professional Services can certainly help you deploy faster but also provides a variety of other services:
Implementation services over the full deployment lifecycle
System health optimization and best practices reviews
Deployment workshops
Upgrade services
Splunk Cloud is the enterprise service that enables you to use Splunk without having to wait for hardware or staffing resources.