This document summarizes Patrick Farrell's role as the Sr. Software Engineer and Splunk administrator at Cardinal Health, a Fortune 500 healthcare company. It describes how Splunk has helped Cardinal Health improve root cause analysis, gather customer usage statistics, increase efficiencies, and provide more proactive customer support. Specifically, Splunk reduced the time to resolve issues from hours to seconds, improved systems uptime and performance, and increased customer satisfaction. The document provides recommendations on best practices for implementing Splunk and describes Cardinal Health's plans to expand Splunk usage.
2. My Background and Role
Patrick Farrell, Sr. Software Engineer
– Resident Splunk Administrator and Champion
– Started using Splunk two years ago as a developer for our eCommerce platform
– Responsible for Splunk administration, maintenance, custom application development, and dashboards
– Splunk Community of Practice owner at Cardinal Health
3. Company Overview
• Founded in 1971
• Over 30,000 employees
• Headquarters in Dublin, Ohio
• Ranked #19 on the Fortune 500
• Cardinal Health helps pharmacies, hospitals, ambulatory surgery centers and physician offices focus on patient care while reducing costs, enhancing efficiency and improving quality
4. Before Splunk
• Manual search on 30+ servers using Unix command-line programs (awk, grep, tail)
• Operational support and development groups spent hours on root cause analysis and problem resolution
• No insight into customer usage of our applications
• No ability to be proactive with customer support
5. Splunk at Cardinal Health
• Data sources
  – Application logs: access logs, system out, system err, GC, and other custom application logs
  – 25 individual source types
  – 250+ individual sources
• Indexer, Search Head, Deployment Server, and License Master
• 60 GB indexed per day
• Splunk used in pre-production and production environments
• More than thirty individuals actively using Splunk on a regular basis
• 30+ forwarders (5 server classes)
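A forwarder tier like the one above is typically wired up through two small configuration files on each universal forwarder. A minimal sketch for orientation (the hostname, port, path, sourcetype, and index names here are illustrative assumptions, not Cardinal Health's actual configuration):

```ini
# outputs.conf -- where this forwarder sends its data
[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = splunk-indexer.example.com:9997

# inputs.conf -- which log files this forwarder monitors
[monitor:///opt/app/logs/systemout.log]
sourcetype = app_systemout
index = main
```

With a deployment server in the mix, as described here, these files would normally be pushed to the five server classes as deployment apps rather than edited by hand on each of the 30+ hosts.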
6. Splunk Use Cases
“Splunk is our Swiss Army Knife”
• Improving Root Cause Analysis
• Gathering Customer Usage Statistics
• Increasing Efficiency
• Proactive Customer Support
7. Return on Investment
“One of the most important benefits of using Splunk from an application development standpoint is illustrated by how it has helped us clean up our logging code.”
8. Increased Efficiency
• With 100+ developers on a single application, there can be lines of erroneous code
  – 1.2 million severe error messages / hour
• Splunk is used to analyze application logs during performance/endurance testing
• The punct field is your friend
• Key benefit: Splunk helps us clean up our code
  – Capacity savings (storage, license)
  – Improved efficiency (speed)
  – Reduced spam
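Splunk's punct field reduces each event to its punctuation skeleton, which is what makes "most frequently occurring message" hunts so quick. A rough Python sketch of the same idea, with made-up log lines (this approximates what punct does; it is not Splunk's exact algorithm):

```python
import re
from collections import Counter

def punct_pattern(line: str, max_len: int = 30) -> str:
    """Reduce a log line to its punctuation pattern, roughly like
    Splunk's indexed `punct` field: drop letters, digits, and spaces,
    keeping only the punctuation skeleton of the message."""
    return re.sub(r"[A-Za-z0-9 ]", "", line)[:max_len]

def noisiest_patterns(lines, top_n=3):
    """Count events per punctuation pattern and return the most
    frequent ones -- candidates for log statements worth removing."""
    counts = Counter(punct_pattern(line) for line in lines)
    return counts.most_common(top_n)

if __name__ == "__main__":
    # Invented sample events: two near-identical SEVERE messages
    # collapse to one pattern, exposing the noisy log statement.
    sample = [
        "SEVERE: [order-svc] lookup failed (id=123)",
        "SEVERE: [order-svc] lookup failed (id=456)",
        "INFO: cache warmed in 12ms",
    ]
    for pattern, count in noisiest_patterns(sample):
        print(f"{count:>6}  {pattern}")
```

Grouping by this pattern is how a statement printed 1.2 million times an hour stands out immediately, even when each occurrence differs in its IDs and timestamps.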
9. Improved Systems Uptime and Performance
• Writing Splunk-friendly code
  – Inventory Manager
• Splunk’s search processing language allowed us to easily perform analysis once considered impossible from the Unix prompt.
• Analytics for:
  – Most active accounts
  – Most invoked operations
  – SQL database contention
  – Longest running operations
  – Exceptions encountered
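Once log events carry structured fields, the analytics listed above reduce to simple aggregations. A hypothetical Python sketch (the account, operation, and duration_ms field names and sample events are invented for illustration, not taken from the presentation):

```python
from collections import Counter

# Hypothetical parsed log events; in practice these would be extracted
# from structured (key=value or JSON) application log lines.
events = [
    {"account": "acme", "operation": "getInventory", "duration_ms": 120},
    {"account": "acme", "operation": "submitOrder",  "duration_ms": 950},
    {"account": "globex", "operation": "getInventory", "duration_ms": 80},
]

def most_active_accounts(events, top_n=5):
    """Most active accounts: event count per account."""
    return Counter(e["account"] for e in events).most_common(top_n)

def longest_running(events, top_n=5):
    """Longest running operations, ranked by recorded execution time."""
    return sorted(events, key=lambda e: e["duration_ms"], reverse=True)[:top_n]

if __name__ == "__main__":
    print(most_active_accounts(events))
    print([e["operation"] for e in longest_running(events, 1)])
```

In Splunk itself the same questions are one-liners over the extracted fields (e.g. a `top` or `stats`-style aggregation), which is the "impossible from the Unix prompt" analysis the slide refers to.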
17. Improving Customer Satisfaction
• Splunk alerts us when customers see the contact-help-desk message on our site
  – Reach out to the customer immediately
• Immediate support = happier customers = more revenue
• Gathering customer usage data to identify which functionality should be enhanced or retired
18. Reducing Root Cause Analysis Time
• Searching logs across many application servers can take hours. Remember, time is money!
• Now an alert or search helps us identify most issues in seconds!
22. Results with Splunk
• Reduced Downtime – The most important benefit to our large eCommerce application is reduced downtime. Every minute of downtime results in a significant loss of revenue.
• Improved Customer Satisfaction
• Increased Efficiencies – We were able to reduce our daily indexing volume by 3 GB by identifying and eliminating defects that produced in excess of 1.2 million severe events per hour. Thank you, punct!
• Reduced MTTR
• Application Enhancements – We can determine the focus of future enhancements by monitoring how our customers are using the site. Likewise, we can also identify unused functionality.
• Searching and Reporting – Ability to drill down to specific areas and find issues in seconds instead of hours.
23. Best Practice Recommendations
• Splunk is an amazing platform, as long as you are prepared for it!
• Create a roadmap that outlines how you intend to use Splunk and where you would like to take the product within your organization.
• Plan your environment and account for future growth (users, searches, license volume, hardware capacity, storage, etc.).
24. Best Practice Recommendations
• Generate a unique identifier for each transaction and write it to the log as part of each event, so that you may easily identify all related events.
• Take advantage of automatic field extraction using key-value pairs, or use a logging format such as JSON that can provide automatic field extraction.
• Capture execution time in log events for an added dimension.
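The three recommendations above can be sketched together in a few lines of Python. This is a hypothetical illustration, not Cardinal Health's actual code; names such as log_event, txn_id, and submitOrder are invented:

```python
import json
import time
import uuid

def log_event(txn_id: str, operation: str, duration_ms: float,
              level: str = "INFO", **fields) -> str:
    """Render one log event as a single JSON line, so every field
    (txn_id, operation, duration_ms, ...) is extractable automatically."""
    event = {"txn_id": txn_id, "operation": operation,
             "duration_ms": round(duration_ms, 2), "level": level, **fields}
    return json.dumps(event)

def handle_order(account: str) -> list:
    """Simulate one transaction: every event carries the same txn_id,
    so a single search on that ID returns all related events."""
    txn_id = str(uuid.uuid4())  # unique identifier per transaction
    lines = [log_event(txn_id, "submitOrder.start", 0.0, account=account)]
    start = time.perf_counter()
    # ... business logic would run here ...
    elapsed_ms = (time.perf_counter() - start) * 1000  # execution time
    lines.append(log_event(txn_id, "submitOrder.end", elapsed_ms,
                           account=account))
    return lines

if __name__ == "__main__":
    for line in handle_order("acme"):
        print(line)
```

With events shaped like this, grouping a transaction's events, charting the slowest operations, or finding the most active accounts all become straightforward searches over automatically extracted fields.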
25. Future Plans
• Expanding use of Splunk to our Medical eCommerce Platform
• Creation of additional operational and business dashboards
• Evaluate the possibility of using Splunk in DEV and QA
Patrick Farrell: And so, if a developer, for example, wanted to identify the root cause of a problem, they may have significant difficulty locating the log information. Originally, our environment was larger; there were more servers in the mix. So it would really be almost a never-ending process to try and find the place where the problem occurred, when we had to physically log in to each box, examine however many log files on each box, and then move on to the next one to see where the problem originated.

Patrick Farrell: We would definitely increase our, let's say, outage time – downtime – if we were having it, because trying to locate a problem was quite a challenge. And so, by adding Splunk to the mix, especially once we managed to stabilize our environment, we have definitely seen some benefits – not necessarily in support, but in reducing or cleaning up some issues that exist in our application.

Patrick Farrell: Correct. Those customers that are currently on the site, potentially, that are experiencing difficulties – maybe they're presented with the contact-the-help-desk type message. And so, I'll see that. With that information, our goal is that maybe we can make a proactive attempt to contact a customer. We haven't gone this far yet. But for example, automatically pop up a message on the customer's screen saying, "Hey, would you like to speak with customer support about the issue you're experiencing?" – that kind of push to the customer, saying, "Look, we're here for you. Come take advantage of the opportunity to speak with us about the issue you're experiencing."
Patrick Farrell: There are 30-some forwarders; I think 33 or so right now. We collect log data from essentially custom application logs. We collect HTTP log data. We're bringing in log data from JVM-type logs – verbose GC logs. Where else do we pull data from? System out, system error. So we have a number of source files, and it's not just Order Express that uses it. We also have our EDI group. They use Splunk as well – the same Splunk installation – for their source types. They have well over 200 individual source files that are managed and indexed by Splunk, probably on the order of about 20 source types just for them. By the way, I want to switch gears just for a second. (Cathy), I just pinged (Scott). He said that Splunk was really the only tool being considered. He did say they briefly looked at an IBM tool, but it was far more expensive and less functional than Splunk.

Patrick Farrell: Well, right now, we're basically consolidated onto a single virtual machine, and I'll tell you that it's an undersized virtual machine. Just our production server alone handles about 60 gigabytes a day of log volume, and that's going through a single virtual machine. It's a Linux operating system, and it handles the deployment server, license master, indexer, and search head, all in one virtual machine.
Patrick Farrell: And what we do use it for in our stage environment is specifically to analyze performance testing results. This is probably one of the biggest benefits we've seen from Splunk from an application development standpoint: just cleaning up the code. Like I said, we have a large development team and everybody is off doing their own thing. When you bring this whole thing together, put it out there, and look at the finished product, you see, "Wow, maybe there's a million severe error messages an hour in the production logs."

Patrick Farrell: And you look at that and say, "A million severe error messages an hour. Do I really need a million severe error messages an hour? My system is still functioning. I'm not getting alerted. Why is it doing this?" So what we're using it for is to go back almost retroactively and find the places in the logs where people were printing worthless log statements. To give you one example, I found one in there that was printed 1.2 million times an hour in the log and it had nothing in it.

Patrick Farrell: Basically, as I said before, I was a developer, and the team I was a developer for is called Inventory Manager. Inventory Manager, that particular piece of Order Express, or the larger application, is using these logs. As a developer, I was basically the one who wrote the logs, so I knew the most about what was going into them. I also had a lot of control over the information and how I was going to write it to the log. Ultimately, it ended up being very advantageous to me to change the way I was writing these logs so that they were naturally useful to Splunk.
Patrick Farrell: And so, that information specifically allowed me to build some pretty interesting dashboards, first from an operational standpoint and then more from a business perspective. From an operational perspective, I show, for example, the top 10 accounts that are using the system.

Patrick Farrell: Correct. There is information like that in these logs that is completely dead – there's really no use for it. So we're going back and taking those statements out, because when you add those statements up, we may have gigabytes' worth of data per day coming from just one statement in our production logs. For us, there's monetary benefit in taking those nasty statements out, cleaning them up, and moving on. Not to mention, they shouldn't be there in the first place – our system will run faster if we don't have to write these silly statements to the log. So it's that kind of retroactive work at the moment. We do it in our stage environment. We also use the stage environment to look for the most frequently occurring messages; for example, with the punct command.
Patrick Farrell: I see the most frequently invoked operations and their accounts throughout the day. I see the longest-running operations. I see database SQL contention, whether there are any transaction timeouts at the database level, for example. I see the overall number of business and system exceptions individually. Technically, in production we shouldn't have any of what are called business exceptions; those are what we consider business rule validations, or violations, and they should be caught during testing. So I should be able to look at this graph (what we have is a radial gauge) and see it basically at zero all the time. If I don't see it at zero, then we have defects in production. The question is, where are those defects? Are they on the service layer? Are they on the front end? Is it a front-end validation that failed and the service layer caught it? That type of thing. Then of course there are system exceptions, which are completely unexpected: cases the system caught. So I see those. I also see which users are having the most difficulty with the system.

Additionally, I had a couple of other notes. On the business side of things, I also track the types of transactions being executed on the system and when throughout the day they're executed. I track how certain pieces of functionality are being used. For example, a given report may have one particular input that has five options, and maybe the users are only using two of those options. So we're supporting functionality for three others that really nobody is using.
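The operational rollups described above, top accounts by activity plus business versus system exception counts, can be sketched in a few lines. The event shape and field names (`account`, `exception_type`) are illustrative assumptions, not the actual log schema:

```python
from collections import Counter

def usage_summary(events):
    """Rollup of the kind the dashboards show: the top accounts by
    activity and counts of business vs. system exceptions."""
    accounts = Counter(e["account"] for e in events)
    exceptions = Counter(
        e["exception_type"] for e in events if e.get("exception_type")
    )
    return accounts.most_common(10), exceptions

events = [
    {"account": "A1"},
    {"account": "A1", "exception_type": "system"},
    {"account": "A2", "exception_type": "business"},
]
top, exc = usage_summary(events)
# top[0] == ("A1", 2); exc["business"] == 1
```

A nonzero business-exception count here corresponds to the radial gauge moving off zero, the signal that a defect slipped past testing.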
So that knowledge gives us the ability to go back and say, "Well, if nobody's using this functionality over the course of a month, two months, three months or more, do we really need to keep it? Should we retire it?"

Patrick Farrell: ... the more I thought about it, I was like, "Ooh, OK. So I can extract the execution-time field." I'm like, "Wow, that's really useful. So now I can do aggregated searches across all my boxes." And then I thought about it some more. Now I've got all these cool things like subsearches and this rich search language, the search processing language that Splunk has. I quickly fell in love with it. I was like, "Wow, this is great. I can do amazing things with my data that I could never do before from the UNIX prompt. I can now do that in Splunk and take this to a whole new level." That's essentially what excited me about the product: the richness of that search processing language.
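The "extract the execution-time field, then aggregate across all my boxes" workflow he describes is field extraction followed by a `stats`-style aggregation in Splunk's search language. Here is the same idea sketched outside Splunk; the log-line format is an illustrative assumption:

```python
import re
from statistics import mean

EXEC_RE = re.compile(r"operation=(\S+) .*execution_time_ms=(\d+)")

def avg_execution_time(log_lines):
    """Extract an execution-time field from raw log lines (possibly
    gathered from many hosts) and average it per operation, roughly
    what `... | stats avg(execution_time_ms) by operation` does in SPL."""
    times = {}
    for line in log_lines:
        m = EXEC_RE.search(line)
        if m:
            times.setdefault(m.group(1), []).append(int(m.group(2)))
    return {op: mean(v) for op, v in times.items()}

logs = [
    "host=web1 operation=checkout execution_time_ms=120",
    "host=web2 operation=checkout execution_time_ms=80",
]
# avg_execution_time(logs) -> {"checkout": 100}
```

Doing this per host with grep and awk is possible; the appeal he points to is that Splunk runs the same aggregation across every indexed host in one search.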
I also see which users are having the most difficulty with the system.

Patrick Farrell: Correct. Those customers that are currently on the site who are experiencing difficulties; maybe they're presented with a "contact the help desk" type of message. So I'll see that. With that information, our goal is that maybe we can make a proactive attempt to contact a customer. We haven't gone this far yet, but for example, we could automatically pop up a message on the customer's screen saying, "Hey, would you like to speak with customer support about the issue you're experiencing?" That kind of push to the customer: "Look, we're here for you. Come take advantage of the opportunity to speak with us about the issue you're experiencing."
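The detection side of that proactive-support idea is simple: flag any user who hits repeated errors within a window. A minimal sketch, where the error threshold and event shape are assumptions for illustration (in practice this would likely be a scheduled Splunk alert):

```python
from collections import Counter

ERROR_THRESHOLD = 3  # illustrative cutoff, not a figure from the interview

def users_to_contact(error_events):
    """Return users who hit enough errors that support might reach
    out proactively, e.g. with a 'would you like to chat?' prompt."""
    per_user = Counter(e["user"] for e in error_events)
    return sorted(u for u, n in per_user.items() if n >= ERROR_THRESHOLD)

errors = [{"user": "u1"}] * 3 + [{"user": "u2"}]
# users_to_contact(errors) -> ["u1"]
```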
I would say the biggest business impact is the ability to identify issues in a complex environment quickly, which reduces outage time. That's probably our biggest benefit, because as a large e-commerce application doing as much business as we do on a daily basis, you don't really want to be down for long. Every second you're down is orders you're not receiving, and those customers will be happy to take their business somewhere else. So you want to get your systems running: you want to identify problems quickly and get them resolved so that you're not alienating your customer base.