Greg Dostatni is the team lead for application hosting at the University of Alberta, where he manages a 10-person team responsible for applications and databases across the university. The university implemented Splunk in 2013, after restructuring IT operations, to address siloed data and reactive troubleshooting. Splunk provides centralized logging, real-time monitoring and alerting, and customizable dashboards. Initial incident response has improved: instead of spending half an hour gathering data, the team can begin investigating immediately. Splunk also tracks key metrics such as authentication system transactions and performance across the university's systems.
WestJet Airlines is a Canadian airline founded in 1996 that has grown to operate over 425 flights per day to over 90 destinations across North America and Central America. The Solutions Architect at WestJet discusses how they implemented Splunk to gain visibility into their various systems like websites and apps. Splunk has helped WestJet troubleshoot issues faster, identify performance problems, and answer ad-hoc questions by consolidating their logs in one place.
Taking Splunk to the Next Level - Architecture Breakout Session (Splunk)
This document provides an agenda for scaling a Splunk deployment beyond initial use cases. It discusses growing use cases and data volume over time. As Splunk becomes mission critical, the document recommends implementing high availability through indexer and search head clustering. It also suggests using a distributed management console and centralized configuration management. Finally, the document briefly discusses Splunk Cloud and hybrid deployments as options to scale without waiting for additional on-premise hardware.
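The indexer clustering recommended above is configured in `server.conf`; a minimal sketch with illustrative values (hostnames and the shared key are placeholders, and newer Splunk releases use `manager` terminology where older ones use `master`):

```
# server.conf on the cluster manager (illustrative values)
[clustering]
mode = manager
replication_factor = 3
search_factor = 2
pass4SymmKey = changeme

# server.conf on each peer indexer
[clustering]
mode = peer
manager_uri = https://cluster-manager.example.com:8089
pass4SymmKey = changeme
```

A replication factor of 3 with a search factor of 2 means three copies of each bucket exist, two of them searchable, so the cluster survives a peer failure without losing search availability.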
This document discusses how Staples uses Splunk to gain insights from machine data across their organization. It provides details on:
- Staples' Splunk infrastructure consisting of 8 index servers and 9 search heads that can handle 1TB of data per day.
- The key use cases of operational support, application insights, and business intelligence.
- How Splunk provides a single pane of glass for visibility across their web apps, servers, monitoring tools, and more.
- Examples of how Splunk has helped identify issues, reduced resolution times, and optimized website searches to improve the customer experience.
SplunkLive! Customer Presentation - Penn State Hershey Medical Center (Splunk)
This document discusses Jeff Campbell's role as the Information Security Architect at Penn State Hershey Medical Center and their use of Splunk. It describes how Penn State Hershey Medical Center has over 9,000 employees and a combined $1.5 billion budget across its institutes and hospitals. It outlines some of the challenges they faced with decentralized logging prior to Splunk, and how Splunk provided a centralized log repository allowing for faster searching and correlation across systems. It provides examples of how Penn State Hershey is using Splunk for security use cases, operational improvements, and additional sources. It also discusses their Splunk architecture and future plans to expand Splunk usage.
Splunk Ninjas: New Features, Pivot, and Search Dojo (Splunk)
Besides seeing the newest features in Splunk Enterprise and learning the best practices for data models and pivot, we will show you how to use a handful of search commands that will solve most search needs. Learn these well and become a ninja.
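The "handful of search commands" referred to here typically means staples like `stats`, `timechart`, `eval`, and `sort`. A few illustrative SPL searches, with hypothetical index and field names, sketch the idea: count errors per host, trend latency over time, and compute an error rate per URI.

```
index=web sourcetype=access_combined status>=500
| stats count by host

index=web sourcetype=access_combined
| timechart span=5m avg(response_time) by host

index=web
| eval is_error=if(status>=500, 1, 0)
| stats sum(is_error) as errors, count as total by uri
| eval error_rate=round(errors/total*100, 2)
| sort - error_rate
```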
This document summarizes Patrick Farrell's role as the Sr. Software Engineer and Splunk administrator at Cardinal Health, a Fortune 500 healthcare company. It describes how Splunk has helped Cardinal Health improve root cause analysis, gather customer usage statistics, increase efficiencies, and provide more proactive customer support. Specifically, Splunk reduced the time to resolve issues from hours to seconds, improved systems uptime and performance, and increased customer satisfaction. The document provides recommendations on best practices for implementing Splunk and describes Cardinal Health's plans to expand Splunk usage.
The document discusses how Staples uses Splunk for operational support, application insights, and business intelligence across their infrastructure. Staples relies on Splunk for real-time visibility into the health of their Advantage website and business/operational analytics. Splunk provides comprehensive insights into Staples' infrastructure and helps map application performance to user experience. It has saved Staples numerous times by quickly detecting issues. Adoption of Splunk at Staples has grown organically as more teams see its benefits.
This document discusses Dell's implementation and use of Splunk for operational monitoring and troubleshooting. Some key points:
1) Dell implemented Splunk to gain drill down capabilities and a single source of truth for aggregating machine data to better understand issues across their IT infrastructure.
2) Splunk provides benefits like intuitive dashboards, reduced time spent on monitoring, high visibility, and ease of root cause analysis.
3) Before Splunk, Dell lacked cross-server visibility, automated alerts, and drill down capabilities, resulting in slow recovery times.
Justin Hardeman is a Unix administrator at Availity LLC, a company that processes over 2 billion healthcare transactions annually. He has over 5 years of experience using Splunk for monitoring Availity's large, multi-datacenter infrastructure consisting of 500+ virtual machines. Splunk has allowed Availity to move from a reactive to proactive approach by providing real-time visibility into issues, transactions, and workflows across their environment.
The document discusses how the U.S. Social Security Administration uses Splunk to gain insights from large amounts of machine data. It summarizes Splunk's implementation at SSA, including indexing over 400 GB of data daily from various sources, and deploying search heads, indexers, and universal forwarders. It also discusses challenges around managing IT assets and how Splunk helps with security, compliance, and asset management.
This document discusses how Splunk provides value across IT operations, application delivery, business analytics, industrial data/IoT, and security/compliance. It highlights Splunk's capabilities for operational visibility, powerful developer platform, extensibility, and ecosystem for industrial/IoT data. An example deployment for oil and gas operations is shown. The document argues that a new approach to ICS/OT security is needed to analyze all relevant data and leverage threat intelligence. Splunk provides an application for enterprise security focused on ICS/OT environments.
John Villacres works in network automation and tools at Nationwide. He demonstrated Splunk to colleagues in 2012 and they now use it extensively. Splunk has improved their ability to troubleshoot issues by providing timely access to network data through custom dashboards. It has reduced resolution times for problems from days to minutes by integrating data from sources like firewalls, routers, and packet captures. More teams now use Splunk as its efficiency has allowed employees to take on new tasks while maintaining productivity.
Tom McMahon is the Security Engineering Manager at Weill Cornell Medical College, where the security team has grown from 2 to 12 people over five years. Splunk has become a central tool for both their security operations and IT operations. It has improved security response times, increased visibility across their networks and systems, and allowed for better operational reporting and metrics. Splunk consolidates logs from many different systems and applications, providing a single pane of glass, and has replaced their legacy SIEM, which was at capacity.
Splunk is used by Satcom Direct for monitoring aviation systems, tracking aircraft in flight, and analyzing business data. Logs from networking devices, phone systems, satellite communications systems and aircraft position reports are fed to Splunk. This allows Satcom Direct to provide a single dashboard for support technicians to monitor systems, see customer information and receive alerts. Splunk is also used to visualize aircraft flight paths on maps and analyze business metrics like call volumes to different countries to improve contracts.
The document summarizes Splunk adoption at athenahealth, a cloud-based healthcare services company. It discusses how Splunk has provided athenahealth's security teams visibility into various data sources to help prioritize threats and incidents. Specifically, Splunk Enterprise Security is used by the Security Incident Response Team. Over 10 power users consume 400GB of data per day from hundreds of forwarders. Splunk has improved efficiency, reduced alert fatigue, and allowed for better investigation and correlation of security information.
This document discusses log centralization in cloud environments. It describes FINRA's role as an independent financial industry regulator and how it monitors the stock market and registered brokers. It then discusses challenges of collecting logs from various cloud services (SaaS, IaaS, PaaS) and providers (AWS, Cisco, etc.). It provides examples of using AWS services like CloudTrail, CloudWatch, and Elastic MapReduce with Hadoop to collect and analyze logs and metrics in the cloud.
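As a concrete illustration of the CloudTrail analysis mentioned above: a CloudTrail log delivery is a JSON document containing a `Records` array, and log-centralization pipelines typically flatten each record down to the handful of fields analysts search on. A minimal sketch (the sample record below is fabricated for illustration; real records carry many more fields):

```python
import json

# Fabricated CloudTrail-style delivery for illustration only.
SAMPLE = json.dumps({
    "Records": [
        {
            "eventTime": "2018-06-01T12:00:00Z",
            "eventSource": "iam.amazonaws.com",
            "eventName": "CreateUser",
            "awsRegion": "us-east-1",
            "sourceIPAddress": "203.0.113.10",
            "userIdentity": {"type": "IAMUser", "userName": "alice"},
        }
    ]
})

def summarize_records(raw: str):
    """Flatten each CloudTrail record to the fields most searches key on."""
    doc = json.loads(raw)
    rows = []
    for rec in doc.get("Records", []):
        rows.append({
            "time": rec.get("eventTime"),
            "source": rec.get("eventSource"),
            "event": rec.get("eventName"),
            "ip": rec.get("sourceIPAddress"),
            "user": rec.get("userIdentity", {}).get("userName"),
        })
    return rows

if __name__ == "__main__":
    for row in summarize_records(SAMPLE):
        print(row)
```

In practice the raw JSON would arrive from an S3 bucket or a CloudWatch Logs subscription rather than an inline string; the flattening step is the same either way.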
This document discusses how Splunk provides new visibility and analytics for IT operations. It notes that IT environments are becoming increasingly complex with more servers, applications, virtualization, and cloud services. Splunk offers a platform for operational intelligence that can consolidate machine data from various sources and provide search, monitoring, and analytics capabilities. It also discusses how Splunk apps can provide deep insights into specific technology areas.
This document provides an overview and examples of data onboarding in Splunk. It discusses best practices for indexing data, such as setting the event boundary, date, timestamp, sourcetype and source fields. Examples are given for onboarding complex JSON, simple JSON and complex CSV data. Lessons learned from each example highlight issues like properly configuring settings for nested or multiple timestamp fields. The presentation also introduces Splunk capabilities for collecting machine data beyond logs, such as the HTTP Event Collector, Splunk MINT and the Splunk App for Stream.
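The event-boundary and timestamp settings discussed above live in `props.conf`. A hedged sketch for a multi-line JSON sourcetype follows; the sourcetype name and the `timestamp` field are hypothetical, and exact attribute behavior should be verified against the documentation for the Splunk version in use:

```
[my_app:json]
# Break events on the opening brace of each JSON object,
# rather than merging lines heuristically
LINE_BREAKER = ([\r\n]+)\{
SHOULD_LINEMERGE = false
# When a record contains several timestamp-like fields,
# anchor extraction to the intended one
TIME_PREFIX = "timestamp"\s*:\s*"
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 32
KV_MODE = json
```

Getting `LINE_BREAKER` and `TIME_PREFIX` right at onboarding time avoids exactly the nested- and multiple-timestamp pitfalls the examples in the presentation highlight.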
This document summarizes a presentation given by David Craigen and Jeff Meyers of Aaron's Inc. about how they use Splunk. It discusses Aaron's background as a lease-to-own retailer with over 2,100 stores. It then describes the security team and their challenges with limited visibility and slow response times prior to Splunk. With Splunk, they have gained flexibility, fast time to value through security incident correlation and continuous monitoring across various data sources. Their roadmap includes adding more data sources and automation while expanding Splunk use for applications. Key lessons included showing quick value, taking a holistic view of security data, and attending Splunk conferences for best practices.
Splunk is a software company headquartered in San Francisco with additional offices in London and Hong Kong. They have over 2,100 employees and annual revenue of $668.4 million, growing 49% year-over-year. Their products include Splunk Enterprise, Splunk Cloud, and other solutions for collecting, analyzing, and visualizing machine-generated data from websites, applications, sensors, and other sources. Splunk has over 11,000 customers across more than 110 countries, including 80 of the Fortune 100. Their largest customer indexes over 1 petabyte of data per day.
The document summarizes Battelle's use of Splunk for security monitoring and log management. It describes how Splunk replaced three disparate and difficult to manage log systems, providing a single interface for all security logs. Splunk reduced complexity, increased efficiency of the security team, and allowed them to spend more time on security and less on tool management. The security team uses Splunk for central logging, alerts and monitoring, queries and searches, and reporting to share security information.
Razi Asaduddin presented on how ExxonMobil uses Splunk for various purposes including cyber security, network and application performance monitoring, and capacity planning. Some key points included how Splunk has allowed ExxonMobil to gain visibility and insights across data that was previously siloed, and how their use of Splunk has evolved from one-dimensional searches to multi-dimensional pivoting and visualization. Razi also shared best practices like starting with simple questions and gradually building complexity, as well as methods for policing Splunk usage within the organization.
What is Splunk? At the end of this session you’ll have a high-level understanding of the pieces that make up the Splunk Platform, how it works, and how it fits in the landscape of Big Data. You’ll see practical examples that differentiate Splunk while demonstrating how to gain quick time to value.
Splunk Discovery: Warsaw 2018 - Legacy SIEM to Splunk, How to Conquer Migrati... (Splunk)
Presented at Splunk Discovery Warsaw 2018:
SIEM Replacement Methodology
Use Cases
Data Sources & Data Onboarding
Architecture
Third Party Integration
You Got This!
Travis Perkins: Building a 'Lean SOC' over 'Legacy SOC' (Splunk)
Travis Perkins has a complex hybrid IT infrastructure and is in the midst of migrating to the cloud. This session outlines the pitfalls of their initial infrastructure-heavy 'legacy SOC' approach built on a legacy SIEM, and the success they gained when they moved to a cloud-based, data-driven 'lean SOC'.
The document discusses the experience of migrating from an old SIEM to Splunk Enterprise Security (ES). Key points include:
- The old SIEM was difficult to maintain, slow, and lacked community support. Splunk provided better performance and capabilities.
- Logs were migrated to Splunk one source at a time after normalization. Analysts found Splunk easier to use.
- A proof of concept with ES showed its advanced correlations, dashboards, and incident management capabilities beyond core Splunk.
- ES provides templates for searches, alerts, and workflows that would have taken months to recreate. It is a more complete SIEM solution.
1) Cisco has been using Splunk Enterprise for over 7 years across many business units and teams, with daily indexing volume growing from 300 GB in 2010 to over 2 TB currently.
2) Cisco's Computer Security Incident Response Team (CSIRT) uses Splunk as their security information and event management (SIEM) platform, monitoring 350 TB of stored data with 60 users globally.
3) The presentation discusses how Cisco and some of its customers have successfully deployed Splunk on Cisco Unified Computing System (UCS) servers to scale their Splunk environments and gain benefits of simplified and repeatable deployments.
This document discusses how Splunk has helped the University of Maryland improve security visibility and incident response. It provides an overview of the speaker and UMD, challenges they previously faced with scattered logs and limited visibility, and how Splunk has provided faster search capabilities and the ability to correlate data from multiple sources. Use cases described how Splunk has helped with real-world incident investigations, security alerts and threat response, breach detection, and compliance reporting. Best practices and lessons learned are also shared.
This document provides an introduction to big data, including definitions and key characteristics. It discusses how big data is defined as extremely large and complex datasets that cannot be managed by traditional systems due to issues of volume, velocity, and variety. It outlines three key characteristics of big data: volume (scale), variety (complexity), and velocity (speed). Examples are given of different types and sources of big data. The document also introduces cloud computing and how it relates to big data management and processing. Finally, it provides an overview of topics to be covered, including frameworks, modeling, warehousing, ETL, and specific analytic techniques.
Transformative experience of implementing a next-generation library system - ... (CONUL Conference)
The document summarizes the library's transformation from an outdated legacy system to a new next-generation library system. Key points:
- The legacy system implemented in 1999 was designed for print and had become overly complex with various add-on systems.
- The library saw an opportunity to transform its operations by implementing a single system for managing all resources with maximum automation and integration.
- A lengthy process involved 10 months of extensive requirements gathering, a 12-month procurement, and a 10-month implementation project to migrate data and configure new workflows.
- The project aimed to streamline processes like acquisitions and e-resources through increased automation and optimized workflows.
- Data cleanup took much longer than expected,
This document discusses achieving content governance across collaborative and social business platforms. It begins by introducing content governance and the challenges of governing multiple target repositories, including traditional ECM, email, file shares, SharePoint, cloud ECM, and cloud file sharing. It then discusses different governance approaches like repository-based, content type-based, and lifecycle-based governance. It proposes a hybrid lifecycle model and "policy hub service" to help achieve governance across platforms in both on-premise and cloud environments. It concludes by questioning whether unified governance of SharePoint, email, and file shares provides full governance and how to best approach hybrid governance.
This document provides an overview of a Splunk fundamentals training hosted by Global Technology Resources, Inc. The training covers Splunk architecture, data collection, using Splunk for investigations and discovery, automation with reports, alerts and dashboards, and Splunk apps. Hands-on labs are included to allow attendees to explore the Splunk interface, conduct searches, and create a simple dashboard. Global Technology Resources, Inc. is a solutions-oriented consulting firm with extensive experience and credentials in Splunk.
SharePoint Saturday Helsinki 2019 - Collaboration Governance and Adoption Bes...Jasper Oosterveld
Our world has changed rapidly due to easy access to technology in our personal lives. For each aspect of our lives, an app or service is available. This new world, ruled by technology, has had a huge impact on organizations worldwide. This is where Microsoft steps in, providing the Modern Workplace with Microsoft 365: a workplace that makes our work lives easier, more fun and more efficient. Adoption and governance are at the foundation of a successful roll-out and a high-quality Modern Workplace. Jasper Oosterveld, Microsoft MVP & Modern Workplace Consultant, is going to share his best practices around adoption and governance. This session focuses on collaboration!
5 Tips to Optimize SharePoint While Preparing for HybridAdam Levithan
For organizations planning to migrate to a hybrid deployment of their SharePoint and Office 365 infrastructure, optimizing their current SharePoint is a crucial step in reducing the amount of work required for a successful migration, increasing end-user performance, and decreasing the risk of an unsuccessful migration.
Join Metalogix SharePoint expert Adam Levithan on March 17, 2016 for 5 Tips to Optimize SharePoint While Preparing for a Hybrid Deployment, a comprehensive live webinar where he unveils the top five optimizations that organizations need to consider before they plan to move to a SharePoint 2013 or SharePoint 2016 hybrid deployment.
Key takeaways
These optimizations will help SharePoint admins and IT professionals:
Provide the best end-user experience
Gain early warnings as performance issues develop
Obtain better insight into the interdependency between SharePoint infrastructure and applications
With the upcoming release of SharePoint 2016, hybrid deployments are quickly becoming the new standard for SharePoint deployments. Adam has helped several companies migrate successfully to hybrid deployments and will be happy to share his insights, experience and solutions with attendees.
The document provides information about an IT services company called Coalesce Technologies. It discusses Coalesce's services, commitment to client satisfaction, growing network, and customized solutions. It also describes the library management system project, including the problems with existing systems, proposed new system features, and UML diagrams for modeling the system. Key aspects of the proposed system include automating transactions, providing a simple GUI, efficient database updating, and restricting administrative access for security.
The document discusses 3 common mistakes made when building scalable SharePoint applications:
1. Conflating performance with scalability and not conducting proper load testing. Performance concerns a single user, while scalability concerns a growing number of users. Load testing should be part of the development process.
2. Using SharePoint lists as an operational database, which is not a best practice. Lists are meant for abstraction, not for large transactional databases. Consider a custom SQL database instead.
3. Not using CAML (Collaborative Application Markup Language) queries to fetch list items efficiently, which can impact performance and scalability. The document recommends avoiding these mistakes by designing for future growth and volume from the start.
Accelerating SDLC for Large Public Sector Enterprise ApplicationsSplunk
This document discusses how big data analytics tools like Splunk can be used to accelerate the software development lifecycle for large public sector applications. It provides examples of how Splunk was used to improve productivity by enabling immediate log access across many servers and files. Splunk also created real-time performance dashboards to help identify root causes of issues. Additional analytics revealed insights like peak usage times and patterns, user behaviors on forms, and browser/device details. The summary concludes that these tools can improve IT and business while providing lessons on proper Splunk setup and logging the right application data.
CHIME LEAD New York 2014 "Case Studies from the Field: Putting Cyber Security Strategies into Action"
Learn from those in the trenches who have deployed effective cyber strategies in their organizations, foiled attacks and managed breach situations. Learn approaches for success and pitfalls to avoid by exploring the experience of others with deployment and management of cyber security strategies and plans.
Learning Objectives:
Identify successes, challenges and lessons learned with implementation of cyber strategies
Identify strategies for gaining C-suite support and ways cyber security can be integrated into the organization's culture and work processes.
Identify best practices for anticipating new and emerging threats and ways to maintain a proactive rather than reactive position
Identify approaches for breach preparation and breach management
Featured Speakers:
Neal Ganguly, MBA, FCHIME, FHIMSS, CHCIO
VP & CIO
JFK Health System
Miroslav Belote
Director of IT – Infrastructure and Information Security Officer
JFK Health System
Nassar Nizami
CISO
Yale-New Haven Health System
This document discusses application decommissioning and using InfoArchive to archive structured and unstructured data from legacy systems. It outlines factors driving the need for archiving, such as compliance, mobility, and governance. InfoArchive can archive data from applications being decommissioned or consolidated, yielding cost savings, risk reduction, and compliance. It supports archiving databases, content, and emails, and can integrate with DPAD storage platforms.
The document summarizes the activities of EDINA and the Data Library at the University of Edinburgh related to research data management. It describes EDINA as a national data center that provides online resources for education and research. The Data Library assists university researchers with discovering, accessing, using and managing research datasets. It also outlines several projects the Data Library is involved in to develop training, policies and services to support best practices in research data management according to funder requirements. This includes developing an institutional research data management roadmap to help the university meet funder expectations by 2015.
The document provides information about research data management (RDM) services and initiatives at the University of Edinburgh. It describes the EDINA National Data Centre and Data Library, which provide online resources and data management support. It outlines several JISC-funded RDM projects undertaken by the Data Library, including building the Edinburgh DataShare repository. It also summarizes the Research Data MANTRA training module and the university's RDM roadmap, which lays out a multi-phase plan to improve RDM support and services by 2015 in line with funder requirements.
Slides for presentation entitled 'Measuring impact' given during Institutional Web Management Workshop (IWMW) in June 2012 at the University of Edinburgh.
Webinar for charities and beginners thinking about moving to Office 365. We briefly look at email, calendar, SharePoint and other features, charity pricing and take questions from the audience.
The deliverable from a consulting engagement for a hospital. The hospital needed to define the requirements for a single EIM platform. This two-day clinic allowed them to identify key issues and requirements, reducing the time from idea to RFP while ensuring that the process stayed focused on hospital goals rather than just technical ease and the fastest implementation.
The document discusses how non-profit organizations can improve productivity and reduce costs by moving workloads and applications to the cloud. It provides an overview of public, private and hybrid cloud models, examples of productivity benefits from cloud computing, and a checklist and case study for non-profits considering a transition to cloud services.
(ATS6-APP05) Deploying Contur ELN to large organizationsBIOVIA
Introducing new IT systems that affect many users can be challenging, particularly for large organizations. This session will describe how Contur ELN has been deployed to 1000+ users in different fields of R&D. Case studies will be used to illustrate strategies and practical considerations.
How ELNs can galvanise research data management. rmacneil88
Talk given by Rory Macneil of Research Space on ELNs and Research Data Management at the Dealing with Data Conference, held on 26 August 2014 as part of the launch of the University of Edinburgh's Research Data Management programme.
Similar to University of Alberta Customer Presentation (20)
.conf Go 2023 - Raiffeisen Bank InternationalSplunk
This document discusses standardizing security operations procedures (SOPs) to increase efficiency and automation. It recommends storing SOPs in a code repository for versioning and referencing them in workbooks, which are lists of standard tasks to follow during investigations. The goal is for investigation playbooks in the security orchestration, automation and response (SOAR) tool to perform the predefined investigation steps from the workbooks, automating incident response. This lets analysts automate faster, without wasted effort, by relying on standard, vendor-agnostic procedures.
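The pattern above (versioned SOPs in a repository, referenced by ID from workbooks, expanded into steps by a playbook) can be modeled in a few lines. Everything here is a hypothetical illustration: the SOP IDs, versions, and task names are invented and do not come from the Raiffeisen presentation.

```python
# A versioned SOP "repository": in practice this would be files in a git repo,
# but a dict captures the shape of the idea.
SOP_REPO = {
    "sop-enrich-ip":  {"version": "1.2", "steps": ["query threat intel", "record verdict"]},
    "sop-check-user": {"version": "2.0", "steps": ["pull auth history", "flag anomalies"]},
}

# A workbook is just an ordered list of SOP references for one incident type.
PHISHING_WORKBOOK = ["sop-enrich-ip", "sop-check-user"]

def expand_workbook(workbook, repo):
    """Resolve each SOP reference into its current versioned task list,
    producing the concrete steps a SOAR playbook would execute."""
    plan = []
    for sop_id in workbook:
        sop = repo[sop_id]
        for step in sop["steps"]:
            plan.append(f"{sop_id}@{sop['version']}: {step}")
    return plan

for task in expand_workbook(PHISHING_WORKBOOK, SOP_REPO):
    print(task)
```

Because workbooks hold only references, updating an SOP in the repository changes every investigation that uses it, which is what makes the procedures vendor-agnostic and easy to keep current.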
.conf Go 2023 - Das passende Rezept für die digitale (Security) Revolution zu...Splunk
.conf Go 2023 presentation:
"Das passende Rezept für die digitale (Security) Revolution zur Telematik Infrastruktur 2.0 im Gesundheitswesen?"
Speaker: Stefan Stein -
Team Lead CERT | gematik GmbH, M.Eng. IT Security & Forensics,
doctoral student at TH Brandenburg & Universität Dresden
The document describes Cellnex's transition from a Security Operations Center (SOC) to a Computer Security Incident Response Team (CSIRT). The transition was driven by Cellnex's growth and the need to automate processes and tasks to improve efficiency. Cellnex implemented Splunk SIEM and SOAR to automate the creation, remediation and closure of incidents. This allowed staff to concentrate on strategic tasks and improved KPIs such as resolution times and emails analyzed.
.conf Go 2023 - El camino hacia la ciberseguridad (ABANCA)Splunk
This document summarizes ABANCA's journey toward cybersecurity with Splunk, from bringing on dedicated roles in 2016 to becoming a monitoring and response center with more than 1TB of daily ingest and 350 use cases aligned with MITRE ATT&CK. It also describes mistakes made and the solutions implemented, such as normalizing sources and training operators, as well as its current pillars: automation, visibility and alignment with MITRE ATT&CK. Finally, it notes challenges
Splunk - BMW connects business and IT with data driven operations SRE and O11ySplunk
BMW is defining the next level of mobility - digital interactions and technology are the backbone to continued success with its customers. Discover how an IT team is tackling the journey of business transformation at scale whilst maintaining (and showing the importance of) business and IT service availability. Learn how BMW introduced frameworks to connect business and IT, using real-time data to mitigate customer impact, as Michael and Mark share their experience in building operations for a resilient future.
The document is a presentation on cyber security trends and Splunk security products from Matthias Maier, Product Marketing Director for Security at Splunk. The presentation covers trends in security operations like the evolution of SOCs, new security roles, and data-centric security approaches. It also provides updates on Splunk's security portfolio including recognition as a leader in SIEM by Gartner and growth in the SIEM market. Maier highlights some breakout sessions from the conference on topics like asset defense, machine learning, and building detections.
Data foundations building success, at city scale – Imperial College LondonSplunk
Universities have more in common with modern cities than traditional places of learning. This mini city needs to empower its citizens to thrive and achieve their ambitions. Operationalising data is key to building critical services; from understanding complex IT estates for smarter decision-making to robust security and a more reliable, resilient student experience. Juan will share his experience in building data foundations for a resilient future whilst enabling digital transformation at Imperial College London.
Splunk: How Vodafone established Operational Analytics in a Hybrid Environmen...Splunk
Learn how Vodafone has provided end-to-end visibility across services by building an Operational Analytics Platform. In this session, you will hear how Stefan and his team manage legacy, on premise, hybrid and public cloud services, and how they are providing a platform for complex triage and debugging to tackle use cases across Vodafone’s extensive ecosystem.
.italo operates an Essential Service by connecting more than 100 million people annually across Italy with its super fast and secure railway. And CISO Enrico Maresca has been on a whirlwind journey of his own.
Formerly a Cyber Security Engineer, Enrico started at .italo as an IT Security Manager. One year later, he was promoted to CISO and tasked with building out – and significantly increasing the maturity level – of the SOC. The result was a huge step forward for .italo.
So how did he successfully achieve this ambitious ask? Join Enrico as he reveals the key insights and lessons learned in his SOC journey, including:
Top challenges faced in improving security posture
Key KPIs implemented in order to measure success
Strategies and approaches applied in the SOC
How MITRE ATT&CK and Splunk Enterprise Security were utilised
Next steps in their maturity journey ahead
This document summarizes a presentation about observability using Splunk. It includes an agenda introducing observability and why Splunk for observability. It discusses the need for modernization initiatives in companies and the thousands of changes they require. It shows how Splunk provides end-to-end visibility across metrics, traces and logs to detect, troubleshoot and optimize systems. It shares a customer case study of Accenture using Splunk observability in their hybrid cloud environment. Finally, it concludes that observability with Splunk can drive results like reduced downtime and faster innovation.
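The core of "end-to-end visibility across metrics, traces and logs" is lining up different telemetry types on a common time axis so a symptom in one can be explained by another. The toy sketch below shows the idea only; the latency figures, log messages and threshold are invented for illustration, not taken from the presentation or any Splunk product.

```python
# Hypothetical telemetry: p95 latency (ms) per minute, and error logs
# tagged with the minute they occurred in.
latency_ms = {0: 120, 1: 115, 2: 480, 3: 130}
error_logs = [
    {"minute": 2, "msg": "upstream timeout"},
    {"minute": 2, "msg": "retry exhausted"},
    {"minute": 3, "msg": "cache miss"},
]

def correlate(metrics, logs, threshold=300):
    """Return the minutes where latency breached the threshold,
    paired with the log messages from the same time bucket."""
    by_minute = {}
    for entry in logs:
        by_minute.setdefault(entry["minute"], []).append(entry["msg"])
    return {m: by_minute.get(m, []) for m, value in metrics.items()
            if value > threshold}

print(correlate(latency_ms, error_logs))
# {2: ['upstream timeout', 'retry exhausted']}
```

An observability platform does this continuously and at scale, across traces as well; the bucketing-and-join step is the same idea regardless of tooling.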
This document contains slides from a Splunk presentation covering the following topics:
- Updated Splunk logo and information about meetings in Zurich and sales engineering leads
- Ideas for confused or concerned human figures in design concepts
- Three buckets of challenges around websites slowing, apps being down, and supply chain issues
- Accelerating mean time to detect, identify, respond and resolve through cyber resilience with Splunk
- Unifying security, IT and DevOps teams
- Splunk's technology vision focusing on customer experience, hybrid/edge, unleashing data lakes, and ubiquitous machine learning
- Gaining operational resilience through correlating infrastructure, security, application and user data with business outcomes
This document summarizes a presentation about Splunk's platform. It discusses Splunk's mission of helping customers create value faster with insights from their data. It provides statistics on Splunk's daily ingest and users. It highlights examples of how Splunk has helped customers in areas like internet messaging and convergent services. It also discusses upcoming challenges and new capabilities in Splunk like federated search, flexible indexing, ingest actions, improved data onboarding and management, and increased platform resilience and security.
The document appears to be a presentation from Splunk on security topics. It includes sections on cyber security resilience, the data-centric modern SOC, application monitoring at scale, threat modeling, security monitoring journeys, self-service Splunk infrastructure, the top 3 CISO priorities of risk based alerting, use case development, a security content repository, security PVP (posture, vision, and planning) and maturity assessment, and concludes with an overview of how Splunk can provide end-to-end visibility across an organization.
2. 2
• At U of A since 2007
• Responsible for 10-person team managing applications and databases university-wide
• Splunk user since 2013
• I’ve eaten BBQ chicken intestines on a stick. Yummy.
• splunk> take the sh out of IT
3. 3
The University of Alberta
• Public research university based in Edmonton, founded in 1908
• 39,000+ students and 18,000 employees
• 5 campuses and 18 faculties
• One of the top 100 universities worldwide
4. 4
IT at the University of Alberta
Central IT group for authentication, wireless and core services
Independent IT groups for most faculties and departments
University-wide initiative to consolidate more of IT
Need to standardize IT operations and tame diverse technology stacks
5. 5
Application Hosting Objectives
• Centralize more of IT
• Build and manage shared environments
• Develop custom services as needed
• Roll out/upgrade applications
• Investigate performance problems
[Diagram: systems hosted for IT — libraries, LMS, public website + CMS, ticketing, billing systems, research group servers, other applications and databases]
6. 6
Challenges after Restructuring IT
• More interdependencies among teams
• Massive volume of data, housed in silos
• “Running blind” – no understanding of the data
• Time-consuming to gather data for incidents
7. 7
Splunk Timeline
Sept. 2013
• Pilot deployed
• Splunk as syslog target
• Log aggregation test; no need for backup
March 2014
• Data loss concerns from restarting Splunk
• Management relying on Splunk reports
• Splunk not in production
Sept. 2014
• Management notification of syslog data loss
• Incidents escalated
• Splunk in production?
April 2015
• Funding to rebuild Splunk environment
• New hardware, clustering with dedicated storage
• 400 data sources
• 133 sourcetypes
8. 8
Splunk at the University of Alberta
Infrastructure Applications (mail, authentication)
Networking and Security (switches, IPS)
Application Hosting (apps, databases)
9. 9
Example: Troubleshooting Authentication Systems
Before
• 12GB/day, 20 machines
• No aggregation
• Reactive issue response based on user feedback
• Manual investigations
• Delay in getting data
After
• Centralized data
• ½ hour to troubleshoot
• Proactive alerts for issues
• Easy access to infrastructure data
• Real-time reporting
14. 14
Splunk Deployment Takeaways
Successes
• Visibility cutting through team boundaries
• More advanced initial incident investigation
• Openness – signed standard IT agreement for access to Splunk data
• Management loves reports
• Defusing situations with rapid access to facts
Challenges
• Accepting syslog data directly
• Log standardization
• Figuring out what to look at in the logs to understand “good” system behavior
15. 15
Aha! Moments
Transactions
• End-to-end monitoring of 4M+ email messages per day (greylisting, spam filtering, Google)
• Used transactions to combine logs across systems into a single, message-centric log
• Ability to easily search for anomalies
Generic Alerts
• Created alert to catch errors across systems in real time
• Used existing alert and removed host specification to create the generic alert
• Catches errors that were not in Splunk at the moment the alert was created
10-second Query
• 10-second window = ~35,000 events
• Statistics to rank likely events triggering issues
• New Splunk window to analyze unusual messages
• Ability to examine a small slice of time in detail while running statistics over a longer period of time
16. 16
“Splunk allows us to erase these lines and any analyst can see all the data from anywhere and investigate a problem from end to end.”
Good morning everyone and welcome to SplunkLive Calgary
Thanks so much for having me at your SplunkLive today
Graduated from CS at the University of Alberta in 2002, worked on various research projects and did some contract development for a few years. Joined the university again in 2007.
University of Alberta – friendly neighbour about 300 km that way (for the main campus)
18 faculties includes some big ones like Science, Medicine and Engineering
The university’s ranking changes every year, and different rankings get published. This seemed like a safe enough statement to make without going into half a page of small print.
Our starting point about 4 years ago.
Over the last 4 years or so we’ve been consolidating the IT of 300+ individual groups into a central department. Originally this was envisioned as a 10-year plan, so we still have a ways to go.
At this point we are supporting a lot of different institutional needs, lots of different technologies and life is very exciting.
Application Hosting. We manage the applications and databases for a lot of clients across campus. There are some big applications that are managed by others (LMS, PeopleSoft), but we make up for it with the number of different applications we support. I’ve stopped being amazed at the number of needs an institution of this size has. We have an application for printing labels to put on file folders, tracking project time, billing and invoicing, databases supporting libraries, ticketing systems, wikis, departmental pages. Typically there is a piece of software behind a lot of business processes, and all of it needs to be monitored, patched, and upgraded.
As our consolidation effort continues we will be using Splunk to look into how an application is used in order to determine how it could be consolidated with other applications of similar function. There is an amazing amount of information about usage patterns, what gets accessed and how often and who does the accessing.
We’ve re-organized ourselves along functional lines: OS support, networking, application hosting, etc. What that means is that some investigations spanning multiple teams become very time-consuming and expensive (two people looking at logs). Some of that is unavoidable, and even desirable, but for a large number of errors we’re just missing that one piece of crucial information that “solves” the problem. That could be a log line from the VM host indicating a physical hardware problem, a log from the authentication system detailing why a connection was rejected, and so on.
Splunk allows us access to that information. Here is where I need to raise a big warning flag: having access to the logs does not mean you can understand the logs. There are some errors where the team running a system is required to correctly interpret the logs, but in general having more eyes is a good thing. Some of the expertise can be developed over time, some through developing dashboards and applications within Splunk.
I get a kick out of these timelines, so I wanted to add our own. I’ve seen a few at Splunk conferences and they typically go something like this: an organization gets Splunk in a limited capacity, something happens (systems get hacked, a phishing attack, etc.) and the next year they are running 10x the license.
We’ve had a bit of a different experience, where Splunk’s importance snuck up on us. In September 2013 we deployed Splunk as a pilot. Some of the conversations at that point were along the lines of “backups are not important, as all the systems keep their own logs; Splunk is just a view into the logs, there is little to no new information contained. Let’s just send logs and see what we can make out of it.”
By March we were becoming concerned every time we needed to restart Splunk, since that meant data loss. Our installation was configured as a syslog target for networking devices and IMS logs so if Splunk was not there to receive the events, they went nowhere.
By September (1 year later) we needed to notify management of Splunk outages because of the data loss. In April/May this year we rebuilt the environment in a clustered configuration with dedicated syslog servers.
I don’t know if I can point to any one moment where Splunk suddenly became a critical production environment, but it definitely is one now.
What follows is a gross generalization. I obviously understand the challenges within my team the best, so take this with a grain of salt.
Although other groups do use and send logs to Splunk, the three main groups are Infrastructure Applications, Networking and Application Hosting. It’s interesting that we all have different needs and different use cases. Infrastructure has a small number of highly critical applications that they need to understand end to end (mail, central authentication, etc).
Networking has a few data sources that are fairly similar (switches, firewalls, etc). Security looks at a few different data sources like VPN and IPS as well as authentication logs from a number of systems.
Application Hosting currently has relatively few sources, but a lot of them are different and unique. There are probably five different ways Apache is configured to log access requests, and we run every major database type and version: PostgreSQL, MySQL, MSSQL, Oracle. In addition, we support the applications themselves or interact with software vendors on behalf of clients. On my last monthly report there were 374 different relational databases in environments supported by my team.
An example from our Infrastructure Applications team. Our authentication system (approximately 180,000 accounts) generates around 12 GB of logs per day. Logs were stored on each individual node of a cluster in a text file. Trying to find logs related to a specific login ID required signing onto the 20 or so systems and using “grep” to identify individual log lines. That was not a quick task, and it generated IO load on the servers. Doing anything more advanced than that was nearly impossible.
After: we have summary indexes and reporting indexes on this data to quickly answer specific questions we know will be asked. We can correlate with data from other systems and alert in real time on specific events. Users are no longer our main issue-detection method.
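As a rough sketch, a scheduled search populating a summary index for this kind of data could look like the following SPL; the index, sourcetype, and result field names are illustrative assumptions, not our actual configuration:

```spl
index=auth sourcetype=cas
| bin _time span=1h
| stats count(eval(result="success")) AS successes, count(eval(result="failure")) AS failures BY _time, host
| collect index=summary_auth
```

Reports can then run against the small summary index instead of scanning the raw 12 GB/day.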
This is something we’ve used to great effect in my team for a few performance investigations so far. We break down the web traffic by percentiles and return size. It allows us to pinpoint problems (in some cases) as well as provide an instant report to the client on how their application is really performing. This query is complex enough to be useful, yet simple enough that I can explain it to non-technical clients.
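The general shape of such a percentile breakdown in SPL might be the following sketch; the index and sourcetype names are placeholders, and it assumes a response_time field has been extracted from the access logs (Apache does not log one by default):

```spl
index=web sourcetype=access_combined
| timechart span=15m perc50(response_time), perc90(response_time), perc99(response_time), avg(bytes) AS avg_return_size
```

Plotting the p50/p90/p99 lines together makes a latency problem visible at a glance even to a non-technical audience.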
This is a new initiative coming out of my team. It is called the First Responders App (even though it is currently a dashboard, I know it will end up as an app eventually).
First Responders is meant as the place to go at the start of an incident. It’s meant to put a lot of infrastructure information at an analyst’s fingertips, and it spans information from all of the operational teams. It allows an analyst to verify backup status, check logs, check tickets and the change calendar, check the monitoring system, see who last logged into the system, etc. As we’re still rolling it out, we do not yet have all the information we might wish for, but the reception so far has been very positive.
Also part of First Responders is a holistic look at the logs. Not only do we look at the logs from the system, we also look at logs about the system. If our IPS detects activity against the host, that will appear in the window at top left. The same happens if our authentication system suddenly starts throwing a lot of errors or messages about the host. Sparklines are great for very quickly identifying patterns and seeing if something is unusual.
Lastly, we have a query I’ve been playing around with for some months. This query tries to generate a statistical baseline of events from a system, and then compares the last full hour against that baseline to highlight issues. In this screenshot I relaxed my alerting thresholds, which is why the z-scores are so low, but it does illustrate what the output looks like. I’m working on a next generation of this type of query that will log all deviations from the entire environment every hour. Wish me luck.
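A minimal sketch of that kind of baseline comparison in SPL, with assumed index and sourcetype names: it counts events per hour over 30 days, derives a mean and standard deviation, and computes a z-score for each hour, which can then be filtered down to the last full hour:

```spl
index=app sourcetype=app_logs earliest=-30d@h latest=@h
| bin _time span=1h
| stats count AS events BY _time
| eventstats avg(events) AS mean, stdev(events) AS sd
| eval zscore=round((events-mean)/sd, 2)
| where _time>=relative_time(now(), "-1h@h")
```

An alert can then fire whenever the absolute z-score crosses a chosen threshold.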
This was probably our management’s first introduction to Splunk: monthly reports for critical systems. We are still tweaking what goes on the reports, and will continue to do so. This particular dashboard shows our Adobe Connect usage, including a histogram of meeting sizes (which did include classes in the 240–245 participant range) as well as an Apdex score. Eventually I’d like to standardize all application monthly reports and have them sent automatically to each client/department.
Things that worked and things that did not work so well. The main successes:
You can really defuse a situation by being able to rapidly provide facts. If you’re able to provide a list of users who accessed a specific file in the first 15 minutes of a breach investigation, that really brings down the stress level of everyone involved and situation rapidly de-escalates. Similarly for performance investigations.
Everyone in the system can see all logs; we try to keep the system as open as possible to all of IT.
Challenges:
There are so many different ways of configuring logging. That makes getting consistent reports a challenge.
Knowing what is “normal”. Having the ability to rapidly generate graphs of values and have data that goes back a few months (at least) is highly beneficial.
Syslog. Where possible, use universal forwarders; where not possible, have a syslog collector.
Some of the “AHA moments”
Using transactions to create message-centric logs. It was nothing short of magical, especially compared to the good old “grep” command across multiple systems.
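A sketch of the idea using SPL’s transaction command; the sourcetypes and the message_id correlation field are assumptions for illustration, not our actual field names:

```spl
index=mail sourcetype=greylist OR sourcetype=spamfilter OR sourcetype=postfix
| transaction message_id maxspan=30m
```

Each result is then one message’s journey across all of the mail systems, searchable as a single unit.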
Generic alerts. The ability to create alerts that work for systems that are not in Splunk yet. The ability to look at the entire environment as a single event stream is incredibly powerful.
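The transformation is roughly: take an alert search scoped to one host and drop the host= constraint so it matches the whole event stream. In this hypothetical sketch, the error terms and the excluded noisy sourcetype are placeholders:

```spl
index=* ("ERROR" OR "CRITICAL" OR "kernel panic") NOT sourcetype=known_noisy_source
| stats count AS errors, values(sourcetype) AS sources BY host
| sort - errors
```

Because there is no host list, the alert also fires for systems that start sending data after the alert was created.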
An extension of the previous. While investigating a hiccup of some sort, I performed a time-constrained query across the entire environment. I wanted to see whether the error was limited to that system, or whether it appeared anywhere else. Using two windows, I was able to run simple queries on specific messages and determine “normal event” or “not”. It was an amazing, and humbling, look at our environment. So many things happened within that 10-second window.
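A hypothetical version of that time-constrained query; the timestamps are placeholders, and punct is Splunk’s built-in punctuation-pattern field, which is handy for ranking the shapes of messages within a narrow window:

```spl
index=* earliest="05/15/2015:10:23:00" latest="05/15/2015:10:23:10"
| stats count BY sourcetype, punct
| sort - count
```

The resulting ranking makes the handful of unusual message shapes stand out from the ~35,000 routine events in the window.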