This document discusses open source logging and metrics tools. It provides an introduction to customizing logs from common daemons and focuses on log aggregation, parsing, and search. It describes a demo setup using the ELK stack to aggregate and visualize logs and metrics from a Drupal site. The document discusses shipping logs with rsyslog and logstash, and parsing different log formats. It also covers monitoring performance with tools like Graphite and Grafana.
A talk about Open Source logging and monitoring tools, using the ELK stack (ElasticSearch, Logstash, Kibana) to aggregate logs, how to track metrics from systems and logs, and how Drupal.org uses the ELK stack to aggregate and process billions of logs a month.
Attack monitoring using ElasticSearch, Logstash and Kibana - Prajal Kulkarni
With the growing trend of big data, companies tend to rely on high-cost SIEM solutions. However, the introduction of open source, lightweight cluster management solutions like ElasticSearch has changed that, making it a highlight of the year. Similarly, log aggregation has been simplified by Logstash, with Kibana providing a visual view of complex data structures. This presentation caters exactly to this need: log analysis, intrusion detection, and data visualization in one powerful interface.
Elasticsearch, Logstash, Kibana. Cool search, analytics, data mining and more... - Oleksiy Panchenko
In the age of information and big data, the ability to quickly and easily find a needle in a haystack is extremely important. Elasticsearch is a distributed, scalable search engine that provides rich and flexible search capabilities. Social networks (Facebook, LinkedIn), media services (Netflix, SoundCloud), Q&A sites (StackOverflow, Quora, StackExchange) and even GitHub all find data for you using Elasticsearch. In conjunction with Logstash and Kibana, Elasticsearch becomes a powerful log engine that lets you process, store, analyze, search through, and visualize your logs.
Video: https://www.youtube.com/watch?v=GL7xC5kpb-c
Scripts for the Demo: https://github.com/opanchenko/morning-at-lohika-ELK
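As a sketch of the kind of search capability described above, a request in the Elasticsearch Query DSL might look like the following (the index name `weblogs-*` and the field names are assumptions for illustration, not from the talk):

```
GET /weblogs-*/_search
{
  "query": {
    "bool": {
      "must":   { "match": { "message": "error" } },
      "filter": { "range": { "@timestamp": { "gte": "now-1h" } } }
    }
  }
}
```

This combines full-text relevance scoring (`match`) with a non-scoring `filter` clause, which is the usual pattern when searching logs over a time window.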
'Scalable Logging and Analytics with LogStash' - Cloud Elements
Rich Viet, Principal Engineer at Cloud Elements presents 'Scalable Logging and Analytics with LogStash' at All Things API meetup in Denver, CO.
Learn more about scalable logging and analytics using LogStash. This will be an overview of logstash components, including getting started, indexing, storing and getting information from logs.
Logstash is a tool for managing events and logs. You can use it to collect logs, parse them, and store them for later use (like, for searching).
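As a minimal sketch of that collect/parse/store pipeline, a Logstash configuration might look like the following (the file path, index name, and Elasticsearch endpoint are assumptions, not from the talk):

```
input {
  file {
    path => "/var/log/nginx/access.log"   # log file to tail (assumed path)
    start_position => "beginning"
  }
}
filter {
  grok {
    # parse combined-format HTTP access logs into structured fields
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    # use the log's own timestamp rather than ingestion time
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]           # assumed local Elasticsearch
    index => "weblogs-%{+YYYY.MM.dd}"     # daily indices for retention
  }
}
```

The three stages mirror the description above: `input` collects, `filter` parses, and `output` stores for later search.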
Using Riak for Events storage and analysis at Booking.com - Damien Krotkine
At Booking.com, we have a constant flow of events coming from various applications and internal subsystems. This critical data needs to be stored for real-time, medium and long term analysis. Events are schema-less, making it difficult to use standard analysis tools. This presentation will explain how we built a storage and analysis solution based on Riak. The talk will cover: data aggregation and serialization, Riak configuration, solutions for lowering the network usage, and finally, how Riak's advanced features are used to perform real-time data crunching on the cluster nodes.
We're talking about serious log crunching and intelligence gathering with Elastic, Logstash, and Kibana.
ELK is an end-to-end stack for gathering structured and unstructured data from servers. It delivers insights in real time using the Kibana dashboard giving unprecedented horizontal visibility. The visualization and search tools will make your day-to-day hunting a breeze.
During this brief walkthrough of the setup, configuration, and use of the toolset, we will show you how to find the trees from the forest in today's modern cloud environments and beyond.
Safely Protect PostgreSQL Passwords - Tell Others to SCRAM - Jonathan Katz
PostgreSQL 10 introduced SCRAM (Salted Challenge Response Authentication Mechanism), defined in RFC 5802, as a way to securely authenticate passwords. The SCRAM algorithm lets a client and server validate a password, using a series of cryptographic methods, without ever sending the password to each other, whether in plaintext or hashed form.
By the end of this talk, you will understand how SCRAM works, how to ensure your PostgreSQL driver supports it, how to upgrade your passwords to SCRAM-SHA-256, and why you want to tell other PostgreSQL password mechanisms to SCRAM!
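A simplified Python sketch of the key derivation at the heart of SCRAM-SHA-256 (per RFC 5802/7677) shows why the password itself is never sent. Note this is illustrative only: `auth_message` here is a placeholder stand-in, and a real exchange also carries nonces, channel-binding data, and a server signature.

```python
import hashlib
import hmac
import os

def scram_sha256_proof(password: str, salt: bytes, iterations: int, auth_message: bytes):
    # SaltedPassword = PBKDF2-HMAC-SHA-256(password, salt, iterations)
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    stored_key = hashlib.sha256(client_key).digest()  # what the server stores
    client_sig = hmac.new(stored_key, auth_message, hashlib.sha256).digest()
    # ClientProof = ClientKey XOR ClientSignature -- sent in place of the password
    proof = bytes(a ^ b for a, b in zip(client_key, client_sig))
    return proof, stored_key, client_sig

def server_verify(proof: bytes, client_sig: bytes, stored_key: bytes) -> bool:
    # Server recovers ClientKey = ClientProof XOR ClientSignature, then checks
    # that its hash matches the StoredKey on file -- no password ever transits.
    recovered = bytes(a ^ b for a, b in zip(proof, client_sig))
    return hashlib.sha256(recovered).digest() == stored_key

salt, iters = os.urandom(16), 4096
auth_msg = b"client-first,server-first,client-final"  # simplified placeholder
proof, stored_key, sig = scram_sha256_proof("s3cret", salt, iters, auth_msg)
print(server_verify(proof, sig, stored_key))  # True: password validated, never sent
```

Because only `StoredKey` is persisted server-side, a stolen credential table does not directly yield the password or a replayable proof.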
Talk given by Thomas Widhalm at Icinga Camp San Francisco 2016 - https://www.icinga.org/community/events/archive/2016-archive/icinga-camp-san-francisco/
Presentation given at Mongo SV conference in Mountain View on December 3, 2010. Covers reasons for logging to MongoDB, logging library basics and library options for Java, Python, Ruby, PHP and C#. Updated 1/1/2012 with more info on logging in Ruby and tailable cursors.
Managing Your Security Logs with Elasticsearch - Vic Hargrave
The ELK stack (Elasticsearch-Logstash-Kibana) provides a cost-effective alternative to commercial SIEMs for ingesting and managing OSSEC alert logs. This presentation will show you how to construct a low-cost SIEM based on ELK that rivals the capabilities of commercial SIEMs.
Apache Solr on Hadoop is enabling organizations to collect, process and search larger, more varied data. Apache Spark is making a large impact across the industry, changing the way we think about batch processing and replacing MapReduce in many cases. But how can production users easily migrate ingestion of HDFS data into Solr from MapReduce to Spark? How can they update and delete existing documents in Solr at scale? And how can they easily build flexible data ingestion pipelines? Cloudera Search Software Engineer Wolfgang Hoschek will present an architecture and solution to this problem. How were Apache Solr, Spark, Crunch, and Morphlines integrated to allow for scalable and flexible ingestion of HDFS data into Solr? What are the solved problems and what's still to come? Join us for an exciting discussion on this new technology.
Slides for a college course based on "Incident Response & Computer Forensics, Third Edition" by Jason Luttgens, Matthew Pepe, and Kevin Mandia, at City College San Francisco.
Website: https://samsclass.info/152/152_F18.shtml
Slides for a college course based on "Incident Response & Computer Forensics, Third Edition" by Jason Luttgens, Matthew Pepe, and Kevin Mandia.
Teacher: Sam Bowne
Twitter: @sambowne
Website: https://samsclass.info/121/121_F16.shtml
Docker Logging and analysing with Elastic Stack - Jakub Hajek
Collecting logs from an entirely stateless environment is one of the challenging parts of the application lifecycle. Correlating business logs with operating system metrics to provide insights is crucial for the entire organization. What aspects should be considered while you design your logging solution?
Docker Logging and analysing with Elastic Stack - Jakub Hajek (PROIDEA)
Collecting logs from an entirely stateless environment is one of the challenging parts of the application lifecycle. Correlating business logs with operating system metrics to provide insights is crucial for the entire organization. This technical presentation shows how to manage a large amount of data in a typical microservices environment.
Apache Big Data EU 2016: Building Streaming Applications with Apache Apex - Apache Apex
Stream processing applications built on Apache Apex run on Hadoop clusters and typically power analytics use cases where availability, flexible scaling, high throughput, low latency and correctness are essential. These applications consume data from a variety of sources, including streaming sources like Apache Kafka, Kinesis or JMS, file based sources or databases. Processing results often need to be stored in external systems (sinks) for downstream consumers (pub-sub messaging, real-time visualization, Hive and other SQL databases etc.). Apex has the Malhar library with a wide range of connectors and other operators that are readily available to build applications. We will cover key characteristics like partitioning and processing guarantees, generic building blocks for new operators (write-ahead-log, incremental state saving, windowing etc.) and APIs for application specification.
Nagios Conference 2014 - Rob Hassing - How To Maintain Over 20 Monitoring App... - Nagios
Rob Hassing's presentation on How To Maintain Over 20 Monitoring Appliances.
The presentation was given during the Nagios World Conference North America held Oct 13th - Oct 16th, 2014 in Saint Paul, MN. For more information on the conference (including photos and videos), visit: http://go.nagios.com/conference
After being tasked to find a "proper solution for deployments and servers configuration management", I came up with a presentation to show what configuration management is, why our current tool (SaltStack) just needed more love and a proposal for a separate "proper deployment tool", which is not covered in this presentation.
Adding Support for Networking and Web Technologies to an Embedded System - John Efstathiades
These are the slides for a presentation we gave at Device Developer Conference 2014 in the UK. The presentation discusses the work done, experiences, and lessons learnt from adding an open source TCP/IP network stack and web server to an existing industrial control system running on an ARM Cortex M3-based processor from TI.
The presentation covers the following:
· Integrating the network stack into the existing software base
· Configuring and using the network stack and web server
· Adding support for HTTP basic authentication to restrict user access
· Using HTTP to remotely access the target system and retrieve operational data
· Debugging hints and tips
· Pitfalls to avoid and other lessons learnt
NGINX is used by more than 130 million websites as a lightweight way to serve web content. Use it to decrease costs, improve performance and open up bottlenecks in web and application server environments without a major architectural overhaul. In this talk, we'll cover the three most basic use cases of static content delivery, application load balancing, and web proxying with caching; and touch on the NGINX maintained Docker container.
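As a sketch of those three use cases, a minimal nginx configuration might look like the following (server addresses, cache zone name, and paths are assumptions for illustration):

```
events {}

http {
    # Cache storage for the proxying use case (path and zone name assumed)
    proxy_cache_path /var/cache/nginx keys_zone=appcache:10m;

    # Application load balancing: round-robin across two backends
    upstream app_backend {
        server 10.0.0.11:8080;
        server 10.0.0.12:8080;
    }

    server {
        listen 80;

        # 1. Static content delivery straight from disk
        location /static/ {
            root /var/www;
        }

        # 2 & 3. Web proxying with caching, in front of the balanced backends
        location / {
            proxy_pass http://app_backend;
            proxy_cache appcache;
            proxy_cache_valid 200 1m;
        }
    }
}
```

Requests under `/static/` never touch an application server, while everything else is balanced across the upstream pool with short-lived caching of successful responses.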
I will be giving a brief overview of the history of NGINX along with an overview of the features and functionality in the project as it stands today. I will give some real-world examples of how NGINX can be used to solve problems and eliminate complexity within infrastructure. I will then dive into the future of the modern web and how NGINX is monitoring and leveraging industry changes to enhance the product for individuals and companies in the industry.
Best And Worst Practices Deploying IBM Connections - LetsConnect
Depending on deployment size, operating system and security considerations you have different options to configure IBM Connections. This session will show examples from multiple customer deployments of IBM Connections. I will describe things I found and how you can optimize your systems. Main topics include: simple (documented) tasks that should be applied, missing documentation, automated user synchronization, TDI solutions and user synchronization, performance tuning, security optimization, and planning Single Sign-On.
Similar to Open Source Logging and Metric Tools
A FUTURE-FOCUSED DIGITAL PLATFORM WITH DRUPAL 8 - Phase2
https://www.youtube.com/watch?v=NCx0fx-FWSc
Breaking News: Al Jazeera Builds Future-focused Digital Platform with Drupal 8
Sep 28, 2016 at DrupalCon Dublin
This just in: Al Jazeera Media Network, a leading provider in news and media broadcasting, is investing in its future by building a global, multi-lingual, unified CMS platform to streamline the creation and personalized delivery of news on the newly released Drupal 8 platform. This story is still unfolding!
For a global media network like Al Jazeera, Drupal 8 provides the perfect base for internationalization, future growth, and flexibility. Al Jazeera required a platform that could unify several different content streams and support a complicated editorial workflow, allowing network wide collaboration and search.
In this talk, leaders from the Al Jazeera digital project will go “behind-the-scenes” of the network’s next generation publishing platform. Hear from the Al Jazeera Product Managers and Platform Experts about how the content needs driving the media business can map to the underpinnings of a unified publishing platform. We will explore the technical advantages of Drupal 8, as well as the digital strategy that informed the endeavor. You’ll learn:
● Why Al Jazeera Media Network decided to invest in Drupal 8 as an early adopter
● How to use Deploy, Multi-version, and Replication modules to support an enterprise content repository
● The implications of starting with Lightning as a base distribution
● How Al Jazeera Media Network transformed its editorial workflow with Drupal 8 tools
For anyone working in the digital publishing industry or considering using Drupal 8 for a platform, this session is a must-see!
The Future of Digital Storytelling - Phase2 Talk - Phase2
Watch the full talk here: https://www.phase2technology.com/blog/the-future-of-digital-storytelling/
Mike Mangi, Director of Digital Strategy at Phase2, talks about the importance of evoking emotion in storytelling, and the evolution of our use of technology in our quest for ever-more immersive storytelling tools.
He discusses examples of how a story can be told in and across myriad devices from mobile, to wearables, to Augmented and Virtual Reality headsets, to Artificial Intelligence (AI).
He talks about the need for content and experience management systems capable of publishing multi-device, context-optimized content, and the potential to provide solutions with platforms like headless Drupal.
Drupal 8 for Enterprise: D8 in a Changing Digital Landscape - Phase2
Check out our white paper on D8 for enterprise: http://phase.to/1i1G7Gg
Today's digital marketplace requires organizations to engage their audiences on the multitude of channels and devices where they consume content. Drupal 8 can be an effective tool for creating a streamlined, multi-channel experience for users, in addition to serving as an adaptive content engine for website platform builders. In this slideshow, we examine the value of Drupal 8 as a flexible content management system (CMS) and how businesses can use it for maximum benefit.
The Yes, No, and Maybe of "Can We Build That With Drupal?" - Phase2
Over the last five years, Drupal has made a huge splash in the Government sector and has quickly become the open source CMS platform of choice. If you’re not already using Drupal, it’s likely that it’s come up as an option. It’s a powerful and flexible framework, and because of this the answer to the question ‘Can we build this with Drupal?’ is usually ‘Yes’. That said…your ‘yes’ should sometimes be ‘It depends’.
Understanding the reasons why government has taken interest in Drupal is key to understanding how and where it is best used. Drupal has core strengths that line up with key needs, but there are things it doesn’t do well. How do you make sure that you’re not asking Drupal to do too much? Conversely, even if Drupal is the best choice, how do you make sure your architecture is sound, your project plan is tight, and your business strategy is appropriate?
We’ll look at some case studies from various levels of government from federal to local, examine the challenges faced, and review lessons learned. If your project needs to stretch Drupal to its breaking point, how do you mitigate the technical, project management, and business impacts? How do you weigh the pros and cons of using Drupal when you are planning a project, and what are the key warning signs in an RFP that warn against it? And even when the needs of the client project line up cleanly with Drupal’s core strengths, how do you identify the risk areas when it seems like a match made in heaven?
Drupal is a powerful tool and can transform the work you do, but being educated as to its strengths and weaknesses protects you and your project, whether you are a contractor or contract officer, internal technology team or external developer.
David Spira presents on the importance of user testing and Empathy to deliver an effective product, specifically a contact management app for disaster relief that was later used during the Nepal earthquake in 2015.
Red Hat needed a new pattern library that would be flexible enough to integrate into our current Drupal 7 site, yet powerful enough to build future D7, D8 and other Red Hat branded sites. This pattern library would create a consistent, brand approved, look across all of our web properties, and become a common UI development platform for Designers, UX, Devs and Project managers.
In this case study we’ll explain our architectural approach to deliver dozens of tightly packaged components to Redhat.com and other web properties through a variety of distribution methods.
At Phase2, we do things a little differently when it comes to design. While many teams are stuck in the “design first, develop second, theme last” way of doing things, we link our multidisciplinary teams together by a common vehicle: design systems. Each piece of the system, including our prototyping tools, live within the platform, allowing us to integrate processes like creative design, prototyping, front-end methodology, and implementation. We call this “The New Design Workflow.”
This session will feature a panel of Phase2’s most experienced designers and front-end devs for an inside look at our best practices, tips and tricks. Plus, hear us weigh in how Drupal 8 will interface with your favorite front-end tools like PatternLab.
Drupal 8, Don’t Be Late (Enterprise Orgs, We’re Looking at You) - Phase2
After building one of the first enterprise Drupal 8 platforms, we speak from experience when we say: if you are an enterprise organization, you should be seriously considering the move to Drupal 8. For many in the Drupal world, Drupal 8 is still viewed with apprehension. With this panel, we’re here to unveil the D8 mystery.
In the changing CMS landscape, enterprises have a lot to gain from the more decoupled, API-focused content repository that Drupal 8 is evolving toward. Drupal’s paradigm shift will vastly improve the way organizations ingest, store, publish, and distribute content through multiple channels. But is the investment worth it? For the enterprise, our answer is an enthusiastic yes.
In this session, discover:
How Drupal 8’s structure fundamentally changes the way organizations approach platform building
The impact of Drupal 8’s configuration management improvements
The benefits of integrated front-end tools and external libraries
The challenges enterprise organizations will face adopting Drupal 8 (and how to overcome them)
How other enterprise organizations are already harnessing the power of Drupal 8
How to get started!
Memorial Sloan Kettering: Adventures in Drupal 8 - Phase2
Memorial Sloan Kettering is preparing to launch two websites in Drupal 8. As one of the first organizations to migrate its Drupal 6 content management system onto an enterprise Drupal 8 platform, Memorial Sloan Kettering has learned first hand the major challenges and advantages of building in Drupal 8.
In this session, project members from MSK, Phase2, and Digitas will explore the decision to take the leap to Drupal 8 and the reality of building in D8 while it is still a beta. Get details on the brute force migration process, front-end integrations and wiring up with twig in practice, and community contributions to accelerate Drupal 8 in the process of a flagship redesign for one of the leaders in the healthcare space.
We’ll elaborate on the challenges we faced and strategies we used to build on Drupal 8 and how you can learn from them!
Finally, we’ll answer some of your most burning questions:
How did you accomplish moving an existing Drupal 6 site with 25,000 plus pages of content to Drupal 8 while redesigning at the same time?
Should other organizations consider building in Drupal 8?
What tools and best practices were used by developers/sys admins?
What contrib modules are being used?
How difficult was it for the team to learn Drupal 8?
What is being used for layout and webforms? What external libraries and APIs are being used?
Transcript: Selling digital books in 2024: Insights from industry leaders - T... - BookNet Canada
The publishing industry has been selling digital audiobooks and ebooks for over a decade and has found its groove. What’s changed? What has stayed the same? Where do we go from here? Join a group of leading sales peers from across the industry for a conversation about the lessons learned since the popularization of digital books, best practices, digital book supply chain management, and more.
Link to video recording: https://bnctechforum.ca/sessions/selling-digital-books-in-2024-insights-from-industry-leaders/
Presented by BookNet Canada on May 28, 2024, with support from the Department of Canadian Heritage.
GraphRAG is All You need? LLM & Knowledge Graph - Guy Korland
Guy Korland, CEO and Co-founder of FalkorDB, will review two articles on the integration of language models with knowledge graphs.
1. Unifying Large Language Models and Knowledge Graphs: A Roadmap.
https://arxiv.org/abs/2306.08302
2. Microsoft Research's GraphRAG paper and a review paper on various uses of knowledge graphs:
https://www.microsoft.com/en-us/research/blog/graphrag-unlocking-llm-discovery-on-narrative-private-data/
Elevating Tactical DDD Patterns Through Object Calisthenics - Dorra BARTAGUIZ
After immersing yourself in the blue book and its red counterpart, attending DDD-focused conferences, and applying tactical patterns, you're left with a crucial question: How do I ensure my design is effective? Tactical patterns within Domain-Driven Design (DDD) serve as guiding principles for creating clear and manageable domain models. However, achieving success with these patterns requires additional guidance. Interestingly, we've observed that a set of constraints initially designed for training purposes remarkably aligns with effective pattern implementation, offering a more ‘mechanical’ approach. Let's explore together how Object Calisthenics can elevate the design of your tactical DDD patterns, offering concrete help for those venturing into DDD for the first time!
DevOps and Testing slides at DASA Connect - Kari Kakkonen
Slides by me and Rik Marselis from the DASA Connect conference on 30.5.2024. We discuss what testing is, then what agile testing is, and finally what testing in DevOps means. We also held a lovely workshop in which participants explored different ways to think about quality and testing across the DevOps infinity loop.
Epistemic Interaction - tuning interfaces to provide information for AI support - Alan Dix
Paper presented at SYNERGY workshop at AVI 2024, Genoa, Italy. 3rd June 2024
https://alandix.com/academic/papers/synergy2024-epistemic/
As machine learning integrates deeper into human-computer interactions, the concept of epistemic interaction emerges, aiming to refine these interactions to enhance system adaptability. This approach encourages minor, intentional adjustments in user behaviour to enrich the data available for system learning. This paper introduces epistemic interaction within the context of human-system communication, illustrating how deliberate interaction design can improve system understanding and adaptation. Through concrete examples, we demonstrate the potential of epistemic interaction to significantly advance human-computer interaction by leveraging intuitive human communication strategies to inform system design and functionality, offering a novel pathway for enriching user-system engagements.
LF Energy Webinar: Electrical Grid Modelling and Simulation Through PowSyBl -... - DanBrown980551
Do you want to learn how to model and simulate an electrical network from scratch in under an hour?
Then welcome to this PowSyBl workshop, hosted by Rte, the French Transmission System Operator (TSO)!
During the webinar, you will discover the PowSyBl ecosystem as well as handle and study an electrical network through an interactive Python notebook.
PowSyBl is an open source project hosted by LF Energy, which offers a comprehensive set of features for electrical grid modelling and simulation. Among other advanced features, PowSyBl provides:
- A fully editable and extendable library for grid component modelling;
- Visualization tools to display your network;
- Grid simulation tools, such as power flows, security analyses (with or without remedial actions) and sensitivity analyses;
The framework is mostly written in Java, with a Python binding so that Python developers can access PowSyBl functionalities as well.
What you will learn during the webinar:
- For beginners: discover PowSyBl's functionalities through a quick general presentation and the notebook, without needing any expert coding skills;
- For advanced developers: master the skills to efficiently apply PowSyBl functionalities to your real-world scenarios.
The Metaverse and AI: how can decision-makers harness the Metaverse for their... - Jen Stirrup
The Metaverse is popularized in science fiction, and now it is becoming closer to being a part of our daily lives through the use of social media and shopping companies. How can businesses survive in a world where Artificial Intelligence is becoming the present as well as the future of technology, and how does the Metaverse fit into business strategy when futurist ideas are developing into reality at accelerated rates? How do we do this when our data isn't up to scratch? How can we move towards success with our data so we are set up for the Metaverse when it arrives?
How can you help your company evolve, adapt, and succeed using Artificial Intelligence and the Metaverse to stay ahead of the competition? What are the potential issues, complications, and benefits that these technologies could bring to us and our organizations? In this session, Jen Stirrup will explain how to start thinking about these technologies as an organisation.
The Art of the Pitch: WordPress Relationships and Sales - Laura Byrne
Clients don’t know what they don’t know. What web solutions are right for them? How does WordPress come into the picture? How do you make sure you understand scope and timeline? What do you do if something changes?
All these questions and more will be explored as we talk about matching clients’ needs with what your agency offers without pulling teeth or pulling your hair out. Practical tips, and strategies for successful relationship building that leads to closing the deal.
In his public lecture, Christian Timmerer provides insights into the fascinating history of video streaming, starting from its humble beginnings before YouTube to the groundbreaking technologies that now dominate platforms like Netflix and ORF ON. Timmerer also presents provocative contributions of his own that have significantly influenced the industry. He concludes by looking at future challenges and invites the audience to join in a discussion.
State of ICS and IoT Cyber Threat Landscape Report 2024 previewPrayukth K V
The IoT and OT threat landscape report has been prepared by the Threat Research Team at Sectrio using data from Sectrio, cyber threat intelligence farming facilities spread across over 85 cities around the world. In addition, Sectrio also runs AI-based advanced threat and payload engagement facilities that serve as sinks to attract and engage sophisticated threat actors, and newer malware including new variants and latent threats that are at an earlier stage of development.
The latest edition of the OT/ICS and IoT security Threat Landscape Report 2024 also covers:
State of global ICS asset and network exposure
Sectoral targets and attacks as well as the cost of ransom
Global APT activity, AI usage, actor and tactic profiles, and implications
Rise in volumes of AI-powered cyberattacks
Major cyber events in 2024
Malware and malicious payload trends
Cyberattack types and targets
Vulnerability exploit attempts on CVEs
Attacks on counties – USA
Expansion of bot farms – how, where, and why
In-depth analysis of the cyber threat landscape across North America, South America, Europe, APAC, and the Middle East
Why are attacks on smart factories rising?
Cyber risk predictions
Axis of attacks – Europe
Systemic attacks in the Middle East
Download the full report from here:
https://sectrio.com/resources/ot-threat-landscape-reports/sectrio-releases-ot-ics-and-iot-security-threat-landscape-report-2024/
Essentials of Automations: The Art of Triggers and Actions in FMESafe Software
In this second installment of our Essentials of Automations webinar series, we’ll explore the landscape of triggers and actions, guiding you through the nuances of authoring and adapting workspaces for seamless automations. Gain an understanding of the full spectrum of triggers and actions available in FME, empowering you to enhance your workspaces for efficient automation.
We’ll kick things off by showcasing the most commonly used event-based triggers, introducing you to various automation workflows like manual triggers, schedules, directory watchers, and more. Plus, see how these elements play out in real scenarios.
Whether you’re tweaking your current setup or building from the ground up, this session will arm you with the tools and insights needed to transform your FME usage into a powerhouse of productivity. Join us to discover effective strategies that simplify complex processes, enhancing your productivity and transforming your data management practices with FME. Let’s turn complexity into clarity and make your workspaces work wonders!
Why You Should Replace Windows 11 with Nitrux Linux 3.5.0 for enhanced perfor...SOFTTECHHUB
The choice of an operating system plays a pivotal role in shaping our computing experience. For decades, Microsoft's Windows has dominated the market, offering a familiar and widely adopted platform for personal and professional use. However, as technological advancements continue to push the boundaries of innovation, alternative operating systems have emerged, challenging the status quo and offering users a fresh perspective on computing.
One such alternative that has garnered significant attention and acclaim is Nitrux Linux 3.5.0, a sleek, powerful, and user-friendly Linux distribution that promises to redefine the way we interact with our devices. With its focus on performance, security, and customization, Nitrux Linux presents a compelling case for those seeking to break free from the constraints of proprietary software and embrace the freedom and flexibility of open-source computing.
4. About This Talk
• How to visualize your data with OSS tools
• Information on customizing logs from common daemons
• Strong focus on log aggregation, parsing, and search
• Information about drupal.org's logging setup
• Some information on performance metrics tools
• Two-machine demo of Drupal and logging tools
9. “Logs are time + data.”
— Jordan Sissel, creator of Logstash
10. What Are Logs
• Ultimately, logs are about keeping track of events
• Logs vary widely: some use custom formats, while others are pure XML or JSON
• Some are one line per event; some span many lines, like Java stack traces or MySQL slow query log entries
13. Issues With Logs
• Legal retention requirements
• Require shell access to view
• Often not easily parseable by humans
• Cyborg-friendly tooling
14. Solving Problems With Log Data
• Find slow pages or queries
• Sort through Drupal logs to trace a user's actions on a site
• Get a rough idea of traffic to a particular area
• Track newly appearing PHP error types
17. Shipping Concerns
• Queueing
• Behavior when shipping to remote servers
• Max spool disk usage
• Retries?
• Security
• Encrypted channel
• Encrypted at rest
• Access to sensitive data
18. Configuring rsyslogd Clients
• Ship logs to another rsyslog server over TCP
• *.* @@utility:514
• This defaults to shipping anything that it would normally log to /var/log/syslog or /var/log/messages
19. Configuring rsyslogd Servers
• Prevent remote logs from showing up in /var/log/messages
• if $source != 'utility' then ~
• Store logs coming in based on hostname and date
• $template DailyPerHostLogs,"/var/log/rsyslog/%HOSTNAME%/%HOSTNAME%.%$YEAR%-%$MONTH%-%$DAY%.log"
*.* -?DailyPerHostLogs;RSYSLOG_TraditionalFileFormat
20. Configuring rsyslogd Shipping
• Read lines from a particular file and ship over syslog
• $ModLoad imfile
$InputFileName /var/log/httpd/access_log
$InputFileTag apache_access:
$InputFileStateFile state-apache_access
$InputFileSeverity info
$InputFileFacility local0
$InputFilePollInterval 10
$InputRunFileMonitor
21. Configuring rsyslogd Spooling
• Configure spooling and queueing behavior
• $WorkDirectory /var/lib/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1 # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g # 1gb space limit
$ActionQueueSaveOnShutdown on # save messages to disk on shutdown
$ActionQueueType LinkedList # run asynchronously
$ActionResumeRetryCount -1 # infinite retries if host is down
22. Syslog-shipped Log Files
Mar 11 15:38:14 drupal drupal: http://192.168.32.3|1394566694|system|192.168.32.1|http://192.168.32.3/admin/modules/list/confirm|http://192.168.32.3/admin/modules|1||php module installed.

Jul 30 15:04:14 drupal varnish_access: 156.40.118.178 - - [30/Jul/2014:15:04:09 +0000] "GET http://23.251.149.143/misc/tableheader.js?n9j5uu HTTP/1.1" 200 1848 "http://23.251.149.143/admin/modules" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36" 0.000757 miss
25. Apache
127.0.0.1 - - [08/Mar/2014:00:36:44 -0500] "GET /dashboard HTTP/1.0" 302 20 "https://68.232.187.42/dashboard/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/33.0.1750.146 Safari/537.36"
26. nginx
192.168.32.1 - - [11/Apr/2014:10:44:36 -0400] "GET /kibana/font/fontawesome-webfont.woff?v=3.2.1 HTTP/1.1" 200 43572 "http://192.168.32.6/kibana/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36"
27. Varnish
192.168.32.1 - - [11/Apr/2014:10:47:52 -0400] "GET http://192.168.32.3/themes/seven/images/list-item.png HTTP/1.1" 200 195 "http://192.168.32.3/admin/config" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.116 Safari/537.36"
28. Additional Features
• Apache, nginx, and Varnish all support additional output
• Varnish can log cache hit/miss
• With Logstash we can look at how to normalize these using Grok:
• A regex engine with built-in named patterns
• Online tools to parse sample logs
29. Apache
• Configurable log formats are available – http://httpd.apache.org/docs/2.2/mod/mod_log_config.html
• A LogFormat directive in any Apache configuration file redefines the named format; a later definition with the same nickname overrides earlier ones
• The default NCSA combined log format is as follows
• LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-agent}i\"" combined
30. Apache
• Additional useful information:
• %D Time taken to serve the request, in microseconds
• %{Host}i Value of the Host HTTP header
• %p Port
• New LogFormat line:
• LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" %D %{Host}i %p" combined
31. nginx
• Log formats are defined with the log_format directive – http://nginx.org/en/docs/http/ngx_http_log_module.html#log_format
• You may not override the predefined combined format; define a new named format instead
• log_format combined '$remote_addr - $remote_user [$time_local] '
                     '"$request" $status $body_bytes_sent '
                     '"$http_referer" "$http_user_agent"';
32. Apache
127.0.0.1 - - [29/Jul/2014:22:03:07 +0000] "GET /admin/config/development/performance HTTP/1.0" 200 3500 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36"

127.0.0.1 - - [29/Jul/2014:22:03:07 +0000] "GET /admin/config/development/performance HTTP/1.0" 200 3500 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36" 45304 23.251.149.143 80
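With the extra %D field in place, finding slow pages becomes a small scripting exercise. A sketch in Python, assuming the extended LogFormat above; the regex and the thresholds are illustrative, not from the talk:

```python
import re

# Matches the extended combined format: the three trailing fields are
# %D (microseconds), %{Host}i, and %p from the LogFormat above.
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<bytes>\S+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)" '
    r'(?P<usec>\d+) (?P<host>\S+) (?P<port>\d+)'
)

def slow_requests(lines, threshold_usec=100000):
    """Yield (request, microseconds) for requests slower than the threshold."""
    for line in lines:
        m = LOG_RE.match(line)
        if m and int(m.group("usec")) > threshold_usec:
            yield m.group("request"), int(m.group("usec"))

sample = ('127.0.0.1 - - [29/Jul/2014:22:03:07 +0000] '
          '"GET /admin/config/development/performance HTTP/1.0" 200 3500 "-" '
          '"Mozilla/5.0" 45304 23.251.149.143 80')
print(list(slow_requests([sample], threshold_usec=10000)))
```

The same pattern works for nginx and Varnish lines once their formats are extended as shown later; only the trailing-field groups change.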
33. nginx
• Additional useful information:
• $request_time Time taken to serve the request, in seconds with millisecond resolution (e.g. 0.073)
• $http_host Value of the Host HTTP header
• $server_port Port
34. nginx
• New log_format line and example config for a vhost:
• log_format logstash '$remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'$request_time $http_host $server_port';
• access_log /var/log/nginx/access.log logstash;
35. nginx
70.42.157.6 - - [22/Jul/2014:22:03:30 +0000] "POST /logstash-2014.07.22/_search HTTP/1.0" 200 281190 "http://146.148.34.62/kibana/index.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36"

70.42.157.6 - - [22/Jul/2014:22:03:30 +0000] "POST /logstash-2014.07.22/_search HTTP/1.0" 200 281190 "http://146.148.34.62/kibana/index.html" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36" 0.523 146.148.34.62 80
36. Varnish
• The varnishncsa daemon outputs NCSA-format logs
• You may pass a different log format to the varnishncsa daemon; its format strings largely match Apache's
37. Varnish
• Additional useful information:
• %D Time taken to serve request in seconds with
microsecond precision (e.g. 0.000884)
• %{Varnish:hitmiss}x The text "hit" or "miss"
• varnishncsa daemon argument:
• -F '%h %l %u %t "%r" %s %b "%{Referer}i" "%{User-agent}i"
%D %{Varnish:hitmiss}x'
38. Varnish
70.42.157.6 - - [29/Jul/2014:22:03:07 +0000] "GET http://23.251.149.143/admin/config/development/performance HTTP/1.0" 200 3500 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36"

70.42.157.6 - - [29/Jul/2014:22:03:07 +0000] "GET http://23.251.149.143/admin/config/development/performance HTTP/1.0" 200 3500 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/36.0.1985.125 Safari/537.36" 0.045969 miss
41. Logstash
• http://logstash.net/
• Great tool to work with logs of ALL sorts
• Has input, filter, and output pipelines
• Inputs can be parsed with different codecs (JSON, netflow)
• http://logstash.net/docs/1.4.2/ describes many options
43. Kibana
• Great viewer for Logstash logs
• Needs direct HTTP access to ElasticSearch
• You may need to protect this with nginx or the like
• Uses ElasticSearch features to show statistical information
• Can show any ElasticSearch data, not just Logstash
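One way to gate that access is a reverse proxy with basic auth in front of ElasticSearch. A hypothetical nginx stanza (the location, backend port, and htpasswd path are illustrative, not from the talk):

```
location /es/ {
    auth_basic           "ElasticSearch";
    auth_basic_user_file /etc/nginx/htpasswd;   # create with htpasswd
    proxy_pass           http://127.0.0.1:9200/;
}
```

Kibana would then be configured to reach ElasticSearch through the proxied path rather than port 9200 directly.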
44. Grok
• Tool for pulling semantic data from logs; logstash filter
• A regex engine with built-in named patterns
• Online tools to parse sample logs
• http://grokdebug.herokuapp.com/
• http://grokconstructor.appspot.com/
45. Example: Grokking nginx Logs
192.168.32.1 - - [11/Apr/2014:10:44:36 -0400] "GET /kibana/font/fontawesome-webfont.woff?v=3.2.1 HTTP/1.1" 200 43572 "http://192.168.32.6/kibana/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko)
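A Logstash filter for the combined-format line above could be as small as this sketch; COMBINEDAPACHELOG is one of Grok's stock patterns and matches nginx's combined format as well:

```
filter {
  grok {
    # Pull named fields (clientip, verb, request, response, bytes, ...)
    # out of the raw line using the stock combined-log pattern
    match => [ "message", "%{COMBINEDAPACHELOG}" ]
  }
}
```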
47. Logstash Config
• By default Logstash looks in /etc/logstash/conf.d/*.conf
• You may include multiple files
• Each must have at least an input, filter, or output stanza
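Tying the pieces together, a minimal conf.d file might look like the sketch below; the file path assumes the per-host rsyslog template shown earlier, and SYSLOGLINE is a stock Grok pattern:

```
input {
  file {
    path => "/var/log/rsyslog/*/*.log"
    type => "syslog"
  }
}
filter {
  grok {
    match => [ "message", "%{SYSLOGLINE}" ]
  }
}
output {
  elasticsearch { host => "localhost" }
}
```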
53. Logs vs Performance Counters
• Generally, logs capture data at a particular time
• You may also want to keep information about how your
servers are running and performing
• A separate set of tools is often used to help monitor and manage system performance
• This data can then be trended to chart resource usage and
capacity
54. Proprietary Tools
• Third-party SaaS systems are also plentiful in this area
• DataDog
• Librato Metrics
• Circonus
• New Relic / AppNeta
55. Time-Series Data
• Generally, performance counters are taken with regular
sampling at an interval, known as time-series data
• Several OSS tools exist to store and query time-series data:
• RRDTool
• Whisper
• InfluxDB
56. First Wave: RRD-based Tools
• Many tools can collect metrics and store and graph them as RRD files
• Munin
• Cacti
• Ganglia
• collectd
57. Second Wave: Graphite
• Graphite is a more general tool; it does not collect metrics
• It uses an advanced storage engine called Whisper
• It can buffer data and cache it under heavy load
• It does not require data to be inserted all the time
• It's fully designed to take time-series data and graph it
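Feeding Graphite is deliberately simple: its carbon listener accepts a plaintext protocol of `metric.path value timestamp` lines, conventionally on TCP port 2003. A Python sketch (the host and metric names are illustrative):

```python
import socket
import time

def graphite_line(path, value, timestamp=None):
    """Format one metric in carbon's plaintext protocol."""
    if timestamp is None:
        timestamp = int(time.time())
    return "%s %s %d\n" % (path, value, timestamp)

def send_metric(path, value, host="graphite.example.com", port=2003):
    """Open a TCP connection to carbon and send a single metric."""
    line = graphite_line(path, value)
    sock = socket.create_connection((host, port), timeout=5)
    try:
        sock.sendall(line.encode("ascii"))
    finally:
        sock.close()

print(graphite_line("web01.nginx.request_time", 0.523, timestamp=1406066590))
```

Because the protocol is just newline-delimited text, anything that can open a socket (a shell one-liner, a log parser, a cron job) can report metrics.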
58. Grafana
• Grafana is to Graphite as Kibana is to ElasticSearch
• HTML / JavaScript app
• Needs direct HTTP access to Graphite
• You may need to protect this with nginx or the like
59. Collectd
• http://collectd.org/
• Collectd is a tool that makes it easy to capture many
system-level statistics
• It can write to RRD databases or to Graphite
• Collectd is written in C and is efficient; it can remain
resident in memory and report on a regular interval
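Wiring collectd into Graphite takes only a few stanzas via its write_graphite plugin; a sketch (the Graphite hostname and prefix are illustrative):

```
LoadPlugin cpu
LoadPlugin load
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "graphite">
    Host "graphite.example.com"
    Port "2003"
    Protocol "tcp"
    Prefix "collectd."
  </Node>
</Plugin>
```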
64. Stats
• Consolidating logs from ≈ 10 web servers
• Incoming syslog (Drupal), Apache, nginx, and Varnish logs
• Non-syslog logs are updated every hour with rsync
• > 2 billion log entries processed per month
• Indexing load is spiky rather than constant, with a load average around 0.5