The document describes the discovery process used by Atrium Discovery to scan devices on a network. It involves the following key steps:
1. Scanning IPs to determine accessibility and detect open ports. Credentials are used to try accessing devices.
2. Classifying devices and collecting additional information if a host is detected. Cached credentials are used for faster future access.
3. Optimizing scans so the same hosts are not rescanned multiple times; duplicate scans are skipped.
4. Restricting discovery for sensitive devices; full discovery proceeds only when the required information can be collected from the host.
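The four steps above can be sketched as a small pipeline. This is a minimal illustration only: every function, state name and stub here is invented for the sketch and is not a product API.

```python
def scan_ports(endpoint):
    # Stub for the sketch: pretend SSH is open everywhere except one address.
    return [] if endpoint == "10.0.0.9" else [22]

def try_credentials(endpoint, credentials):
    # Stub: the first credential always works when any exist.
    return credentials[0] if credentials else None

def classify(endpoint, cred):
    # Stub classifier.
    return {"os": "unix", "endpoint": endpoint}

def discover(endpoint, credentials, cache, in_progress):
    """Return (state, info) for one endpoint, mirroring steps 1-4 above."""
    if endpoint in in_progress:
        return ("Skipped", None)              # step 3: duplicate scan skipped
    in_progress.add(endpoint)
    if not scan_ports(endpoint):              # step 1: reachability and ports
        return ("NoResponse", None)
    cred = cache.get(endpoint) or try_credentials(endpoint, credentials)
    if cred is None:
        return ("NoAccess", None)
    cache[endpoint] = cred                    # step 2: cache working credential
    return ("GoodAccess", classify(endpoint, cred))
```

The cache means a later scan of the same endpoint skips the credential trial, which is the "faster future access" point in step 2.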
The next four slides of animation show the basic approach to discovery, alongside the nodes that are built in the model. The emphasis is that everything we do is recorded.
On the first scan the only likely cause is “Excluded”. Very rarely you can get “OptAlreadyProcessing” if the same endpoint is injected while one is still in the queue; see the later slide.
Pinging before scanning lets us optimise detection of real devices that will respond to discovery, as opposed to dark space. Advanced Use “Ping hosts before scanning” can be disabled globally for environments that suppress ICMP, but at the expense of slower performance in dark space. Consider using TCP ACK or TCP SYN ping in place of the standard ICMP ping if the environment allows (“Use TCP ACK ping before scanning”, “Use TCP SYN ping before scanning”), or use “Exclude ranges from ping” if only a small area of the environment is an issue (perhaps a DMZ).
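As a rough illustration of the idea behind TCP-based pings, liveness can be inferred from how a connection attempt fails. This is not the appliance's implementation: a true TCP SYN ping sends only the SYN segment (raw sockets, usually root), so this portable sketch uses a full connect attempt instead.

```python
import socket

def tcp_ping(host, port=22, timeout=1.0):
    """Approximate a TCP ping: both an accepted and a refused
    connection prove something is answering at that address."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True        # connection accepted: host is up
    except ConnectionRefusedError:
        return True            # RST received: host is up, port closed
    except OSError:
        return False           # timed out or unreachable: treat as dark space
```

The useful property for discovery is the last branch: dark space costs a full timeout, which is exactly why disabling ping slows scanning of empty ranges.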
If the endpoint responds to ping, discovery goes on to look for open ports. If the estate is hardened, discovery can have difficulty detecting open ports; in these situations consider modifying the discovery configuration setting “Valid Port States”. Contact support for advice before making modifications.
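The idea behind port states can be sketched as follows. The state names and the notion of widening the valid set on a hardened estate are illustrative only, not the product's actual configuration semantics.

```python
import socket

VALID_STATES = {"open"}   # a hardened estate might need to widen this set

def probe_port(host, port, timeout=0.5):
    """Classify one probe result into a port state."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except ConnectionRefusedError:
        return "closed"       # the host answered with a RST
    except OSError:
        return "filtered"     # dropped silently, e.g. by a firewall

def has_valid_port(host, ports):
    # Discovery proceeds only if at least one probed port is in a valid state.
    return any(probe_port(host, p) in VALID_STATES for p in ports)
```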
It’s important that the appliance can see these ports open (or regarded as valid, if you read the notes on the ports slide), otherwise discovery will not proceed. This list of ports has been aggressively honed from experience to focus only on regular, stable service ports that carry minimum risk while still allowing effective discovery. Attempting to use fewer ports will reduce the quality and stability of discovery.
Depending on Dark Space settings, we may or may not retain DiscoveryAccess nodes marked as NoResponse.
UNIX methods will only be tried if the appliance can detect an open UNIX port (22 SSH, 23 telnet, 513 rlogin) on the endpoint *and* there is a credential for that endpoint and port in the vault.
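That precondition (open UNIX port *and* a matching credential) can be expressed as a small lookup. The vault record layout here is invented for the sketch.

```python
UNIX_PORTS = {22: "ssh", 23: "telnet", 513: "rlogin"}

def unix_methods_to_try(open_ports, vault, endpoint):
    """List (protocol, credential id) pairs worth attempting."""
    methods = []
    for port in open_ports:
        protocol = UNIX_PORTS.get(port)
        if protocol is None:
            continue               # not a UNIX access port
        for cred in vault:
            # Both conditions must hold: port matches AND endpoint in range.
            if cred["port"] == port and endpoint in cred["ip_range"]:
                methods.append((protocol, cred["id"]))
    return methods
```

An open port 23 with no telnet credential, or a credential scoped to a different range, yields nothing, which is why a missing vault entry looks identical to a closed port in the results.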
If the slaves are restricted, then only the ones valid for that endpoint are used. It is a common source of confusion, but vital to understand, that the slave is only a proxy and not a distributed discovery agent. If the appliance cannot detect that port 135 (Windows RPC) is open on the endpoint, then discovery will not attempt to use any Windows slave. This is often an issue for clients deploying Windows slaves in protected areas of the network on the assumption that this will allow scanning; it will not, and in this situation using multiple appliances and consolidation is the correct deployment. Advanced Use If there is no option but to have the appliance in a situation where it cannot detect port 135 on the endpoint, then “Check port 135 before using Windows access methods” can be set to “no”. In this situation the appliance will direct all discovery requests that do not respond to a UNIX method via all registered slaves in sequence; this will cause discovery to take significantly longer per endpoint and noticeably degrade performance.
The SNMP discovery methods are more limited and should be regarded as fallback methods, as they provide only basic information. No access to files or running of commands will be possible. The SNMP port is 161 (UDP). Operating systems currently supported in this fashion are IBM i (formerly OS/400), NetWare, OpenVMS and z/OS (formerly OS/390). NetWare is only available via SNMP v1.
If the access methods have failed so far, then discovery will attempt the following methods to try to identify the device. If the device has SNMP port 161 open, discovery will try to recover basic system information with a public community string. IP stack fingerprinting exploits the fact that there is a close relation between an IP stack and an OS: as each OS normally has a dedicated IP stack, it is often possible to determine the OS quite accurately. For IP stack fingerprinting to work well it needs to investigate closed as well as open ports; we use port 4 for the closed port, and for open ports we use only the ports used for our access methods. If the device has telnet port 23 open, the banner is frequently presented before the login prompt and will provide information about the device and its OS. Similarly, a simple HTTP GET is used if port 80 is open; the results will often contain information about the device and its OS. All these methods are required for credential-less scanning. Disabling or modifying them is not recommended, as without them identifying Hosts that need credentials deployed is very inefficient. Advanced Use IP fingerprinting can be turned off by setting the “Use IP Fingerprinting to Identify OS” option to “no”, or the list of ports used for fingerprinting can be altered; neither is recommended. Telnet banner sampling can be turned off using the “Use Telnet Banner to Identify OS” option. SNMP SysDescr can be turned off using the “Use SNMP SysDescr to Identify OS” option. HTTP HEAD can be turned off using the “Use HTTP HEAD Request to Identify OS” option. Contact support before attempting to change these settings.
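A credential-less banner check of this kind can be sketched as below. The banner patterns are invented examples, not the product's real rule set, and the helper names are hypothetical.

```python
import re
import socket

# Invented example patterns mapping banner text to an OS guess.
BANNER_RULES = [
    (re.compile(r"Ubuntu|Debian", re.I), "Linux"),
    (re.compile(r"SunOS", re.I), "Solaris"),
    (re.compile(r"cisco", re.I), "Cisco IOS"),
]

def os_from_banner(banner):
    for pattern, os_name in BANNER_RULES:
        if pattern.search(banner):
            return os_name
    return None                    # unidentified: credentials needed

def grab_banner(host, port=23, timeout=1.0):
    """Read whatever the service volunteers before any login prompt."""
    try:
        with socket.create_connection((host, port), timeout=timeout) as s:
            s.settimeout(timeout)
            return s.recv(256).decode("ascii", "replace")
    except OSError:
        return ""                  # closed/filtered port: no banner
```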
At this stage we have already got a successful getDeviceInfo, as we have an active session. In later modules we will refer back to the fact that these three methods need to succeed in order to create/update a Host node.
Without success in completing DeviceInfo, HostInfo and InterfaceList we do not have enough information to feed the Host identification algorithm. The system *can* cope with partial results in those methods, although the identity of the Host will be less stable the fewer properties it has to work on. Common reasons for not completing:
- Credential permissions
- Poor edits to scripts, with uncaught stderr or other script termination issues
- Login timeout – check for a timeout ScriptFailure related to the method
- Script timeout – check for a timeout ScriptFailure related to the method; increase the credential timeout to 180 seconds
- Parse failure (or incomplete DeviceInfo) – check for a parsefailure ScriptFailure related to the method and for scrambled session output; turn on session logging and check for out-of-sequence characters, and consider increasing Session Line Delay
The Host identification algorithm uses a weighting technique to try to compute a key. The weighting compares the current properties with those from existing candidate Host nodes. If there is a difference and it is significant, a new Host.key is generated; otherwise it uses the closest match. This allows a certain amount of change (such as upgrading an OS or changing a NIC) without forcing a new identity. We cannot compare every existing Host, so we pre-select candidates. These include the Host that this endpoint was associated with last time, Hosts with interfaces on the same IP as the current endpoint, and Hosts that have the same serial number as the current properties.
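A toy version of that weighted comparison might look like this. The property weights and threshold are invented for illustration; the real algorithm's weights and candidate set are the product's own.

```python
# Invented weights: a matching serial counts far more than a matching OS.
WEIGHTS = {"serial": 5, "hostname": 3, "os": 1}
THRESHOLD = 4   # below this, the device is "different enough" for a new key

def best_candidate(current, candidates):
    """Return the closest existing Host, or None if a new key is needed."""
    def score(host):
        return sum(weight for prop, weight in WEIGHTS.items()
                   if prop in host and host.get(prop) == current.get(prop))
    best = max(candidates, key=score, default=None)
    return best if best is not None and score(best) >= THRESHOLD else None
```

Note how a changed hostname alone (score 5 + 1 = 6) still clears the threshold, which is the "certain amount of change without forcing a new identity" behaviour described above.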
end_state only relates to establishing a good quality session to the endpoint and relating it to an existing node.
On the first scan the only likely cause is “Excluded”. Very rarely you can get “OptAlreadyProcessing” if the same endpoint is injected while one is still in the queue; see the later slide. Later on we can get OptNotBestIP and OptRemote from the optimization systems, which are described next:
- OptNotBestIP – we know this endpoint was optimized last time, so we assume it will be this time and do not contact it
- OptRemote – only seen on a Consolidation Appliance; means that the endpoint was optimised on the Scanning Appliance, and full details of the state will be on the Scanning Appliance
Why do we still go to the OS/Device classifier, rather than further down? Because if we are using widely deployed credentials they may well work on another Host, and we still have to check that this is the same Host and not a different one, which the credentials happen to work on, that has moved to this IP since we last scanned.
Under some conditions the same IP can be requested while another scan of that IP is still in progress. To prevent collisions, if a duplicate is detected then one of the endpoints is skipped.
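The duplicate-suppression rule can be sketched in a few lines. "OptAlreadyProcessing" is the state name from these notes; the queue structure itself is invented for the sketch.

```python
def enqueue(ip, in_progress, queue, results):
    """Queue an IP for scanning unless a scan of it is already running."""
    if ip in in_progress:
        results[ip] = "OptAlreadyProcessing"   # collision: skip this endpoint
        return False
    in_progress.add(ip)
    queue.append(ip)
    return True
```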
In general the level of access to the OS over each interface is the same. There is no point scanning over the same endpoint several times in a range (or indeed across ranges), so we should only scan over one of the interfaces.
Essentially, the first endpoint that provides the GoodAccess end_state is the one we will attempt to use. Note that as we have recovered up-to-date DeviceInfo, HostInfo and InterfaceList, these properties of the Host node are updated. This is fine detail and probably only confuses the issue in an overview, but it is included in the notes for completeness. Sometimes we will talk about the “BestIP”; this is an internal name for the system that picks the highest-quality endpoint, and it is sometimes used to refer to the endpoint that is picked.
By default we will do this every 7 days. Advanced Use The setting is controlled by the value of “Scan optimization timeout”, and this is a Model Maintenance setting rather than a discovery one. We don’t advise changing this value without guidance.
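The 7-day window amounts to a simple age check, sketched here with an invented function name. OPT_TIMEOUT mirrors the default of “Scan optimization timeout”.

```python
from datetime import datetime, timedelta

OPT_TIMEOUT = timedelta(days=7)   # mirrors the 7-day default described above

def should_rescan(last_full_scan, now=None):
    """True once the last full scan is older than the optimization window."""
    now = now or datetime.now()
    return now - last_full_scan >= OPT_TIMEOUT
```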
It’s highly unlikely that you will get an early error state as that suggests a fundamental error in the core system and these are picked up in internal testing if they occur. More likely is an error from amended discovery scripts.
Pattern success or failure does not alter the summary states that track session establishment. These have their own tracking methods that will be described later. Note also that not all the standard discovery scripts may have completed successfully. Again further tracking methods will be described later to allow any issues to be understood.
This is a subtle change, but a Sweep Scan scan level never intends to get beyond a DeviceIdentified vs NoResponse state, as it is intended for surveys of the estate during roll-out and sizing of the project. Other scan levels are not included in the chart, as these two are the ones that should be used during normal use; other scan levels should be used under guidance.
You may wish to download the state charts that were used during this module. Please download the chart zip file that should be available where you accessed this module.