The document discusses database forensics and analysis techniques. It introduces current challenges, available tools, and new approaches using external tables to preserve metadata when collecting evidence. Typical patterns seen in database objects like SYS.USER$ are shown, such as multiple accounts with failed login attempts or closely spaced lock times indicating password guessing. Timeline creation is demonstrated as a way to combine data from different sources.
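The SYS.USER$ pattern mentioned above (several accounts locked at nearly the same moment, a classic password-guessing indicator) can be sketched as a simple time-window clustering check. The account names, timestamps, and thresholds below are hypothetical illustrations, not data from the document.

```python
from datetime import datetime, timedelta

def guessing_clusters(lock_times, window=timedelta(minutes=5), min_accounts=3):
    """Flag groups of accounts whose lock timestamps fall close together,
    the kind of pattern a password-guessing run leaves in SYS.USER$."""
    events = sorted(lock_times.items(), key=lambda kv: kv[1])
    clusters, current = [], []
    for user, t in events:
        if current and t - current[-1][1] > window:
            if len(current) >= min_accounts:
                clusters.append([u for u, _ in current])
            current = []
        current.append((user, t))
    if len(current) >= min_accounts:
        clusters.append([u for u, _ in current])
    return clusters

# Hypothetical LTIME-style values for locked accounts.
locks = {
    "SCOTT": datetime(2024, 1, 5, 2, 14, 0),
    "HR":    datetime(2024, 1, 5, 2, 14, 30),
    "OE":    datetime(2024, 1, 5, 2, 15, 10),
    "ADMIN": datetime(2024, 1, 9, 11, 0, 0),  # isolated lock, not flagged
}
suspicious = guessing_clusters(locks)
```

Three accounts locked within about a minute of each other are grouped; the isolated lock four days later is not.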
Slick is a modern database query and access library for Scala that allows working with stored data like Scala collections while controlling database access. It provides easy, concise, scalable, and safe database access. Slick supports many databases and can be set up by adding dependencies to a build file. Queries are processed by compiling them to SQL and lifting query types for translation. Queries support filtering, dropping rows, deletion, and creation.
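Slick's "lifted" queries build a description of the query that is later compiled to SQL, rather than executing collection operations directly. A toy Python sketch of that idea (this is an analogy only, not Slick's Scala API) might look like:

```python
class Query:
    """Toy 'lifted' query: operations are recorded, then compiled to SQL.
    Mirrors the idea behind Slick's filter/drop/take, not its real API."""
    def __init__(self, table):
        self.table, self.wheres = table, []
        self.limit_n = self.offset_n = None
    def filter(self, cond):
        self.wheres.append(cond); return self
    def drop(self, n):
        self.offset_n = n; return self
    def take(self, n):
        self.limit_n = n; return self
    def to_sql(self):
        sql = f"SELECT * FROM {self.table}"
        if self.wheres:
            sql += " WHERE " + " AND ".join(self.wheres)
        if self.limit_n is not None:
            sql += f" LIMIT {self.limit_n}"
        if self.offset_n is not None:
            sql += f" OFFSET {self.offset_n}"
        return sql

sql = Query("users").filter("age > 18").drop(10).take(5).to_sql()
```

The point is the separation: the chained calls build a value describing the query, and a single compile step turns it into SQL for the database to run.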
DEF CON 27 - Omer Gull - SELECT code execution FROM USING SQLite (Felipe Prado)
The document discusses gaining code execution using SQLite database queries. It provides background on SQLite, examines its attack surface when querying an untrusted database, and explores previous work exploiting memory corruptions. The author proposes a technique called "Query Oriented Programming" to leverage SQL queries to implement memory leakage and other exploitation primitives to achieve remote code execution without using traditional scripting languages.
This document discusses different ways to store data in Android applications, including shared preferences, files, SQLite databases, and content providers. It provides examples of how to use shared preferences to store private data for a single activity or across all app components. It also discusses how to use files, SQLite databases to store data, including creating databases, inserting, updating and querying data. Finally, it covers how to create and use custom content providers to expose app data through a content URI.
This document discusses using SQLite databases in Android applications. SQLite is an embedded SQL database that does not require a separate server. It is included in the Android SDK and can be used to create and query database files. The document explains how to open a database, create tables, insert/update/delete rows using SQL queries, and retrieve data using cursors. Raw queries and simple queries can be used to perform retrieval queries on a single table. Transactions are used to commit or roll back changes to the database.
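Android exposes this flow through Java classes (SQLiteDatabase, Cursor), but the underlying SQL sequence the summary describes (create a table, insert/update rows, query via a cursor, commit or roll back a transaction) can be shown with Python's bundled sqlite3 module, since both sit on the same embedded engine:

```python
import sqlite3

con = sqlite3.connect(":memory:")  # Android would open a file under the app's data dir
con.execute("CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT)")
con.execute("INSERT INTO notes (body) VALUES (?)", ("first",))
con.execute("UPDATE notes SET body = ? WHERE id = ?", ("edited", 1))
con.commit()

# A transaction that fails is rolled back; committed data survives.
try:
    with con:
        con.execute("INSERT INTO notes (body) VALUES (?)", ("second",))
        raise RuntimeError("simulated failure")
except RuntimeError:
    pass

rows = [r[0] for r in con.execute("SELECT body FROM notes ORDER BY id")]
```

After the rollback, only the committed row remains; iterating the SELECT result plays the role of walking an Android Cursor.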
The document discusses various options for data storage in Android applications, including shared preferences, internal storage, external storage, SQLite databases, and network connections. It provides details on when each option would be suitable and code examples for using shared preferences, internal storage, and SQLite databases to store persistent data in key-value pairs, files, and structured databases, respectively. The document also covers best practices for accessing external storage and creating databases using the SQLiteOpenHelper class or directly with SQLiteDatabase objects.
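SQLiteOpenHelper's main job is version bookkeeping: call onCreate for a new database and onUpgrade when the stored schema version is behind the code's. The same flow can be sketched with SQLite's user_version pragma, which is what Android uses under the hood; the table and columns here are invented for illustration.

```python
import sqlite3

def open_database(path, version=2):
    """Sketch of SQLiteOpenHelper's onCreate/onUpgrade flow using the
    user_version pragma (Android does this bookkeeping for you in Java)."""
    db = sqlite3.connect(path)
    current = db.execute("PRAGMA user_version").fetchone()[0]
    if current == 0:                       # onCreate: fresh database
        db.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT)")
    if current < 2:                        # onUpgrade: migrate to v2
        db.execute("ALTER TABLE items ADD COLUMN price REAL DEFAULT 0")
    db.execute(f"PRAGMA user_version = {version}")
    db.commit()
    return db

db = open_database(":memory:")
cols = [row[1] for row in db.execute("PRAGMA table_info(items)")]
stored_version = db.execute("PRAGMA user_version").fetchone()[0]
```

Each migration step runs at most once because the recorded version is bumped after the upgrades apply.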
This document summarizes XML out-of-band data retrieval attacks using XML external entities. It discusses how XML external entities can be used to retrieve files from remote servers or make requests to external resources. It also covers how entities defined in attributes can be used to bypass restrictions on external entity references. The document demonstrates these attack techniques and outlines tools that can automate XML out-of-band exploitation.
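The attack rests on XML entity substitution: a DTD defines an entity and the parser replaces every reference to it. A real XXE payload declares an external entity (`SYSTEM "file:///etc/passwd"` or an attacker URL); Python's stdlib parsers do not fetch external entities by default, so this sketch uses an internal entity to show only the substitution mechanism itself:

```python
from xml.dom import minidom

# Internal entity: the parser substitutes the definition at the reference.
# An out-of-band XXE payload uses the same syntax with SYSTEM identifiers
# so the substituted content comes from a file or remote server instead.
doc = minidom.parseString(
    '<!DOCTYPE r [<!ENTITY greet "injected-content">]>'
    "<r>&greet;</r>"
)
expanded = doc.documentElement.firstChild.data
```

The element's text is the entity's replacement value, not the literal `&greet;` reference, which is exactly the behavior external-entity attacks abuse.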
1. Shared preferences are used to store private primitive data in key-value pairs and the data is stored in an XML file.
2. Internal storage is used to store private files on the device memory and external storage stores public data on the shared external storage like an SD card.
3. SQLite database stores structured data in a private database on the internal storage and it allows for complex queries of the data.
Introduction to the basics of Information Retrieval (IR) with an emphasis on Apache Solr/Lucene. A lecture I gave during the JOSA Data Science Bootcamp.
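The data structure at the heart of IR engines like Lucene and Solr is the inverted index, a map from each term to the documents containing it. A minimal sketch with made-up documents:

```python
from collections import defaultdict

def build_index(docs):
    """Minimal inverted index: term -> set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, *terms):
    """AND query: ids of documents containing every term."""
    sets = [index.get(t.lower(), set()) for t in terms]
    return set.intersection(*sets) if sets else set()

docs = {1: "Solr builds on Lucene",
        2: "Lucene is a search library",
        3: "Scoring ranks search results"}
idx = build_index(docs)
hits = search(idx, "lucene", "search")
```

Real engines add tokenization, stemming, postings with positions, and scoring on top, but the lookup-and-intersect core is the same.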
The document discusses various data storage options for Android applications, including shared preferences, internal and external file storage, and SQLite databases. It provides code examples for reading and writing data to shared preferences files and internal storage, as well as implementing a SQLite database using SQLiteOpenHelper and performing common CRUD operations.
Schemaless Solr allows documents to be indexed without pre-configuring fields in the schema. As documents are indexed, previously unknown fields are automatically added to the schema with inferred field types. This is implemented using Solr's managed schema, field value class guessing to infer types, and automatic schema field addition. The schema and newly added fields can be accessed via the Schema REST API, and the schema can be modified at runtime when configured as mutable. However, schemaless mode has limitations such as single field analyses and no way to change field types after initial inference.
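The "field value class guessing" step can be sketched as trying increasingly general interpretations of a raw string value. This is a simplified illustration; Solr's actual chain also covers dates and booleans via update processors configured in solrconfig.xml, and the field type names below follow Solr's point-type naming as an assumption.

```python
def infer_field_type(value):
    """Guess a Solr-style field type for a raw string value, most
    specific interpretation first."""
    for caster, name in ((int, "plong"), (float, "pdouble")):
        try:
            caster(value)
            return name
        except ValueError:
            pass
    if value.lower() in ("true", "false"):
        return "boolean"
    return "text_general"

guesses = [infer_field_type(v) for v in ("42", "3.14", "true", "hello")]
```

This ordering also explains the stated limitation: once "42" has fixed a field as numeric, a later value like "hello" cannot change the type.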
The document discusses various data storage options in Android including shared preferences, internal storage, external storage, SQLite databases, and network/cloud storage. It provides details on how to use shared preferences to store private data in key-value pairs, how to read and write files to internal and external storage, how to create and manage SQLite databases to store structured data, and lists some popular cloud storage providers.
Solr vs. Elasticsearch, Case by Case: Presented by Alexandre Rafalovitch, UN (Lucidworks)
Solr and Elasticsearch are both based on Lucene and provide full-text search and structured search capabilities. They differ in areas like configuration, indexing documents, and representing search parameters. Both allow indexing and searching large volumes of documents and facilitating fast and complex queries.
A getting-started guide for web scraping using the Scrapy framework.
GitHub Link : https://github.com/zekelabs/Python---ML---DL---PySpark-Training/tree/master/Scrapy%20Projects
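The extraction step a Scrapy spider performs in its parse() callback (pulling hrefs out of a response with selectors or a LinkExtractor) can be illustrated with only the standard library; the HTML snippet is a made-up example, and a real spider would then yield these links as new requests or items:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href attribute values from anchor tags, the core of what a
    Scrapy spider does with response.css('a::attr(href)')."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

page = '<html><body><a href="/page/1">1</a> <a href="/page/2">2</a></body></html>'
extractor = LinkExtractor()
extractor.feed(page)
```

Scrapy adds the crawling machinery around this: scheduling, politeness, pipelines, and concurrent fetching.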
This document describes a system for managing archival finding aids without using XSLT. It uses Ruby on Rails, MySQL, and SOLR to enable multiple presentations of data, support dynamic applications, and allow cross-collection search. Collections have components in a hierarchical structure. Descriptions are organized by ISAD(G) elements and stored in JSON format. EAD is used as a guide to structure data storage. An API facilitates interaction. A prototype public finding aid was created to demonstrate the system.
Validating JSON -- Percona Live 2021 presentation (Dave Stokes)
JSON is a free-form data exchange format, which can cause problems when combined with a strictly typed relational database. Thanks to the folks at https://json-schema.org and the MySQL engineers at Oracle, we can now specify required fields, type checks, and range checks.
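The three kinds of constraint mentioned (required fields, type checks, range checks) are what a JSON Schema expresses and what MySQL's JSON_SCHEMA_VALID() enforces inside the server. A hand-rolled Python sketch of the same checks, with made-up field names, shows what is being validated:

```python
import json

def check(document, required, ranges):
    """Sketch of JSON-Schema-style validation: required fields with
    expected types, plus numeric range checks. Illustrative only."""
    data = json.loads(document)
    errors = []
    for field, expected_type in required.items():
        if field not in data:
            errors.append(f"missing required field: {field}")
        elif not isinstance(data[field], expected_type):
            errors.append(f"wrong type for {field}")
    for field, (lo, hi) in ranges.items():
        if field in data and not (lo <= data[field] <= hi):
            errors.append(f"{field} out of range")
    return errors

errors = check('{"name": "widget", "price": -5}',
               required={"name": str, "price": (int, float)},
               ranges={"price": (0, 10000)})
```

Doing this in the database rather than in every client is the argument of the talk: a negative price is rejected no matter which application wrote it.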
Cassandra nice use cases and worst anti-patterns - NoSQL Matters Barcelona (DuyHai Doan)
This document summarizes a presentation on Cassandra use cases and anti-patterns. It discusses several anti-patterns to avoid such as queue-like designs, intensive updates on the same column, and designing around a dynamic schema. It also provides examples of good use cases such as rate limiting, anti-fraud detection, and account validation. The document contains an agenda, descriptions of each anti-pattern and their level of failure, as well as explanations and demonstrations of the example use cases.
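One of the good use cases listed, rate limiting, reduces to counting recent events per key inside a time window. A minimal in-process sliding-window sketch (in Cassandra the per-key state would typically live in a TTL'd row rather than in memory; the limits below are arbitrary):

```python
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `limit` events per key within `window` seconds."""
    def __init__(self, limit, window):
        self.limit, self.window = limit, window
        self.events = defaultdict(deque)
    def allow(self, key, now):
        q = self.events[key]
        while q and now - q[0] >= self.window:
            q.popleft()            # drop events that left the window
        if len(q) < self.limit:
            q.append(now)
            return True
        return False

rl = RateLimiter(limit=2, window=10)
results = [rl.allow("user1", t) for t in (0, 1, 2, 11)]
```

The third call is refused because two events already sit inside the 10-second window; by t=11 both have expired and the request is allowed again.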
This document summarizes the Cassandra Java driver and tools. It discusses the driver's architecture including connection pooling, request pipelining, load balancing policies, and automatic failover. It also covers using statements, asynchronous reads, the query builder, and the object mapper. Lastly, it discusses new automatic paging functionality in the driver.
This document summarizes Cassandra drivers and tools. It discusses the Java driver architecture including connection pooling, load balancing policies, and automatic paging. It also demonstrates Cassandra Unit for testing, the Java driver object mapper module, and Achilles object mapper with features like dirty checking. Live coding examples are provided for these tools.
09. Local Database Files and Storage on WP (Nguyen Tuan)
This document provides an overview of local data storage options for Windows apps, including isolated storage, file I/O, settings storage, SQLite database, and external storage. It discusses using the local, installation, and shared application folders to store files. It also covers serialization, file associations, and APIs for reading and writing files like IsolatedStorageFile and StorageFile. The document demonstrates saving data to isolated storage, settings, and a SQLite database and loading data from these sources. It recommends SQLite as a local database option and provides instructions for adding SQLite support to a project.
Jonathan is a MySQL consultant who specializes in SQL, indexing, and reporting for big data. This tutorial will cover strategies for resolving 80% of performance problems, including indexes, partitioning, intensive table optimization, and finding and addressing bottlenecks. The strategies discussed will be common, established approaches based on the presenter's experience working with MySQL since 2007.
The document describes a presentation about rapidly prototyping with Solr. It will demonstrate ingesting documents into Solr, adjusting Solr's schema, and showcasing data in a flexible search UI. The presentation will cover faceting, highlighting, spellchecking, and debugging. Time will also be spent outlining next steps to develop and take the search application to production.
This document provides information about Frits Hoogland, an Oracle database performance, configuration and capacity specialist with 25 years of experience. It discusses mutexes in the Oracle database, noting they were gradually implemented starting in Oracle 10.2 to manage concurrency. The presentation assumes knowledge of concepts like mutexes/spinlocks and the general workings of the database.
SF Bay Area DFIR Meetup (2016-04-30) - OSXCollector (Rishi Bhargava)
OSXCollector is an open source forensic evidence collection and analysis toolkit for Mac OS X. It collects a wide range of system and application data from an OS X system, including files, browser history, startup items, quarantines, and more. The output is formatted as JSON, which contains metadata like hashes, timestamps, and signature chains for collected artifacts. This standardized format makes the data easy to analyze programmatically through output filters to detect anomalies, malware artifacts, and other interesting findings.
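An "output filter" in this model is just a pass over the JSON records that tags interesting ones. The sketch below flags records whose hash appears on a blacklist; the hash, file paths, and the tag key name are made-up illustrations of the pattern, not OSXCollector's actual filter code.

```python
# One JSON record per collected artifact; a filter annotates matches.
KNOWN_BAD_SHA1 = {"da39a3ee5e6b4b0d3255bfef95601890afd80709"}  # example value

def flag_known_bad(records):
    for record in records:
        if record.get("sha1") in KNOWN_BAD_SHA1:
            record["blacklist_hits"] = ["hashes"]  # illustrative tag key
    return records

records = [
    {"file_path": "/tmp/dropper",
     "sha1": "da39a3ee5e6b4b0d3255bfef95601890afd80709"},
    {"file_path": "/bin/ls",
     "sha1": "ffffffffffffffffffffffffffffffffffffffff"},
]
flagged = [r["file_path"] for r in flag_known_bad(records)
           if "blacklist_hits" in r]
```

Because every artifact carries the same metadata fields (hashes, timestamps, signature chains), filters like this compose: each pass reads the stream, annotates, and hands it on.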
09.1. Android - Local Database (SQLite) (Oum Saokosal)
This document provides information about SQLite databases in Android applications. It discusses how SQLite is integrated into the Android runtime, demonstrates basic SQL statements for creating tables, inserting, updating, deleting and selecting records, and shows code examples for executing SQL statements and retrieving data using a Cursor object in an Android app. It also explains how to view the database files on an Android emulator.
Faster Data Analytics with Apache Spark using Apache Solr (Chitturi Kiran)
Apache Spark is a fast and general engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing. Spark SQL allows users to execute relational queries in Spark with distributed in-memory computations. Though Spark gives us faster in-memory computations, Solr is blazing fast for some analytic queries. In this talk, we will take a deep dive into how to optimize the SQL queries from Spark to Solr by plugging into the Spark LogicalPlanner using pushdown strategies. The key takeaways from the talk will be:
How to perform Spark SQL queries with Apache Solr?
What happens inside a Spark SQL query?
How to plug into Spark Logical Planner?
What type of push-down strategies are optimal with Solr?
Examples of push-down strategies
Presented at Lucene Revolution - http://sched.co/BAwV
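The essence of a pushdown strategy is translating relational predicates into the data source's native filters so rows are pruned before they reach Spark. A sketch of the translation step for Solr, with an illustrative predicate representation (the real spark-solr connector hooks this into Spark's planner rather than taking triples):

```python
def pushdown_filters(filters):
    """Translate (field, op, value) predicates into Solr fq clauses so
    Solr prunes documents instead of Spark filtering full rows."""
    fq = []
    for field, op, value in filters:
        if op == "=":
            fq.append(f"{field}:{value}")
        elif op == ">=":
            fq.append(f"{field}:[{value} TO *]")
        elif op == "<=":
            fq.append(f"{field}:[* TO {value}]")
        else:
            raise ValueError(f"cannot push down operator {op}")
    return fq

fq = pushdown_filters([("year", ">=", 2015), ("category", "=", "books")])
```

Predicates that cannot be expressed as fq clauses stay in Spark, which is why only some strategies pay off.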
This document provides an introduction to Apache Solr, including:
- Solr is an open-source search engine and REST API built on Lucene for indexing and searching documents.
- Solr architecture includes nodes, cores, schemas, and concepts like SolrCloud which uses Zookeeper for coordination across collections and shards.
- Documents are indexed, queried, updated, and deleted via the REST API or client libraries. Queries support various types including range, date, boolean, and proximity queries.
- Installation and configuration of standalone Solr involves downloading, extracting, and running bin/solr scripts to start the server and create cores.
- Resources for learning more include tutorials, documentation, and integration options
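Since Solr's query interface is plain HTTP, the query types listed above are just URL parameters against /select. A sketch building a range-plus-boolean query with faceting; the host, core name, and field names are made-up assumptions:

```python
from urllib.parse import urlencode

# Boolean + range query with a facet over category; wt=json selects the
# JSON response writer. Issue with any HTTP client (urllib, curl, ...).
params = {
    "q": "title:solr AND price:[10 TO 100]",
    "fq": "in_stock:true",
    "facet": "true",
    "facet.field": "category",
    "rows": 10,
    "wt": "json",
}
url = "http://localhost:8983/solr/mycore/select?" + urlencode(params)
```

The fq (filter query) clause is cached independently of the main query, which is why constraints like in_stock:true usually go there rather than into q.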
The document describes Windows Credentials Editor (WCE), a tool that manipulates Windows logon sessions to dump and modify credentials in memory. WCE has two main features - it can dump in-memory credentials like usernames, domains, and NTLM hashes from current, future, and terminated logon sessions; and it supports pass-the-hash by allowing changes to NTLM credentials or creation of new logon sessions with arbitrary credentials. The document discusses two methods WCE could use - directly calling authentication package APIs, which requires running code in LSASS; or reading LSASS memory to locate logon session and credential structures and decrypt credentials without injecting code.
This document describes a new method for exploiting PL/SQL injection without needing to create functions or procedures. It involves injecting a pre-compiled cursor using the DBMS_SQL package to execute arbitrary SQL. The attacker can use this to grant privileges to themselves or create their own functions without any system privileges beyond CREATE SESSION. It provides an example exploiting the SDO_DROP_USER_BEFORE trigger in Oracle to gain DBA privileges in this way without needing CREATE PROCEDURE permission.
By using specially crafted parameters in double quotes, it is possible to bypass the input validation of the Oracle dbms_assert package and inject SQL code. This allows dozens of already patched Oracle vulnerabilities to be exploited again across versions 8.1.7.4 to 10.2.0.2. The researcher notified Oracle of the problem in April 2006. To mitigate risks, privileges like CREATE PROCEDURE should be revoked to prevent injection of malicious functions or procedures.
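The class of flaw described can be modeled with a naive validator that pattern-checks bare identifiers but accepts anything wrapped in double quotes verbatim. This is an illustration of the bypass mechanism, not Oracle's actual dbms_assert code, and the smuggled payload is invented:

```python
import re

def naive_qualified_sql_name(identifier):
    """Simplified model of the reported behavior: bare identifiers must
    match a safe pattern, but double-quoted identifiers pass unchecked."""
    if re.fullmatch(r"[A-Za-z][A-Za-z0-9_$#]*", identifier):
        return identifier
    if identifier.startswith('"') and identifier.endswith('"'):
        return identifier          # quoted names accepted verbatim
    raise ValueError("invalid SQL name")

safe = naive_qualified_sql_name("SCOTT")
smuggled = naive_qualified_sql_name('"A" WHERE 1=1 --"')
```

Once the quoted string is concatenated into dynamic SQL, the content between and after the quotes executes as part of the statement, which is why revoking privileges like CREATE PROCEDURE is suggested as a mitigation.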
The document discusses performing digital forensics investigations using open source tools. It covers the major steps of the process: data acquisition, examination, and report preparation. For data acquisition, it describes how to gather volatile system data like memory dumps and network traffic, as well as disk images. For examination, it discusses forensic analysis software like The Sleuth Kit and Autopsy, analyzing memory dumps with Volatility, and examining network traffic with Wireshark. It also provides examples of timeline creation and registry analysis.
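The timeline step amounts to normalizing timestamps from each source (filesystem metadata, memory analysis, network capture) and merging them into one ordered view. The artifact entries below are invented examples of the pattern:

```python
# (source, ISO-8601 timestamp, description) triples from different tools.
artifacts = [
    ("filesystem", "2016-03-01T10:02:11", "mft: evil.exe created"),
    ("network",    "2016-03-01T10:01:55", "pcap: beacon to 203.0.113.7"),
    ("memory",     "2016-03-01T10:02:40", "volatility: evil.exe pid 4242"),
]
# ISO-8601 strings sort correctly as plain text, so a key on the
# timestamp field interleaves the sources chronologically.
timeline = sorted(artifacts, key=lambda entry: entry[1])
sources_in_order = [source for source, _, _ in timeline]
```

The merged view is what lets an examiner see, for example, network beaconing shortly before a file creation and a matching process in memory.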
The document discusses how Windows Credentials Editor (WCE) can be used to obtain credentials stored in memory on Windows systems, allowing an attacker to steal usernames and hashes to perform pass-the-hash attacks without cracking passwords. WCE enables bypassing common pre-exploitation techniques by directly using harvested credentials. Leaving logon sessions disconnected rather than logged off can leave credentials exposed in memory as "zombie sessions".
Santorini is a popular Greek island located in the Cyclades islands in the southern Aegean Sea. It formed from volcanic explosions that left the island with steep cliffs surrounding a central caldera filled with water. The island is home to around 7,000 residents spread across 10 villages and has a temperate climate. Santorini has archaeological sites from the Minoan era and was devastated by a massive volcanic eruption around 1600 BC. Tourism is now the main industry, attracting thousands of visitors each year to see the scenic caldera views and sunset from towns like Oia and Fira.
This document provides an overview of database security platforms and the evolution of this market. Some key points:
- Database security platforms have evolved beyond just monitoring database activity and now incorporate features like vulnerability assessment, user rights management, data discovery/filtering, and blocking capabilities.
- The increased scope of monitoring coverage and additional security features mean "Database Activity Monitoring" is no longer an accurate term - these solutions are now more appropriately called "Database Security Platforms."
- These platforms consolidate multiple database security tools into a single solution and can monitor both relational and non-relational databases as well as multiple database types.
- Vendors are beginning to differentiate their database security platforms based on primary use cases
3. About Red-Database-Security
• Founded 2004 in Germany
• Dedicated to Oracle Security
• Consulting / Training / Software
• More than 1500 security vulnerabilities found in Oracle products
• More than 2000 Oracle databases audited in 2011
4. Introduction
• More and more databases are affected by attacks
• Database forensics is still an exotic/academic topic
• No easy-to-use tools are available
• Collected data is difficult to analyse
→ This presentation will show new approaches which make the analysis easier
5. Current Status – Books & Documents
• Oracle Forensics by Paul M. Wright – out of stock (used copies 230 USD), new books coming soon
http://www.amazon.com/gp/product/0977671526/sr=8-2/qid=1315500507/ref=olp_product_details?ie=UTF8&me=&qid=1315500507&sr=8-2&seller=
• Oracle Forensics series by David Litchfield
http://www.databasesecurity.com/oracle-forensics.htm
• Several smaller documents
6. Available Tools for Forensics
• Logminer (free, Oracle)
• Data Unloader (mostly commercial, e.g. qDUL from Qualea)
• Verity Data Block Examiner, cadfile, … (free, v3rity Ltd.)
• McAfee Security Scanner for Databases (commercial, analysis)
7. Traces
Different kinds of traces can be used:
• Files at OS level
• Results from OS commands at OS level
• Volatile tables – only available if the DB is up and running
• Temporary tables – content is automatically purged by Oracle after a while
• Permanent tables
9. Find Traces (Tables/Views)
• GV$* (volatile; use GV$* instead of V$* to be Oracle cluster (RAC) compliant)
• WRH$* (temporary)
• Audit Views
• USER$
• MON_MOD$ (temporary)
• COL_USAGE$ (temporary)
• Recycle-Bin
• …
10. Oracle Forensic Problems
• Still requires a deep knowledge of database architecture/design
• Requires good SQL know-how (outer joins are mandatory in many SELECT queries, e.g. joining audit & user tables)
• Requires a strong knowledge of the Oracle (and the application) repository
• Requires a strong knowledge of typical database attacks (what can be found where)
• Little to no tool support
11. Typical Approach for DB Forensics
• Collect traces from the file system and database
• OS: copy files
• DB: spool the output of SQL statements to a spool file to preserve the evidence1
• Copy the collected files to the examiner PC
• Analyze the collected evidence
→ Difficult to analyze because the data types, formats, and dependencies are lost.
→ Just a big text file. No query language.
1 http://www.databasesecurity.com/dbsec/LiveResponse.pdf
12. Current Approach
Victim DB
sqlplus / as sysdba
SQL> spool coll.lst
SQL> SELECT LAST_ACTIVE_TIME, PARSING_USER_ID, SQL_TEXT FROM V$SQL
     ORDER BY LAST_ACTIVE_TIME ASC;
SQL> SELECT ST.PARSING_SCHEMA_ID, TX.SQL_TEXT
     FROM WRH$_SQLSTAT ST, WRH$_SQLTEXT TX
     WHERE TX.SQL_ID = ST.SQL_ID;
SQL> SELECT * FROM AUD$;
SQL> SELECT USER_ID, SESSION_ID, SAMPLE_TIME
     FROM SYS.WRH$_ACTIVE_SESSION_HISTORY;
SQL> SELECT SID, USER#, USERNAME, TERMINAL, OSUSER, PROGRAM, LOGON_TIME
     FROM V$SESSION;
SQL> SELECT USER#, NAME, ASTATUS, PASSWORD, CTIME, PTIME, LTIME
     FROM SYS.USER$ WHERE TYPE#=1;

Examiner PC
notepad coll.lst
14. Advanced Approach
• Same data collection approach, but use external tables instead of unstructured text files
• An Oracle external table allows preserving the entire table data, including binary data, data types, …, in a binary file
→ Requires Oracle 10.2 or higher
→ Analysis will be much easier
→ Much faster than normal spooling
→ Joins and lookups between the different collected information are still possible by using the renamed external tables
15. Advanced Approach
1.) Victim DB
• UNIX:
  • As root: collect_unix_artifacts_as_root.sh
  • As Oracle: collect_unix_artifacts_as_oracle.sh
• Oracle:
  • As SYS: collect_db_artifact_as_sys.sql
2.) Transfer data to examiner PC (+ burn to DVD)
3.) Examiner PC
• Create objects (prepare_examiner_db_case001.sql)
4.) Analyse
16. Advanced Approach II (Tables/Views)
Victim DB
CREATE TABLE forensicmat.ext_gvversion
  ORGANIZATION EXTERNAL
  ( TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY data_unload_dir
    LOCATION ( 'ext_gvversion.dmp' ) )
  AS SELECT * FROM gv$version;

Examiner PC
CREATE TABLE "EXT_GVVERSION" ("INST_ID" NUMBER, "BANNER" VARCHAR2(80))
  ORGANIZATION EXTERNAL
  ( TYPE ORACLE_DATAPUMP
    DEFAULT DIRECTORY for_ora_ext_tables1
    LOCATION ( 'ext_gvversion.dmp' ) );
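The benefit over a plain spool file is that the schema travels with the data. A minimal Python sketch of that idea, using JSON instead of Oracle's binary DataPump format (the sample row and column definitions are illustrative, not taken from a real system):

```python
import json

# Hypothetical collected rows from GV$VERSION (illustrative values only).
schema = [("INST_ID", "NUMBER"), ("BANNER", "VARCHAR2(80)")]
rows = [(1, "Oracle Database 10g Enterprise Edition Release 10.2.0.4.0")]

# A plain spool file flattens everything to text: types and structure are lost.
spooled = "\n".join(" ".join(str(v) for v in row) for row in rows)

# Dumping schema + rows together (as an external table dump does, in binary
# form) keeps the metadata, so the examiner can rebuild a queryable table.
dump = json.dumps({"schema": schema, "rows": rows})

restored = json.loads(dump)
assert restored["schema"][0] == ["INST_ID", "NUMBER"]
assert restored["rows"][0][0] == 1   # still a number, not a string
```

The point is not the serialization format but that column names and types survive the transfer, so joins on the examiner PC remain possible.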
18. Advanced Approach (OS Commands)
Victim DB
ls -laR --full-time $ORACLE_HOME | tee -a $FORDIR/oracle/commands/all_files.txt

Examiner PC
CREATE TABLE ext_all_files
  (file_mode varchar2(11), num_of_links number,
   owner_name varchar2(32), group_name varchar2(32),
   bytes number, file_last_mod_date varchar2(10),
   file_last_mod_time varchar2(20), gmt varchar2(6),
   filename varchar2(256))
  ORGANIZATION EXTERNAL
  ( TYPE oracle_loader
    DEFAULT DIRECTORY for_ora_commands1
    ACCESS PARAMETERS
    ( RECORDS DELIMITED BY NEWLINE
      FIELDS TERMINATED BY ' '
      MISSING FIELD VALUES ARE NULL )
    LOCATION ('all_files.txt') )
  PARALLEL 5 REJECT LIMIT UNLIMITED;
21. Advanced Approach (OS Files)
Victim DB
cp -p -v /etc/passwd $FORDIR/unix/files/passwd.txt

Examiner PC
CREATE TABLE ext_etc_passwd
  (username varchar2(32), shadow varchar2(32),
   userid number, groupid number,
   usercomment varchar2(128), homedir varchar2(128),
   shell varchar2(128))
  ORGANIZATION EXTERNAL
  ( TYPE oracle_loader
    DEFAULT DIRECTORY for_unix_files1
    ACCESS PARAMETERS
    ( RECORDS DELIMITED BY NEWLINE
      FIELDS TERMINATED BY ':'
      MISSING FIELD VALUES ARE NULL )
    LOCATION ('passwd.txt') )
  PARALLEL 5 REJECT LIMIT UNLIMITED;
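The ':'-delimited mapping performed by the oracle_loader driver can be mimicked in a few lines of Python; the sample passwd line is illustrative (the second field is the password placeholder, shadowed on most systems):

```python
# Illustrative /etc/passwd line; real entries come from the collected file.
line = "oracle:x:500:501:Oracle owner:/home/oracle:/bin/bash"

cols = ["username", "shadow", "userid", "groupid",
        "usercomment", "homedir", "shell"]
values = line.split(":")              # FIELDS TERMINATED BY ':'
record = dict(zip(cols, values))      # short rows simply leave fields NULL
record["userid"] = int(record["userid"])
record["groupid"] = int(record["groupid"])

assert record["username"] == "oracle"
assert record["userid"] == 500
assert record["shell"] == "/bin/bash"
```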
23. Timeline Creation
• A timeline can be helpful during the analysis of forensic data
• Data from different sources is displayed together
• Easy to implement
24. Timeline Creation
• Every piece of information with a timestamp (e.g. user locking) becomes a separate row; the rows are unified with the UNION command
• SYS.USER$ contains different timestamps:
  • CTIME – user created
  • PTIME – password changed
  • LTIME – user locked
• A single row in SYS.USER$ therefore becomes 3 rows in the timeline table/view
• Additional information must be added from different tables/views (e.g. DB startup, auditing, ...)
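The unpivot step described above can be sketched in Python, assuming one already-collected SYS.USER$ row (all values are illustrative):

```python
# One SYS.USER$ row with three timestamps becomes three timeline events.
user_row = {"name": "SCOTT",
            "ctime": "2011-01-10 09:00:00",   # user created
            "ptime": "2011-03-02 14:30:00",   # password changed
            "ltime": "2011-09-08 23:55:00"}   # user locked

labels = {"ctime": "User Created",
          "ptime": "Password Changed",
          "ltime": "User Locked"}

# Emit one (timestamp, activity, username) row per non-NULL timestamp,
# then sort everything chronologically (ISO timestamps sort lexically).
timeline = [(user_row[col], labels[col], user_row["name"])
            for col in ("ctime", "ptime", "ltime") if user_row[col]]
timeline.sort()

assert len(timeline) == 3
assert timeline[0][1] == "User Created"
assert timeline[-1][1] == "User Locked"
```

Rows from other sources (DB startup, audit records, ...) are appended to the same list before sorting, which is exactly what the UNION ALL in the next slide does in SQL.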
25. Timeline Creation
select 0 as inst_id, 'DBA' as dstype, 'DBA_USERS' as datasource, created as
timest, 'User Created' as activity, 'CREATED' as timestamp_name, username as
detail1, username as username, null as serial#, null as session_id from
ext_dba_users
union all
select 0 as inst_id, 'DBA' as dstype, 'DBA_USERS' as datasource, lock_date as
timest, 'User Locked' as activity, 'LOCK_DATE' as timestamp_name, username as
detail1, username as username, null as serial#, null as session_id from
ext_dba_users where lock_date is not null
union all
select 0 as inst_id, 'DBA' as dstype, 'DBA_OBJECTS' as datasource, created as
timest, 'Table Created' as activity, 'CREATED' as timestamp_name,
owner||'.'||object_name as detail1, owner as username, null as serial#,
null as session_id from ext_dba_objects where object_type='TABLE'
union all
select 0 as inst_id, 'DBA' as dstype, 'DBA_OBJECTS' as datasource, created as
timest, 'View Created' as activity, 'CREATED' as timestamp_name,
owner||'.'||object_name as detail1, owner as username, null as serial#,
null as session_id from ext_dba_objects where object_type='VIEW'
...
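The same UNION ALL technique can be tried out on any SQL engine. A minimal sketch on an in-memory SQLite database (table and column names only loosely mirror the Oracle views; the sample data is invented):

```python
# Sketch: build a unified timeline with UNION ALL on toy SQLite tables
# that stand in for the ext_dba_users / ext_dba_objects collections.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE ext_dba_users(username TEXT, created TEXT, lock_date TEXT);
INSERT INTO ext_dba_users VALUES
  ('SCOTT', '2013-01-01 10:00', NULL),
  ('HR',    '2013-02-01 09:00', '2013-06-01 12:00');
CREATE TABLE ext_dba_objects(owner TEXT, object_name TEXT,
                             object_type TEXT, created TEXT);
INSERT INTO ext_dba_objects VALUES
  ('SCOTT', 'EMP', 'TABLE', '2013-03-01 11:00');
""")

timeline = con.execute("""
SELECT created   AS timest, 'User Created'  AS activity, username AS detail1
  FROM ext_dba_users
UNION ALL
SELECT lock_date AS timest, 'User Locked'   AS activity, username AS detail1
  FROM ext_dba_users WHERE lock_date IS NOT NULL
UNION ALL
SELECT created   AS timest, 'Table Created' AS activity,
       owner || '.' || object_name AS detail1
  FROM ext_dba_objects WHERE object_type = 'TABLE'
ORDER BY timest
""").fetchall()
```

Sorting the unified rows by timestamp is what turns the separate sources into a single readable timeline.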
30. Typical Tables and Patterns
• The following slides contain typical database
objects (like sys.user$) and common attack
traces which can be found in these objects.
• Data from audit logs (disabled in most
real-world systems) is not covered in this
presentation.
• Files (like listener.log) are skipped to save some
time.
32. Tables – sys.user$
• Interesting columns
• lcount
• Number of invalid login attempts
• Reset after a successful login
• Maximum number depends on the profile
setting
• ltime (lock time)
• Lock time of the account
33. Tables – sys.user$
• Typical attack patterns – lcount
• Multiple accounts have an lcount > 0
→ Someone tries to guess user accounts without
locking them
• Agent accounts (e.g. Tivoli) have an lcount > 0 and
an lcount below the profile maximum
→ Someone tries to guess the password of an agent
account. The lcount of agent accounts is normally 0 or
at the profile maximum
• Very large lcount value (e.g. 30,000)
→ Brute-force attack using a tool, or someone forgot
to change the client-side password of an agent.
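The three lcount patterns above lend themselves to a simple triage script. A minimal sketch, with the profile maximum, the 30,000 threshold interpretation, and the agent-account naming convention all being assumptions for illustration:

```python
# Sketch: flag the lcount patterns from the slide. PROFILE_MAX and the
# "TIVOLI" account-name prefix are assumptions, not Oracle defaults.
PROFILE_MAX = 10

def classify_lcount(accounts):
    """accounts: dict of username -> lcount. Returns a list of findings."""
    findings = []
    nonzero = [u for u, c in accounts.items() if c > 0]
    if len(nonzero) > 2:
        findings.append("possible account guessing (many accounts with lcount > 0)")
    for user, count in accounts.items():
        # agent accounts normally sit at 0 or at the profile maximum
        if 0 < count < PROFILE_MAX and user.startswith("TIVOLI"):
            findings.append(f"agent account {user} shows failed logins")
        if count >= 30000:
            findings.append(f"{user}: probable brute-force ({count} attempts)")
    return findings

hits = classify_lcount({"SCOTT": 1, "HR": 2, "TIVOLI1": 3, "SYS": 31000})
```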
34. Tables – sys.user$
• Typical attack patterns – ltime
• Multiple accounts with a similar ltime
→ Someone tried to guess user accounts but the
accounts were locked.
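"Similar ltime" can be made concrete by grouping accounts whose lock times fall close together. A minimal sketch; the 60-second window and the sample lock times are assumptions:

```python
# Sketch: find clusters of accounts locked at nearly the same time,
# a hint of an account-guessing run. The window size is an assumption.
from datetime import datetime, timedelta

def lock_time_clusters(ltimes, window_seconds=60):
    """ltimes maps username -> lock datetime. Returns groups of
    usernames whose locks lie within window_seconds of the previous one."""
    ordered = sorted(ltimes.items(), key=lambda kv: kv[1])
    clusters = []
    current = [ordered[0][0]] if ordered else []
    for (_, prev_t), (user, t) in zip(ordered, ordered[1:]):
        if t - prev_t <= timedelta(seconds=window_seconds):
            current.append(user)          # still inside the same burst
        else:
            if len(current) > 1:
                clusters.append(current)  # keep only multi-account bursts
            current = [user]
    if len(current) > 1:
        clusters.append(current)
    return clusters

locks = {
    "SCOTT": datetime(2013, 6, 1, 12, 0, 5),
    "HR":    datetime(2013, 6, 1, 12, 0, 20),
    "OUTLN": datetime(2013, 6, 1, 12, 0, 40),
    "MDSYS": datetime(2013, 8, 3, 9, 0, 0),
}
suspicious = lock_time_clusters(locks)
```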
35. Tables – sys.wrh$_active_session_history
• Interesting columns
• program
• Program used
• module
• Module name used
• machine (since 11.2)
• Which user connected from which machine
→ Important for password changes
• Warning! The data from
sys.wrh$_active_session_history is not always reliable.
Sometimes 0 (= SYS) is recorded even if the connection was
not made by SYS.
36. Tables – sys.wrh$_active_session_history
• Typical attack patterns
• program
• Unwanted/unauthorized programs
• Export utilities
• module
• Program and module do not match (e.g.
oracle.exe & "TOAD 10.3.0.1") → a tool was renamed to
bypass a logon trigger
• machine
• Login from an unusual machine
• Combination of user & machine
37. Tables – sys.wrh$_active_session_history (11.2)
select program, username, machine, count(*) as cnt
from sys.wrh$_active_session_history w, dba_users d
where w.user_id=d.user_id (+)
and (lower(program) not like '%oracle%(%)%')
group by program, username, machine
38. Tables – sys.wrh$_active_session_history
select program, username, count(*) as cnt
from sys.wrh$_active_session_history w, dba_users d
where w.user_id=d.user_id (+)
and (lower(program) not like '%oracle%(%)%')
group by program, username
42. Tables – sys.mon_mods$
• Typical attack patterns
• obj#
• Suspicious statements (insert/update/delete)
• Inserts
• Inserts into critical tables (privileges, ...)
• Updates
• Updates of log entries (e.g. AUD$, custom log
tables, ...)
• Updates of critical data
• A high update count on SYS.USER$ can be an
indication of brute-force attacks (high lcount value)
• Deletes
• Deletion of log entries (e.g. AUD$, custom log
tables, ...)
43. Tables – sys.mon_mods$
select u.name as owner, o.name as table_name, m.inserts,
m.updates, m.deletes, m.timestamp
from sys.mon_mods$ m, sys.user$ u, sys.obj$ o
where o.obj# = m.obj# and u.user# = o.owner#
46. Database Blocks
SQL> select distinct dbms_rowid.rowid_block_number(rowid) from password;
DBMS_ROWID.ROWID_BLOCK_NUMBER(ROWID)
------------------------------------
57170
SQL> select tablespace_name from user_segments where segment_name in
('PASSWORD');
TABLESPACE_NAME
------------------------------
SYSTEM
SQL> select file_id from dba_data_files where tablespace_name='SYSTEM';
FILE_ID
----------
1
9
SQL> alter system dump datafile 1 block 57170;
System altered.
52. Pattern – Privilege Escalation
• Privilege escalation exploits often use stored
procedures as helper functions
• Additional entries in DBA_ROLE_PRIVS,
DBA_TAB_PRIVS, DBA_SYS_PRIVS
• Possibly deleted entries in SYS.SYSAUTH$ /
SYS.OBJAUTH$ (still visible in data blocks)
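Spotting "additional entries" is easiest with a known-good baseline of the grant views to diff against. A minimal sketch of that comparison; the grant tuples and function name are illustrative:

```python
# Sketch: diff a current grants listing (e.g. exported from
# DBA_ROLE_PRIVS / DBA_SYS_PRIVS) against a known-good baseline
# snapshot to surface grants added since the snapshot was taken.
def new_grants(baseline, current):
    """Both arguments are sets of (grantee, privilege) tuples."""
    return sorted(current - baseline)

baseline = {("SCOTT", "CREATE SESSION"), ("HR", "CREATE SESSION")}
current = baseline | {("SCOTT", "DBA")}
escalations = new_grants(baseline, current)
```

Without a pre-incident snapshot, the same diff can be run against a reference install of the same Oracle version.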
53. Pattern – Run OS Commands
• DBA_EXTERNAL_TABLES: external table with a
preprocessor (column ACCESS_PARAMETERS)
• DBA_JAVA_POLICY: new entries
• DBA_LIBRARIES: new entries
• CTXSYS.CTX_PREFERENCE_VALUES: Oracle Text
user filter, e.g. PRV_ATTRIBUTE=oratclsh.exe
54. Pattern – Backdoors
• Various places, depending on the backdoor used
• SYS.USER$
• Oracle password file
• Logon trigger
• Privileges (e.g. grant execute on
SYS.DBMS_STREAMS_RPC to public)
• ...
56. Pattern – Data Export
• Attackers often export the database (or parts
of it) using the official export utilities.
• These traces can easily be found in the
• listener.log
• sys.wrh$_active_session_history (requires a special
license)
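The listener.log side of this check boils down to scanning connect entries for the export client names. A minimal sketch; the simplified line format and program list are assumptions, since real listener.log entries are longer and vary by version:

```python
# Sketch: scan listener-log-style connect lines for the official export
# clients. The line format here is simplified for illustration.
EXPORT_PROGRAMS = ("exp", "expdp", "exp.exe", "expdp.exe")

def find_export_connects(lines):
    hits = []
    for line in lines:
        # pull the value of the PROGRAM= attribute, if present
        program = line.split("PROGRAM=")[-1].split(")")[0] if "PROGRAM=" in line else ""
        # compare only the basename, case-insensitively
        if program.rsplit("/", 1)[-1].lower() in EXPORT_PROGRAMS:
            hits.append(line)
    return hits

log = [
    "... (PROGRAM=sqlplus)(HOST=ws01) ...",
    "... (PROGRAM=/u01/app/oracle/bin/expdp)(HOST=ws02) ...",
]
exports = find_export_connects(log)
```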
57. Pattern – oradebug
• Details of these attacks will be shown in Laszlo
Toth's talk "Almost invisible cloak in Oracle
databases" at Hacktivity (15:10-15:55)
• oradebug commands are recorded in the
trace files and sometimes in incident response
files (if oradebug causes an Oracle error, e.g.
ORA-07445)
• Trace files can easily be removed at OS level
58. Summary
• More convenient database forensics tools are
needed to allow non-database security
experts to find traces.
• Automation across multiple databases is needed
• Top-down approaches are often easier to
understand than bottom-up approaches