The document discusses recovery of lost or corrupted InnoDB tables in MySQL. It describes how MySQL stores data in InnoDB, including the system tables SYS_INDEXES and SYS_TABLES. It also discusses typical failure scenarios like deleted records, dropped tables, and corrupted tablespaces. The document introduces the InnoDB recovery tool which can be used to recover tables from corrupted InnoDB data files and table definitions.
OpenSQL Camp 2010: Recovery of lost or corrupted InnoDB tables
1. Recovery of lost or corrupted InnoDB tables
Yet another reason to use InnoDB
OpenSQL Camp 2010, Sankt Augustin
Aleksandr.Kuzminsky@Percona.com
Percona Inc.
http://MySQLPerformanceBlog.com
2. Agenda
1. InnoDB format overview
2. Internal system tables SYS_INDEXES and SYS_TABLES
3. InnoDB Primary and Secondary keys
4. Typical failure scenarios
5. InnoDB recovery tool
-2-/43
Three things are certain:
Death, taxes and lost data.
Guess which has occurred?
4. Recovery of lost or corrupted InnoDB tables
How MySQL stores data in InnoDB
1. A tablespace (ibdata1)
• System tablespace (data dictionary, undo, insert buffer, etc.)
• PRIMARY indices (PK + data)
• SECONDARY indices (SK + PK): if the key is (f1, f2), it is stored as (f1, f2, PK)
2. File per table (.ibd)
• PRIMARY index
• SECONDARY indices
3. InnoDB page size is 16 KB (uncompressed)
4. Every index is identified by an index_id
-4-/43
6. Recovery of lost or corrupted InnoDB tables
How MySQL stores data in InnoDB
Page identifier: index_id
mysql> CREATE TABLE innodb_table_monitor(x int) ENGINE=InnoDB;
Error log:
TABLE: name test/site_folders, id 0 119, columns 9, indexes 1, appr.rows 1
COLUMNS: id: DATA_INT len 4 prec 0; name: type 12 len 765 prec 0; sites_count: DATA_INT len 4 prec 0;
 created_at: DATA_INT len 8 prec 0; updated_at: DATA_INT len 8 prec 0;
 DB_ROW_ID: DATA_SYS prtype 256 len 6 prec 0; DB_TRX_ID: DATA_SYS prtype 257 len 6 prec 0;
 DB_ROLL_PTR: DATA_SYS prtype 258 len 7 prec 0;
INDEX: name PRIMARY, id 0 254, fields 1/7, type 3
 root page 271, appr.key vals 1, leaf pages 1, size pages 1
 FIELDS: id DB_TRX_ID DB_ROLL_PTR name sites_count created_at updated_at
-6-/43
7. Recovery of lost or corrupted InnoDB tables
InnoDB page format
FIL HEADER
PAGE_HEADER
INFIMUM + SUPREMUM RECORDS
USER RECORDS
FREE SPACE
Page Directory
Fil Trailer
-7-/43
8. Recovery of lost or corrupted InnoDB tables
InnoDB page format
Fil Header
Name | Size | Remarks
FIL_PAGE_SPACE | 4 | ID of the space the page is in
FIL_PAGE_OFFSET | 4 | ordinal page number from start of space
FIL_PAGE_PREV | 4 | offset of previous page in key order
FIL_PAGE_NEXT | 4 | offset of next page in key order
FIL_PAGE_LSN | 8 | log serial number of page's latest log record
FIL_PAGE_TYPE | 2 | current defined types are: FIL_PAGE_INDEX, FIL_PAGE_UNDO_LOG, FIL_PAGE_INODE, FIL_PAGE_IBUF_FREE_LIST
FIL_PAGE_FILE_FLUSH_LSN | 8 | "the file has been flushed to disk at least up to this lsn" (log serial number), valid only on the first page of the file
FIL_PAGE_ARCH_LOG_NO | 4 | the latest archived log file number at the time that FIL_PAGE_FILE_FLUSH_LSN was written (in the log)
Data is stored in pages of type FIL_PAGE_INDEX.
-8-/43
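For reference, these fixed-size fields can be read straight off a page image with a few struct unpacks. A minimal Python sketch, assuming an uncompressed 16 KB page and the offsets listed above:

```python
import struct

def parse_fil_header(page: bytes) -> dict:
    """Parse the 38-byte FIL header of an uncompressed InnoDB page (sketch)."""
    space, page_no, prev, nxt = struct.unpack(">IIII", page[0:16])  # FIL_PAGE_SPACE .. FIL_PAGE_NEXT
    lsn, = struct.unpack(">Q", page[16:24])                         # FIL_PAGE_LSN
    page_type, = struct.unpack(">H", page[24:26])                   # FIL_PAGE_TYPE
    return {
        "space_id": space,
        "page_no": page_no,
        "prev": prev,
        "next": nxt,
        "lsn": lsn,
        "type": page_type,   # data pages carry the FIL_PAGE_INDEX type
    }
```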
9. Recovery of lost or corrupted InnoDB tables
InnoDB page format
Page Header
Name | Size | Remarks
PAGE_N_DIR_SLOTS | 2 | number of directory slots in the Page Directory part; initial value = 2
PAGE_HEAP_TOP | 2 | record pointer to first record in heap
PAGE_N_HEAP | 2 | number of heap records; initial value = 2 (the highest bit encodes the row format: 1 = COMPACT, 0 = REDUNDANT)
PAGE_FREE | 2 | record pointer to first free record
PAGE_GARBAGE | 2 | "number of bytes in deleted records"
PAGE_LAST_INSERT | 2 | record pointer to the last inserted record
PAGE_DIRECTION | 2 | either PAGE_LEFT, PAGE_RIGHT, or PAGE_NO_DIRECTION
PAGE_N_DIRECTION | 2 | number of consecutive inserts in the same direction, e.g. "last 5 were all to the left"
PAGE_N_RECS | 2 | number of user records
PAGE_MAX_TRX_ID | 8 | the highest ID of a transaction which might have changed a record on the page (only set for secondary indexes)
PAGE_LEVEL | 2 | level within the index (0 for a leaf page)
PAGE_INDEX_ID | 8 | identifier of the index the page belongs to (the index_id)
PAGE_BTR_SEG_LEAF | 10 | "file segment header for the leaf pages in a B-tree" (this is irrelevant here)
PAGE_BTR_SEG_TOP | 10 | "file segment header for the non-leaf pages in a B-tree" (this is irrelevant here)
-9-/43
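The page header follows immediately after the 38-byte FIL header, so index_id, the B-tree level and the record count are just as easy to pull out. A minimal sketch using the offsets from the table above:

```python
import struct

FIL_HEADER_SIZE = 38   # the page header starts right after the FIL header

def parse_page_header(page: bytes) -> dict:
    """Extract a few page header fields from an uncompressed InnoDB page (sketch)."""
    base = FIL_HEADER_SIZE
    n_recs, = struct.unpack(">H", page[base + 16:base + 18])              # PAGE_N_RECS
    level, = struct.unpack(">H", page[base + 26:base + 28])               # PAGE_LEVEL
    index_hi, index_lo = struct.unpack(">II", page[base + 28:base + 36])  # PAGE_INDEX_ID (8 bytes)
    return {
        "index_id": "%u %u" % (index_hi, index_lo),   # printed as two words, like the monitor output
        "level": level,                               # 0 means a leaf page
        "n_recs": n_recs,
    }
```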
10. Recovery of lost or corrupted InnoDB tables
How to check row format?
• The highest bit of PAGE_N_HEAP from the page header
• 0 stands for the REDUNDANT format, 1 for COMPACT
• dc -e "2o `hexdump -C pagefile | grep 00000020 | awk '{ print $12}'` p" | sed 's/./& /g' | awk '{ print $1}'
-10-/43
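The same check can be done without the shell arithmetic by reading byte 42 of the page directly (the 38-byte FIL header plus PAGE_N_DIR_SLOTS and PAGE_HEAP_TOP puts PAGE_N_HEAP at offset 42). A minimal sketch for an uncompressed 16 KB page:

```python
import sys

PAGE_N_HEAP_OFFSET = 38 + 2 + 2   # FIL header + PAGE_N_DIR_SLOTS + PAGE_HEAP_TOP

def row_format(page: bytes) -> str:
    """Return the row format based on the high bit of PAGE_N_HEAP."""
    n_heap = int.from_bytes(page[PAGE_N_HEAP_OFFSET:PAGE_N_HEAP_OFFSET + 2], "big")
    return "COMPACT" if n_heap & 0x8000 else "REDUNDANT"

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as f:
        print(row_format(f.read(16384)))
```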
11. Recovery of lost or corrupted InnoDB tables
Rows in an InnoDB page
• The rows in a single page form a linked list
• The first record is the INFIMUM
• The last record is the SUPREMUM
• Records are linked in primary key order
(diagram: infimum -> 100 -> 101 -> 102 -> 103 -> supremum, each record's "next" pointer leading to the record with the next key)
-11-/43
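The record list can be followed directly in a page image. A rough sketch; the infimum/supremum offsets (101/116 for REDUNDANT, 99/112 for COMPACT) and the meaning of the 2-byte next pointer (absolute offset in REDUNDANT, relative offset in COMPACT) follow the InnoDB source (page0page.h, rem0rec.ic):

```python
import struct

PAGE_SIZE = 16384

def walk_records(page: bytes, compact: bool, limit: int = 10000):
    """Yield the origin offset of each record, following the 'next' pointers."""
    infimum, supremum = (99, 112) if compact else (101, 116)
    offset = infimum
    for _ in range(limit):          # guard against broken or cyclic lists
        yield offset
        if offset == supremum:
            break
        nxt = struct.unpack(">H", page[offset - 2:offset])[0]
        if compact:
            offset = (offset + nxt) % PAGE_SIZE   # relative to the current origin
        else:
            offset = nxt                          # absolute offset within the page
        if not (0 < offset < PAGE_SIZE):
            break                                 # pointer left the page: corruption
```

On a healthy leaf page the walk visits PAGE_N_RECS user records between the infimum and the supremum; a walk that leaves the page or loops is a sign of corruption.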
12. Recovery of lost or corrupted InnoDB tables
Records are saved in insert order
insert into t1 values(10, 'aaa');
insert into t1 values(30, 'ccc');
insert into t1 values(20, 'bbb');
JG....................N<E.......
................................
.............................2..
...infimum......supremum......6.
........)....2..aaa.............
...*....2..ccc.... ...........+.
...2..bbb.......................
................................
-12-/43
13. Recovery of lost or corrupted InnoDB tables
Row format
EXAMPLE:
CREATE TABLE `t1` (
`ID` int(11) unsigned NOT NULL,
`NAME` varchar(120),
`N_FIELDS` int(10),
PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
Name | Size
Field Start Offsets | (F*1) or (F*2) bytes
Extra Bytes | 6 bytes (5 bytes in COMPACT format)
Field Contents | depends on content
-13-/43
14. Recovery of lost or corrupted InnoDB tables
InnoDB page format (REDUNDANT)
Extra bytes
Name Size Description
record_status 2 bits _ORDINARY, _NODE_PTR, _INFIMUM, _SUPREMUM
deleted_flag 1 bit 1 if record is deleted
min_rec_flag 1 bit 1 if record is predefined minimum record
n_owned 4 bits number of records owned by this record
heap_no 13 bits record's order number in heap of index page
n_fields 10 bits number of fields in this record, 1 to 1023
1byte_offs_flag 1 bit 1 if each Field Start Offsets is 1 byte long (this item is also called the "short" flag)
next 16 bits 16 bits pointer to next record in page
-14-/43
15. Recovery of lost or corrupted InnoDB tables
InnoDB page format (COMPACT)
Extra bytes
Name | Size, bits | Description
record_status, deleted_flag, min_rec_flag | 4 | used to delete-mark a record, and to mark a predefined minimum record in alphabetical order
n_owned | 4 | the number of records owned by this record (this term is explained in page0page.h)
heap_no | 13 | the order number of this record in the heap of the index page
record type | 3 | 000=conventional, 001=node pointer (inside B-tree), 010=infimum, 011=supremum, 1xx=reserved
next | 16 | a relative pointer to the next record in the page
-15-/43
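These bit fields sit in the bytes immediately before the record origin and can be decoded with plain bit masks. A rough sketch; the byte and bit placement follows the comments in the InnoDB source (rem0rec.ic), so treat the exact offsets as an assumption to verify against your server version:

```python
import struct

REC_INFO_DELETED = 0x20   # deleted_flag within the info bits
REC_INFO_MIN_REC = 0x10   # min_rec_flag within the info bits

def decode_extra_bytes(page: bytes, origin: int, compact: bool) -> dict:
    """Decode the extra bytes stored just before a record origin (sketch)."""
    rec = {"next": struct.unpack(">H", page[origin - 2:origin])[0]}
    if compact:
        rec["status"] = page[origin - 3] & 0x07                                 # record type
        rec["heap_no"] = ((page[origin - 4] << 8) | page[origin - 3]) >> 3
        info_owned = page[origin - 5]
    else:
        rec["1byte_offs_flag"] = page[origin - 3] & 0x01
        rec["n_fields"] = (((page[origin - 4] << 8) | page[origin - 3]) & 0x7FE) >> 1
        rec["heap_no"] = ((page[origin - 5] << 8) | page[origin - 4]) >> 3
        info_owned = page[origin - 6]
    rec["n_owned"] = info_owned & 0x0F
    rec["deleted"] = bool(info_owned & REC_INFO_DELETED)
    rec["min_rec"] = bool(info_owned & REC_INFO_MIN_REC)
    return rec
```

The deleted flag is the interesting one for recovery: as the later slides show, DELETE only marks records, it does not wipe them.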
16. Recovery of lost or corrupted InnoDB tables
REDUNDANT
A row: (10, 'abcdef', 20)
Actually stored as: (10, TRX_ID, ROLL_PTR, 'abcdef', 20) with field sizes 4, 6, 7, 6 and 4 bytes
On-disk layout: Field Start Offsets, then the 6 extra bytes (record_status, deleted_flag, min_rec_flag, n_owned, heap_no, n_fields, 1byte_offs_flag, next), then the Fields
Field contents: 0x00 00 00 0A ... ... abcdef 0x80 00 00 14
(ID is an unsigned INT, so 10 is stored as is; N_FIELDS is signed, so 20 is stored with the sign bit flipped)
-16-/43
17. Recovery of lost or corrupted InnoDB tables
COMPACT
A row: (10, 'abcdef', 20)
Actually stored as: (10, TRX_ID, ROLL_PTR, 'abcdef', 20)
On-disk layout: Field Offsets, then a NULL bitmap (a bit per NULL-able field), then the 5 extra bytes (ending with the next pointer), then the Fields
Field contents: 0x00 00 00 0A ... ... abcdef 0x80 00 00 14
-17-/43
18. Recovery of lost or corrupted InnoDB tables
Data types
INT types (fixed-size)
String types
• VARCHAR(x) – variable-size
• CHAR(x) – fixed-size, variable-size if UTF-8
DECIMAL
• Stored as strings before 5.0.3, variable in size
• Binary format after 5.0.3, fixed-size.
-18-/43
19. Recovery of lost or corrupted InnoDB tables
BLOB and other long fields
• Field length (the so-called offset) is one or two bytes long
• Page size is 16 KB
• If the record size < (UNIV_PAGE_SIZE/2 - 200) == ~7 KB, the record is stored internally (in a PK page)
• Otherwise, 768 bytes are stored internally and the rest goes to an external page
-19-/43
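A small sketch of the rule above; the constants are the ones quoted on the slide and are assumptions about the server defaults rather than values read from a particular build:

```python
UNIV_PAGE_SIZE = 16384
LOCAL_STORE_THRESHOLD = UNIV_PAGE_SIZE // 2 - 200   # ~7 KB, per the slide
BLOB_LOCAL_PREFIX = 768                             # bytes kept in the PK page

def locally_stored_bytes(record_size: int, long_field_size: int) -> int:
    """How many bytes of a long field stay in the PRIMARY index page (sketch)."""
    if record_size < LOCAL_STORE_THRESHOLD:
        return long_field_size        # the whole record fits in the PK page
    return BLOB_LOCAL_PREFIX          # only a prefix; the rest goes to an external page
```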
21. Recovery of lost or corrupted InnoDB tables
Why are SYS_* tables needed?
• Correspondence “table name” -> “index_id”
• Storage for other internal information
-21-/43
22. Recovery of lost or corrupted InnoDB tables
How MySQL stores data in InnoDB
SYS_TABLES and SYS_INDEXES
Always REDUNDANT format!
CREATE TABLE `SYS_INDEXES` (
  `TABLE_ID` bigint(20) unsigned NOT NULL default '0',
  `ID` bigint(20) unsigned NOT NULL default '0',
  `NAME` varchar(120) default NULL,
  `N_FIELDS` int(10) unsigned default NULL,
  `TYPE` int(10) unsigned default NULL,
  `SPACE` int(10) unsigned default NULL,
  `PAGE_NO` int(10) unsigned default NULL,
  PRIMARY KEY (`TABLE_ID`,`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
-- stored in pages with index_id 0-3
-- NAME is PRIMARY, GEN_CLUST_INDEX, or the unique index name

CREATE TABLE `SYS_TABLES` (
  `NAME` varchar(255) NOT NULL default '',
  `ID` bigint(20) unsigned NOT NULL default '0',
  `N_COLS` int(10) unsigned default NULL,
  `TYPE` int(10) unsigned default NULL,
  `MIX_ID` bigint(20) unsigned default NULL,
  `MIX_LEN` int(10) unsigned default NULL,
  `CLUSTER_NAME` varchar(255) default NULL,
  `SPACE` int(10) unsigned default NULL,
  PRIMARY KEY (`NAME`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1
-- stored in pages with index_id 0-1
-22-/43
23. Recovery of lost or corrupted InnoDB tables
How MySQL stores data in InnoDB
Example:
SYS_TABLES
NAME ID …
"archive/msg_store" 40 8 1 0 0 NULL 0
SYS_INDEXES
TABLE_ID ID NAME …
40 196389 "PRIMARY" 2 3 0 21031026
40 196390 "msg_hash" 1 0 0 21031028
-23-/43
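Once SYS_TABLES and SYS_INDEXES have been dumped (for example with the recovery tool described later), mapping a table name to the index_id of its PRIMARY index is a simple join on the table ID. A sketch assuming tab-separated dumps whose column order matches the definitions above; the file names are hypothetical:

```python
import csv

def primary_index_id(sys_tables_path: str, sys_indexes_path: str, table_name: str):
    """Map a table name to the index_id of its PRIMARY index (sketch).

    Assumes tab-separated dumps with columns in the same order as the
    SYS_TABLES / SYS_INDEXES definitions above; adapt to the actual dump format."""
    table_id = None
    with open(sys_tables_path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            if row and row[0].strip('"') == table_name:
                table_id = row[1]               # SYS_TABLES.ID
                break
    if table_id is None:
        return None
    with open(sys_indexes_path, newline="") as f:
        for row in csv.reader(f, delimiter="\t"):
            # SYS_INDEXES columns: TABLE_ID, ID, NAME, ...
            if row and row[0] == table_id and row[2].strip('"') == "PRIMARY":
                return row[1]                   # SYS_INDEXES.ID is the index_id
    return None

# Example (hypothetical file names):
# print(primary_index_id("SYS_TABLES.dump", "SYS_INDEXES.dump", "archive/msg_store"))
```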
28. Recovery of lost or corrupted InnoDB tables
Deleted records
• DELETE FROM table WHERE id = 5;
• Forgotten WHERE clause
Band-aid:
• Stop/kill mysqld ASAP
-28-/43
29. Recovery of lost or corrupted InnoDB tables
How delete is performed?
"row/row0upd.c“:
“…/* How is a delete performed?...The delete is
performed by setting the delete bit in the record and
substituting the id of the deleting transaction for the
original trx id, and substituting a new roll ptr for
previous roll ptr. The old trx id and roll ptr are saved
in the undo log record.
Thus, no physical changes occur in the index tree
structure at the time of the delete. Only when the
undo log is purged, the index records will be
physically deleted from the index trees.…”
-29-/43
30. Recovery of lost or corrupted InnoDB tables
Dropped table/database
• DROP TABLE table;
• DROP DATABASE database;
• Often happens when restoring from SQL dump
• Bad because .FRM file goes away
• Especially painful when innodb_file_per_table
Band-aid:
• Stop/kill mysqld ASAP
• Stop I/O on the HDD, mount it read-only, or take a raw image
-30-/43
31. Recovery of lost or corrupted InnoDB tables
Corrupted InnoDB tablespace
• Hardware failures
• OS or filesystem failures
• InnoDB bugs
• Corrupted InnoDB tablespace by other processes
Band-aid:
• Stop mysqld
• Take a copy of InnoDB files
-31-/43
32. Recovery of lost or corrupted InnoDB tables
Wrong UPDATE statement
• UPDATE user SET Password = PASSWORD('qwerty') WHERE User='root';
• Or, again, a forgotten WHERE clause
• Bad because the changes are applied to the PRIMARY index immediately
• Old version goes to UNDO segment
Band-aid:
• Stop/kill mysqld ASAP
-32-/43
36. Recovery of lost or corrupted InnoDB tables
How to get CREATE info from .frm files
1. CREATE TABLE t1 (id int) Engine=INNODB;
2. Replace t1.frm with the .frm of the table whose schema you need
3. Run "SHOW CREATE TABLE t1"
If mysqld crashes:
1. Look at the end of the file with bvi t1.frm; the column names are visible:
.ID.NAME.N_FIELDS..
2. A *.FRM viewer is still a TODO
-36-/43
37. Recovery of lost or corrupted InnoDB tables
InnoDB recovery tool
http://launchpad.net/percona-innodb-recovery-tool/
1. Written by Percona
2. Contributed by Percona and community
3. Supported by Percona
Consists of two major tools
•page_parser – splits InnoDB tablespace into 16k pages
•constraints_parser – scans a page and finds good records
-37-/43
38. Recovery of lost or corrupted InnoDB tables
InnoDB recovery tool
server# ./page_parser -4 -f /var/lib/mysql/ibdata1
Opening file: /var/lib/mysql/ibdata1
Read data from fn=3...
Read page #0.. saving it to pages-1259793800/0-18219008/0-00000000.page
Read page #1.. saving it to pages-1259793800/0-0/1-00000001.page
Read page #2.. saving it to pages-1259793800/4294967295-65535/2-00000002.page
Read page #3.. saving it to pages-1259793800/0-0/3-00000003.page
page_parser
In recent versions, the InnoDB pages are sorted by page type
-38-/43
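The core of page_parser is straightforward: read the tablespace in 16 KB chunks and file each index page away under the index_id found in its page header. A minimal Python sketch of the same idea (the output layout is illustrative, not the tool's exact naming):

```python
import os
import struct
import sys

PAGE_SIZE = 16384
FIL_PAGE_INDEX = 17855           # FIL_PAGE_TYPE value of B-tree index pages
PAGE_INDEX_ID_OFFSET = 38 + 28   # FIL header + offset of PAGE_INDEX_ID in the page header

def split_tablespace(path: str, out_dir: str = "pages") -> None:
    """Split an ibdata/.ibd file into 16 KB pages grouped by index_id (sketch)."""
    with open(path, "rb") as f:
        page_no = 0
        while True:
            page = f.read(PAGE_SIZE)
            if len(page) < PAGE_SIZE:
                break
            page_type, = struct.unpack(">H", page[24:26])     # FIL_PAGE_TYPE
            if page_type == FIL_PAGE_INDEX:
                hi, lo = struct.unpack(">II", page[PAGE_INDEX_ID_OFFSET:PAGE_INDEX_ID_OFFSET + 8])
                index_dir = os.path.join(out_dir, "%u-%u" % (hi, lo))
                os.makedirs(index_dir, exist_ok=True)
                with open(os.path.join(index_dir, "%u.page" % page_no), "wb") as out:
                    out.write(page)
            page_no += 1

if __name__ == "__main__":
    split_tablespace(sys.argv[1])
```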
39. Recovery of lost or corrupted InnoDB tables
Page signature check
(hex dump of a page: the "infimum" and "supremum" markers are visible at their fixed offsets even though the surrounding user data is damaged)
1. The INFIMUM and SUPREMUM records are at fixed positions
2. The check works even on corrupted pages
-39-/43
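Because the markers sit at fixed offsets, a page can be recognised as an InnoDB index page even when its checksums and most of its content are gone. A minimal sketch; the offsets are the COMPACT (99/112) and REDUNDANT (101/116) positions from page0page.h:

```python
def looks_like_index_page(page: bytes) -> bool:
    """Page signature check: look for 'infimum'/'supremum' at their fixed offsets."""
    for inf, sup in ((99, 112), (101, 116)):     # COMPACT, then REDUNDANT positions
        if page[inf:inf + 7] == b"infimum" and page[sup:sup + 8] == b"supremum":
            return True
    return False
```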
40. Recovery of lost or corrupted InnoDB tables
InnoDB recovery tool
server# ./constraints_parser -4 -f pages-1259793800/0-16/51-00000051.page
constraints_parser
Table structure is defined in "include/table_defs.h"
See HOWTO for details
http://code.google.com/p/innodb-tools/wiki/InnodbRecoveryHowto
Filters inside table_defs.h are very important (if the InnoDB page is corrupted)
-40-/43
41. Recovery of lost or corrupted InnoDB tables
Check InnoDB page before reading recs
#./constraints_parser -5 -U -f pages/0-418/12665-00012665.page -V
Initializing table definitions...
Processing table: document_type_fieldsets_link
- total fields: 5
- nullable fields: 0
- minimum header size: 5
- minimum rec size: 25
- maximum rec size: 25
Read data from fn=3...
Page id: 12665
Checking a page
Infimum offset: 0x63
Supremum offset: 0x70
Next record at offset: 0x9F (159)
Next record at offset: 0xB0 (176)
Next record at offset: 0x3D95 (15765)
…
Next record at offset: 0x70 (112)
Page is good
1. Check whether the tool can follow all records by their addresses
2. If so, read a record exactly at the position where it is stored
3. Helps a lot for the COMPACT format!
-41-/43
42. Recovery of lost or corrupted InnoDB tables
Import result
t1 1 "browse" 10
t1 2 "dashboard" 20
t1 3 "addFolder" 18
t1 4 "editFolder" 15
mysql> LOAD DATA INFILE '/path/to/datafile'
REPLACE INTO TABLE <table_name>
FIELDS TERMINATED BY '\t' OPTIONALLY ENCLOSED BY '"'
LINES STARTING BY '<table_name>\t';
-42-/43
43. Questions ?
Thank you for coming!
• References
• http://www.mysqlperformanceblog.com/
• http://percona.com/
• http://www.slideshare.net/akuzminsky
-43-/43