The document provides information about using the MySQL Workbench Migration Wizard to migrate databases between different database types including MySQL, Microsoft SQL Server, and PostgreSQL. It discusses setting up the necessary ODBC libraries and drivers for the source database, an overview of the migration process, type mappings, and specific instructions for migrating from Microsoft SQL Server and PostgreSQL. Setup of ODBC components for Linux, Mac OS X, and Windows is described for each database type.
Database migration
Database Migration Wizard:
Table of Contents
(1) ODBC Libraries:
MySQL Workbench provides the ability to migrate ODBC-compliant databases to MySQL. The MySQL Workbench Migration Wizard was added in MySQL Workbench 5.2.41. Its capabilities include:
• Converts (migrates) different database types, including MySQL, across servers
• Converts tables and copies data, but does not convert stored procedures, views, or triggers
• Allows customization and editing during the migration process
• Works on Linux, Mac OS X, and Microsoft Windows
This is not an exhaustive list. The following sections discuss these and additional
migration capabilities. Setup may be the most challenging aspect of using the
MySQL Workbench Migration Wizard. There is the installation section, which
describes setting up the ODBC requirements for Linux, Mac OS X, and Microsoft
Windows, and the Database Product Specific Notes section, which covers setup
conditions for each RDBMS.
The MySQL Workbench Migration Wizard uses ODBC to connect to a source
database, except for MySQL. You will need the ODBC driver installed that
corresponds to the database you want to migrate from. For example, PostgreSQL
can be migrated with the psqlodbc ODBC driver; Microsoft SQL Server can be
migrated using the native Microsoft SQL Server driver on Windows or with
FreeTDS on Linux and Mac OS X.
The following diagram shows the general components involved in an ODBC
connection:
Figure: MySQL Workbench migration installation diagram
(i) ODBC Libraries
Linux :
iODBC: MySQL Workbench binaries provided by Oracle already include iODBC, so
no additional action is required. If you compile MySQL Workbench yourself, you
must install either iODBC or unixODBC; iODBC is recommended. You can use the
iODBC library provided by your distribution.
pyodbc is the Python module used by MySQL Workbench to interface with ODBC,
and it may be used to migrate ODBC-compliant databases such as PostgreSQL and
DB2. On Windows and Mac OS X, it is included with Workbench. On Linux, the binaries
provided by Oracle also include pyodbc. (A brief usage sketch appears after the
iODBC build steps below.)
If you're using a self-compiled binary, make sure you have the latest version, and
that it is compiled against the ODBC manager library that you chose, whether it is
iODBC or unixODBC. As of version 3.0.6, pyodbc will compile against unixODBC by
default. If you are compiling against iODBC then you must perform the following
steps:
1. Install the development files for iODBC. Usually you just need to install the
libiodbc-devel or libiodbc2-dev package provided by your distribution.
2. In the pyodbc source directory, edit the setup.py file. Around line 157,
replace the line settings['libraries'].append('odbc') with
settings['libraries'].append('iodbc').
3. Execute the following command as the root user:
CFLAGS=`iodbc-config --cflags` LDFLAGS=`iodbc-config --libs` python setup.py install
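For reference, the snippet below is a minimal sketch of how pyodbc connects to an
ODBC data source and lists the tables it exposes, which is roughly what the
Migration Wizard relies on it for. The DSN name and credentials are placeholders,
not values from this guide:

import pyodbc

# "PostgresSource" is a hypothetical DSN created in the ODBC Administrator.
conn = pyodbc.connect("DSN=PostgresSource;UID=migrator;PWD=secret")
cursor = conn.cursor()

# List the tables visible through the driver, roughly what the
# Migration Wizard does when it retrieves the source schema.
for row in cursor.tables(tableType="TABLE"):
    print(row.table_schem, row.table_name)

conn.close()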
(ii) ODBC Drivers :
For each RDBMS, you need its corresponding ODBC driver, which must also be
installed on the same machine that MySQL Workbench is running on. This driver is
usually provided by the RDBMS manufacturer, but in some cases drivers are also
provided by third-party vendors or open source projects.
Operating systems usually provide a graphical interface to help set up ODBC
drivers and data sources. Use that to install the driver (i.e., make the ODBC
Manager "see" a newly installed ODBC driver). You can also use it to create a data
source for a specific database instance, to be connected using a previously
configured driver. Typically you need to provide a name for the data source (the
DSN), in addition to the database server IP, port, username, and sometimes the
database the user has access to.
If MySQL Workbench is able to locate an ODBC manager GUI for your system, a
"Start ODBC Administrator" item will be present under the Plugins menu as a
convenience shortcut to start it.
Linux: There are a few GUI utilities, some of which are included with
unixODBC. Refer to the documentation for your distribution. iODBC
provides iodbcadm-gtk. Official binaries of MySQL Workbench include it
and it can be accessed through the “Plugins”, “Start ODBC Administrator”
menu item.
Mac OS X: You can use the ODBC Administrator tool, which is provided as a
separate download from Apple. If the tool is installed in the
/Applications/Utilities folder, you can start it through the “Plugins”, “Start
ODBC Administrator” menu item.
Microsoft Windows: You can use the Data Sources (ODBC) tool under
Administrative Tools. It can also be started through the “Plugins”, “Start
ODBC Administrator” menu item.
ODBC Driver architecture
Since the ODBC driver needs to be installed on the client side, you will need an
ODBC driver that supports your client’s operating system and architecture. For
example, if you are running MySQL Workbench on Linux x64, then you need a
Linux x64 ODBC driver for your RDBMS. On Mac OS X, MySQL Workbench is built as
a 32-bit application, so you need the 32-bit drivers.
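If you are unsure of your client platform, the small check below (an illustrative
sketch, not part of MySQL Workbench) reports the operating system and pointer size
of the interpreter running it; the ODBC driver you install must match the
architecture of the application that will load it.

import platform
import struct

# The driver must match the architecture of the application loading it
# (for MySQL Workbench on Mac OS X that is 32-bit, as noted above).
print("OS:", platform.system())                 # e.g. Linux, Darwin, Windows
print("Machine:", platform.machine())           # e.g. x86_64
print("This interpreter:", struct.calcsize("P") * 8, "bit")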
(2) Migration Overview :
(i) A visual guide to performing a database migration
(ii) Migrating from supported databases
(iii) Migrating from unsupported (generic) databases
The Migration Wizard performs the following steps when migrating a database to
MySQL:
1. Connects to the source RDBMS and retrieves a list of available
databases/schemas.
2. Reverse engineers selected database/schemas into an internal
representation specific to the source RDBMS. This step also performs
the renaming of objects/schemas, depending on the type of object name
mapping method that is chosen.
3. Automatically migrates the source RDBMS objects into MySQL specific
objects.
a. Target schema objects are created.
b. Target table objects are created.
i. Columns for each table are copied.
A. Datatypes are mapped to MySQL datatypes.
B. Default values are mapped to a MySQL supported
default value, if possible.
ii. Indexes are converted.
iii. Primary Keys are converted.
iv. Triggers are copied, and commented out if the source is not
MySQL.
c. Foreign Keys for all tables (of all schemas) are converted.
d. View objects are copied, and commented out if the source is not
MySQL.
e. Stored Procedure and Function objects are copied, and commented
out if the source is not MySQL.
4. Provides an opportunity to review the changes, for editing and correcting
errors in the migrated objects.
5. Creates the migrated objects in the target MySQL server. If there are errors,
you can return to the previous step and correct them, and retry the target
creation.
6. Copies the data of the migrated tables from the source RDBMS to MySQL.
MySQL Workbench provides support for migrating from some specific RDBMS
products. The Migration Wizard will provide the best results when migrating from
such products. However, in some cases, other unsupported database products
can also be migrated by using the generic database support, as long as you have
an ODBC driver for them. In this case, the migration will be less automatic, but
should still work.
(i) A visual guide to performing a database migration (GUI with screen shots)
http://dev.mysql.com/doc/workbench/en/wb-migration-overview-steps.html
(ii) Migrating from supported databases:
When a supported RDBMS product is being migrated, the MySQL Workbench
Migration Wizard will automatically convert as much information as it can, but
you may still be required to manually edit the automatically migrated schema for
difficult cases, or when the default mapping is not as desired.
Generally speaking, only table information and its data are automatically
converted to MySQL. Code objects such as views, stored procedures, and triggers
are not. However, for supported RDBMS products these code objects will be
retrieved and displayed in the wizard; you can then convert them manually, or
save them for converting at a later time.
The following RDBMS products and versions are currently tested and supported
by the MySQL Workbench Migration Wizard, although other RDBMS products can
also be migrated as described in Section 10.2.3, “Migrating from unsupported
(generic) databases”:
Microsoft SQL Server 2000
Microsoft SQL Server 2005
Microsoft SQL Server 2008
Microsoft SQL Server 2012
MySQL Server 4.1 and greater as the source, and MySQL Server 5.1 and
greater as the target
PostgreSQL 8.0 and greater
Sybase Adaptive Server Enterprise 15.x and greater
(iii) Migrating from unsupported (generic) databases:
Most ODBC compliant databases may be migrated using the generic database
support. In this case, code objects will not be retrieved from the source database;
only tables and data.
When using the generic support, column datatypes are mapped using the
following steps (a sketch of this lookup order follows the list):
1. It searches the Generic Datatype Mapping Table for the first entry matching
the source type name. If the length/scale ranges of the entry match the
source column, that type is picked. Otherwise, the search continues.
2. If no matches were found in the generic table, then it tries to directly map
the source type to a MySQL type of the same name.
3. If the source type name doesn't match any of the MySQL datatypes, then it
will not be converted and an error is logged. You can then manually specify
the target datatype in the Manual Object Editing step of the wizard.
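The lookup order above can be pictured with a toy sketch. The mapping entries and
type names below are illustrative assumptions only, not the wizard's actual
mapping table:

# A simplified sketch (not Workbench's actual code) of the generic
# datatype-mapping fallback described above.

# Hypothetical excerpt of a generic mapping table:
# (source type, max length it applies to, target MySQL type)
GENERIC_MAPPINGS = [
    ("VARCHAR", 65535, "VARCHAR"),
    ("VARCHAR", None, "TEXT"),      # anything longer falls through to TEXT
    ("INTEGER", None, "INT"),
]

# Types that happen to exist in MySQL under the same name.
MYSQL_TYPES = {"INT", "VARCHAR", "TEXT", "DATETIME", "DECIMAL"}

def map_type(source_type, length):
    # 1. First matching entry in the generic mapping table wins.
    for src, max_len, target in GENERIC_MAPPINGS:
        if src == source_type and (max_len is None or (length or 0) <= max_len):
            return target
    # 2. Otherwise, try a direct name match against MySQL's own types.
    if source_type in MYSQL_TYPES:
        return source_type
    # 3. Otherwise, leave it unmapped; the wizard logs an error and you
    #    fix it in the Manual Object Editing step.
    raise ValueError("no mapping for %s; set the target type manually" % source_type)

print(map_type("VARCHAR", 200))   # -> VARCHAR
print(map_type("INTEGER", None))  # -> INT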
(3) Conceptual DBMS equivalents :
Handling Microsoft SQL Server and MySQL structural differences:
A Microsoft SQL Server database is made up of one catalog and one or more
schemata. MySQL only supports one schema for each database (or rather, a
MySQL database is a schema) so this difference in design must be planned for.
The Migration Wizard must know how to handle the migration of schemata for
the source (Microsoft SQL Server) database. It can either keep all of the schemata
as they are (the Migration Wizard will create one database per schema), or merge
them into a single MySQL database. Additional configuration options include
removing the schema names entirely (the Migration Wizard will handle any name
collisions that appear along the way), or adding the schema name to the database
object names as a prefix.
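As an illustration only (not the wizard's actual algorithm), prefixing with the
schema name is one way such collisions can be resolved when schemata are merged;
the schema and table names below are hypothetical:

# Illustrative sketch: merging SQL Server schemata into one MySQL database
# and using the schema name as a prefix to avoid name collisions.
source_objects = [
    ("sales", "orders"),
    ("archive", "orders"),   # same table name in a different schema
    ("sales", "customers"),
]

merged = {}
for schema, table in source_objects:
    name = table
    if name in merged:                 # collision: fall back to a prefixed name
        name = "%s_%s" % (schema, table)
    merged[name] = (schema, table)

print(sorted(merged))  # ['archive_orders', 'customers', 'orders']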
FYI:
http://dev.mysql.com/doc/workbench/en/wb-migration-database-concepts.html
(4) Microsoft SQL Server migration
(i) Preparations
(ii) Drivers
(iii) Connection Setup
(iv) Microsoft SQL Server Type Mapping
Introduction:
The MySQL Workbench Migration Wizard is tested against the following Microsoft
SQL Server versions: 2000, 2005, 2008, and 2012.
(i) Preparations:
To be able to migrate from Microsoft SQL Server, ensure the following:
The source SQL Server instance is running, and accepts TCP connections
You know the IP and port of the source SQL Server instance. If you will be
migrating using a Microsoft ODBC driver for SQL Server (the default on
Windows), you will need to know the host and the name of the SQL Server
instance.
Make sure that the SQL Server is reachable from the machine where you will be
running MySQL Workbench. More specifically, check the firewall settings (a quick
reachability check is sketched after this list).
Make sure that the account you will use has proper privileges to the
database that will be migrated.
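To confirm that the instance accepts TCP connections and is reachable through any
firewalls, a check like the sketch below can be run from the machine that will host
MySQL Workbench. The host name and port are placeholders (1433 is the conventional
SQL Server default):

import socket

host, port = "sqlserver.example.com", 1433   # placeholders for your instance
try:
    with socket.create_connection((host, port), timeout=5):
        print("TCP connection to %s:%d succeeded" % (host, port))
except OSError as exc:
    print("Cannot reach %s:%d - check firewall/instance settings: %s" % (host, port, exc))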
(ii) Drivers:
(I) Windows
(II) Linux
(I) Windows:
Microsoft Windows XP or newer includes an ODBC driver for Microsoft SQL
Server, so there are no additional actions required.
(II) Linux: Setting up drivers on Linux.
FreeTDS:
FreeTDS version 0.92 or greater is required. Note that many distributions ship
older versions of FreeTDS, so it may need to be installed separately. Additionally,
the FreeTDS version provided by distributions may also be compiled against the
wrong ODBC library (usually against unixODBC instead of iODBC, which MySQL
Workbench uses). Because of that, you will probably need to build this library
yourself.
Important: using FreeTDS with iODBC
When compiling FreeTDS for use with iODBC (the default with the official
binaries), you must compile it with the --enable-odbc-wide command line option
for the configure script. Failing to do so will result in crashes and other
unpredictable errors.
A script is provided to compile FreeTDS with the options required for MySQL
Workbench. You can find it at /usr/share/mysql-
workbench/extras/build_freetds.sh on Linux, or at
MySQLWorkbench.app/Contents/SharedSupport/build_freetds.sh on the
Mac. To use it, follow these steps:
1. Make sure you have the iODBC headers installed. In Linux, install the
libiodbc-devel or libiodbc2-dev package from your distribution. In Mac OS
X, the headers come with the system and no additional action is required
for this step.
2. Run mkdir ~/freetds to create a directory within the user's home directory.
3. Copy the build_freetds.sh script to ~/freetds
4. Get the latest FreeTDS sources from ftp://ftp.freetds.org/pub/freetds/ and
place them in the ~/freetds directory. Make sure to get version 0.92 or newer.
5. cd ~/freetds
6. Execute build_freetds.sh
7. After compilation is done, install it using make install from the path given
by the script.
8. Install the driver in the ODBC Administrator, so that the ODBC subsystem
recognizes it. The driver file is named libtdsodbc.so and is located in
/usr/lib or /usr/local/lib
Once the driver is installed, you should be able to create data sources for it from
the ODBC Administrator GUI.
Protocol version selection in FreeTDS
When using FreeTDS, TDS_VERSION=7.0 is needed in the connection string. If you
pick a FreeTDS-specific connection method option in the connection dialog, that
option is added to the connection string automatically.
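For illustration, a DSN-less connection string for SQL Server through FreeTDS might
look like the following pyodbc sketch. The driver name, server, database, and
credentials are assumptions to adapt to your setup; only the TDS_VERSION requirement
comes from the note above:

import pyodbc

conn_str = (
    "DRIVER={FreeTDS};"
    "SERVER=sqlserver.example.com;"
    "PORT=1433;"
    "DATABASE=SourceDB;"
    "UID=migrator;PWD=secret;"
    "TDS_VERSION=7.0;"          # the protocol version noted above
)
conn = pyodbc.connect(conn_str)
print(conn.cursor().execute("SELECT @@VERSION").fetchone()[0])
conn.close()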
(III) Connection Setup:
Using an ODBC Data Source, or Using Connection Parameters
(IV) Microsoft SQL Server Type Mapping:
http://dev.mysql.com/doc/workbench/en/wb-migration-database-mssql-typemapping.html
(5) PostgreSQL migration :
(i) Preparations
(ii) Drivers
(iii) Connection Setup
(iv) PostgreSQL Type Mapping
Native support for PostgreSQL 8.x and 9.x was added in MySQL Workbench
5.2.44. MySQL Workbench versions prior to this would migrate PostgreSQL using
the generic migration support.
(i) Preparations:
(I) Microsoft Windows
(II) Linux
(III) Mac OS X
Before proceeding, you will need the following:
Follow the installation guide for installing iODBC on your system.
Access to a running PostgreSQL instance with privileges to the database you
want to migrate, otherwise known as the "source database." The Migration
Wizard officially supports PostgreSQL 8.0 and above, although older
versions may work.
Access to a running MySQL Server instance with privileges to the database
you want to migrate. The Migration Wizard officially supports MySQL 5.0
and above.
MySQL Workbench 5.2.44 or newer.
(I) Microsoft Windows:
Download and install the MSI package for psqlODBC. Choose the newest file from
http://www.postgresql.org/ftp/odbc/versions/msi/, which will be at the bottom
of the downloads page. This will install psqlODBC on your system and allow you to
migrate from PostgreSQL to MySQL using MySQL Workbench.
(II) Linux :
After installing iODBC, proceed to install the PostgreSQL ODBC drivers.
Download the psqlODBC source tarball file from
http://www.postgresql.org/ftp/odbc/versions/src/. Use the latest version
available for download, which will be at the bottom of the downloads page. The
file will look similar to psqlodbc-09.01.0200.tar.gz. Extract this tarball to a
temporary location, open a terminal, and cd into that directory. The installation
process is:
shell> cd the/src/directory
shell> ./configure --with-iodbc --enable-pthreads
shell> make
shell> sudo make install
Verify the installation by confirming that the file psqlodbcw.so is in the
/usr/local/lib directory.
Next, you must register your new ODBC Driver.
Open the iODBC Data Source Administrator application, either by executing
iodbcadm-gtk on the command line, or by launching it from the Overview page of
the MySQL Workbench Migration Wizard by clicking the Open ODBC
Administrator button. Go to the ODBC Drivers tab in the iODBC Data Source
Administrator. It should look similar to:
Figure: The iODBC Data Source Administrator
Click Add a driver then fill out the form with the following values:
Description of the driver: psqlODBC
Driver file name: /usr/local/lib/psqlodbcw.so
Setup file name: No value is needed here
Lastly, click OK to complete the psqlODBC driver registration.
(III) Mac OS X :
To compile psqlODBC on Mac OS X, you will need to have Xcode and its
"Command Line Tools" component installed on your system, as this includes the
required gcc compiler. Xcode is free and available from the App Store. After
installing Xcode, open it and go to Preferences, Downloads, Components, and
then install the "Command Line Tools" component.
Download the psqlODBC source tarball file from
http://www.postgresql.org/ftp/odbc/versions/src/. Use the latest version
available for download, which will be at the bottom of the downloads page. The
file will look similar to psqlodbc-09.01.0200.tar.gz. Extract this tarball to a
temporary location, open a terminal, and cd into that directory. The installation
process is:
shell> cd the/src/directory
shell> ./configure --with-iodbc --enable-pthreads
shell> CFLAGS="-arch i386 -arch x86_64" make
shell> sudo make install
(ii) Drivers:
If you are compiling psqlODBC, first configure it with the --without-libpq option.
(iii) Connection Setup:
After loading the Migration Wizard, click on the Start Migration button in the
Overview page to begin the migration process. You will first connect to the source
PostgreSQL database. Here you will provide the information about the PostgreSQL
RDBMS that you are migrating from, the ODBC driver that will be used for the
migration, and all of the parameters required for the connection. The name of the
ODBC driver is the one you set up when you registered your psqlODBC driver with
the driver manager.
Opening the Database System dropdown will reveal each RDBMS that is
supported on your system. Select PostgreSQL from the list. Below that is the
Stored Connection dropdown, which is optional. Stored connections are listed
here; a connection is saved after you define it with the Store connection for
future use as checkbox enabled.
The three Connection Method options are:
ODBC (manually entered parameters): Each parameter, like a username, is
defined separately
ODBC Data Source: For pre-configured data sources (DSN)
ODBC (direct connection string): A full ODBC connection string
Note :
The psqlODBC driver does not allow a connection without specifying a database
name. Otherwise, the migration process is similar to that of other databases.
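As a rough illustration of the parameters involved, a direct psqlODBC connection
string might look like the pyodbc sketch below. The server, credentials, and
database name are placeholders; note that the database name cannot be omitted:

import pyodbc

conn = pyodbc.connect(
    "DRIVER={psqlODBC};"
    "SERVER=postgres.example.com;"
    "PORT=5432;"
    "DATABASE=source_db;"      # required: psqlODBC refuses connections without it
    "UID=migrator;PWD=secret;"
)
print(conn.cursor().execute("SELECT version()").fetchone()[0])
conn.close()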
(iv) PostgreSQL Type Mapping:
http://dev.mysql.com/doc/workbench/en/wb-migration-database-postgresql-typemapping.html
(6) MySQL migration:
Introduction
Notes about copying a MySQL database, and what you can do with it.
(7) Using the MySQL Workbench Migration Wizard :
(i) Connecting to the databases :
A connection is made to the source and target database servers.
Source Connection Setup:
The Source Connection offers the MySQL, Microsoft SQL Server, and Generic
RDBMS database system options. This selection determines the available
Parameters and Advanced configuration options. This connection definition may
be saved using the Store connection for future use as option, and there is also the
Test Connection option.
Target Connection Setup
The MySQL Server that will be home to the newly migrated database.
(ii) Schemata Retrieval and Selection :
Fetch Schemata List
The names of available schemas will be retrieved from the source RDBMS. The
account used for the connection will need to have appropriate privileges for
listing and reading the schemas you want to migrate. Target RDBMS connection
settings will also be validated.
The steps performed here include connecting to the source DBMS, checking the
connection, and retrieving the schema list from the source.
Schemata Selection
Select the schemata that you want to migrate.
(iii) Reverse Engineering :
This is an automated step, where the actions include connecting to the source
DBMS, reverse engineering the selected schemata, and performing post-processing,
if needed.
(iv) Object Selection :
By default, all table objects will be migrated. Use the “Show Selection” button
to exclude individual table objects from the migration.
(v) Migration :
Reverse engineered objects from the source RDBMS will be automatically
converted to MySQL compatible objects. Default data type and default column
value mappings will be used. You will be able to review and edit the generated
objects and column definitions in the next step.
The steps performed include migrating the selected objects, and generating the
SQL CREATE statements.
(vi) Manual Editing :
The migrated objects may be reviewed and edited here. You can manually edit
the generated SQL before applying it to the target database. Target schemas
and tables may be renamed, and column definitions may be changed, by double-
clicking on them.
By default, the “All Objects” View is loaded. Other View options include
“Migration Problems” and “Column Mappings”.
All Objects: Shows all objects, which can also be edited by double-clicking.
Migration Problems: Lists all of the migration problems, or reports that no
mapping problems were found.
Column Mappings: Displays all of the schema columns, which may also be
edited. There is an advanced “Show Code and Messages” option that
displays the SQL CREATE script for the selected object.
(vii) Target Creation Options :
Defines additional settings for the target schema.
Configuration options include:
Create schema in target RDBMS:
Create a SQL script file:
An option to keep the schemata if they already exist. Objects that already
exist will not be recreated or updated.
(viii) Schema Creation :
The SQL scripts generated for the migrated schema objects will now be executed
in the target database. You can monitor execution in the logs; if errors exist,
they can be fixed in the next step. Table data will be migrated in a later step as
well.
This is an automated step, where the actions include: Create Script File, Connect
to Target Database, and Create Schemata and Objects.
(ix) Create Target Results :
Scripts to create the target schemas were executed, but the data has not yet been
migrated. This step allows you to review a creation report. If there are any errors,
you can manually fix the scripts and click “Recreate Objects” to retry the
schema creation, or return to the Manual Editing page to correct them there, and
then retry the target creation.
To edit, first select the object; the SQL CREATE script for that object will then be
shown. Edit it there, then press “Apply” to save.
(x) Data Migration Setup :
Provides additional options for data transfer, including the ability to set up a
script to automate this transfer in the future.
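Conceptually, such a transfer script reads rows from the source over ODBC and
inserts them into the target MySQL server. The sketch below is illustrative only
(hypothetical DSNs, table, and columns); the script generated by the wizard is the
supported way to perform the copy:

import pyodbc

# Hypothetical DSNs for the source database and the target MySQL server.
source = pyodbc.connect("DSN=SourceDB;UID=migrator;PWD=secret")
target = pyodbc.connect("DSN=TargetMySQL;UID=migrator;PWD=secret")

src_cur, tgt_cur = source.cursor(), target.cursor()
src_cur.execute("SELECT id, name, created_at FROM customers")

while True:
    rows = src_cur.fetchmany(1000)   # copy in batches to bound memory use
    if not rows:
        break
    tgt_cur.executemany(
        "INSERT INTO customers (id, name, created_at) VALUES (?, ?, ?)",
        [tuple(r) for r in rows],
    )

target.commit()
source.close()
target.close()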
(xi) Bulk Data Transfer :
The transfer is executed here.
(xii) Migration Report :
Displays the final report that can be reviewed to ensure a proper migration was
executed.