This ppt was used by Devrim at pgDay Asia 2017. He talked about some important facts about WAL (the transaction log, or xlog) in PostgreSQL. Some of these can really come in handy on a bad day.
Introduction to the Hadoop Ecosystem with Hadoop 2.0 aka YARN (Java Serbia Ed..., by Uwe Printz
Talk held at the Java User Group on 05.09.2013 in Novi Sad, Serbia
Agenda:
- What is Big Data & Hadoop?
- Core Hadoop
- The Hadoop Ecosystem
- Use Cases
- What's next? Hadoop 2.0!
Hadoop is the popular open source implementation of MapReduce, a powerful tool designed for deep analysis and transformation of very large data sets. Hadoop enables you to explore complex data, using custom analyses tailored to your information and questions. Hadoop is the system that allows unstructured data to be distributed across hundreds or thousands of machines forming shared-nothing clusters, and the execution of Map/Reduce routines to run on the data in that cluster. Hadoop has its own filesystem, which replicates data to multiple nodes to ensure that if one node holding data goes down, there are at least two other nodes from which to retrieve that piece of information. This protects data availability against node failure, something which is critical when there are many nodes in a cluster (aka RAID at a server level).
What is Hadoop? The data are stored in a relational database on your desktop computer, and this desktop computer has no problem handling the load. Then your company starts growing very quickly, and that data grows to 10GB, then 100GB, and you start to reach the limits of your current desktop computer. So you scale up by investing in a larger computer, and you are then OK for a few more months. When your data grows to 10TB, and then 100TB, you are fast approaching the limits of that computer. Moreover, you are now asked to feed your application with unstructured data coming from sources like Facebook, Twitter, RFID readers, sensors, and so on. Your management wants to derive information from both the relational data and the unstructured data, and wants this information as soon as possible. What should you do? Hadoop may be the answer!
Hadoop is an open source project of the Apache Foundation. It is a framework written in Java, originally developed by Doug Cutting, who named it after his son's toy elephant. Hadoop uses Google's MapReduce and Google File System technologies as its foundation. It is optimized to handle massive quantities of data, which could be structured, unstructured, or semi-structured, using commodity hardware, that is, relatively inexpensive computers. This massively parallel processing is done with great performance. However, it is a batch operation handling massive quantities of data, so the response time is not immediate. As of Hadoop version 0.20.2, updates are not possible, but appends will be possible starting in version 0.21. Hadoop replicates its data across different computers, so that if one goes down, the data are processed on one of the replicated computers. Hadoop is not suitable for OnLine Transaction Processing workloads, where data are randomly accessed on structured data like a relational database. Nor is it suitable for OnLine Analytical Processing or Decision Support System workloads, where data are sequentially accessed on structured data to generate reports that provide business intelligence. Hadoop is used for Big Data; it complements OnLine Transaction Processing and OnLine Analytical Processing.
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using a simple programming model. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.
This presentation is for people who want to understand how PostgreSQL shares information among processes using shared memory. Topics covered include the internal data page format, usage of the shared buffers, locking methods, and various other shared memory data structures.
PostgreSQL High-Performance Cheat Sheets contain quick methods for finding performance issues.
A summary of the course so that when problems arise, you can easily uncover the performance bottlenecks.
Supporting material - installation and configuration of Linux network operating systems.
File systems
File system types: ext2, ext3, ext4, reiserfs, xfs
Setting up a GeoServer can sometimes be deceptively simple. However, going from proof-of-concept to production requires a number of steps to be taken in order to optimize the server in terms of availability, performance and scalability. The presentation will show how to get from a basic setup to a battle-ready, rock-solid installation.
HPE Data Protector Disaster Recovery Guide, by Andrey Karpov
This chapter provides a general overview of the disaster recovery process, explains the basic terms used in the Disaster Recovery guide, and provides an overview of disaster recovery methods.
Carefully follow the instructions below to prepare for disaster recovery and ensure a fast and efficient restore. The preparation procedure does not depend on the disaster recovery method, and includes developing a detailed disaster recovery plan, performing consistent and relevant backups, and updating the SRD file on Windows.
Assisted Manual Disaster Recovery (AMDR)
Manual Disaster Recovery (MDR)
This chapter contains descriptions of problems you might encounter while performing a disaster recovery. You can start with problems connected to a particular disaster recovery method and continue with general disaster recovery problems.
Example Preparation Tasks
Introducing Data Redaction - an enabler to data security in EDB Postgres Adva..., by EDB
With the rapid growth in digitalization, coupled with the current pandemic situation globally, many organizations and businesses are forced to operate remotely and online, more than they would prefer. At such times, how do corporations and businesses ensure data security, especially the secure management of personal information?
There are many techniques used to secure information, such as authentication, authorization, access control, virtual database, and encryption. In this webinar, we focus on Data Redaction - a technique that limits sensitive data exposure in EDB Postgres Advanced Server (EPAS).
This webinar covers:
- What is EDB Data Redaction
- How to limit sensitive data exposure in EPAS
- Provision for Oracle compatibility in EPAS
- Demo
Learning MariaDB Migration from Real Cases
We deliberate every day over how to build modern IT environments and applications. As open source databases have recently been adopted and proven in many workloads, the movement to replace heavyweight commercial databases with lighter open source databases is spreading, even to mission-critical workloads at large enterprises. This trend also aligns with the spread of cloud environments and the microservice concept.
Through a case in which a commercial DB was migrated to MariaDB, you can examine the migration process and its effects.
If you have vaguely assumed that moving to MariaDB is difficult, this material shows, through a real case, that migrating a heterogeneous database to MariaDB can be carried out without much difficulty.
Webinar video:
https://www.youtube.com/watch?v=xRsETZ5cKz8&t=52s
This presentation covers a number of the ways that you can tune PostgreSQL to better handle high write workloads. We will cover both application and database tuning methods, as each type can have substantial benefits but can also interact in unexpected ways when you are operating at scale. On the application side we will look at write batching, use of GUIDs, general index structure, the cost of additional indexes, and the impact of working set size. For the database we will see how WAL compression, autovacuum, and checkpoint settings, as well as a number of other configuration parameters, can greatly affect the write performance of your database and application.
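To make the database-side knobs concrete, here is a minimal sketch of how such settings are changed in PostgreSQL; the parameters are real PostgreSQL settings named in the abstract, but the values are illustrative assumptions, not recommendations from the talk.
ALTER SYSTEM SET wal_compression = on;                 -- compress full-page images written to the WAL
ALTER SYSTEM SET max_wal_size = '8GB';                 -- space out WAL-driven checkpoints (illustrative value)
ALTER SYSTEM SET checkpoint_timeout = '15min';         -- time-driven checkpoint interval (illustrative value)
ALTER SYSTEM SET checkpoint_completion_target = 0.9;   -- spread checkpoint I/O across the interval
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 1000;  -- let autovacuum keep up under heavy writes (illustrative value)
SELECT pg_reload_conf();                               -- apply the settings that do not require a restart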
- Introduction to MariaDB
- Understanding MariaDB server configuration and architecture
- MariaDB storage engines
- MariaDB database administration
- Understanding transactions / locking
- MariaDB security
- Database administration through backup and recovery
- MariaDB upgrade
- MariaDB monitoring
- Migrating from MySQL to MariaDB
Introduces the cloud database services provided by Naver Cloud Platform and shares know-how for managing cloud databases on Naver Cloud Platform.
4. Migration Overview (planning stage)
Target workloads
- Which workloads to migrate from the MSSQL base to MariaDB
Migration scope
Migration schedule
Conversion status of the target workloads:
- Objects: 00
- Tables: 00 types (conversion target: MSSQL-based tables and MSSQL data)
- Stored procedures and functions: 00 types (conversion target: objects inside the MSSQL-based DB)
- Applications: 00 (conversion target: convert, apply, and test the SQL statements inside the applications)
- Development language: .NET (conversion target: convert the SQL statements inside the applications)
Schedule (work period 2014/00/00 ~ 00/00, weeks W through W+8; R&R marked as developer / DBA):
Analysis of the migration target workloads
- Environment analysis (developer: O, DBA: O)
- Environment setup (developer: O, DBA: O)
Conversion
- DB object conversion (developer: X, DBA: O)
- Application conversion (developer: △, DBA: O)
- Data migration, including TEST migration (developer: X, DBA: O)
TEST
- Application TEST (developer: O, DBA: O)
- Writing the results report (developer: O, DBA: O)
[Week-by-week Gantt bars omitted]
5. Migration Overview - Detailed Plan for Database Schema Migration (planning stage)
From MSSQL to MariaDB, in three steps:
[Step 1] System analysis: identify the targets, analyze the objects and data (Table, Index, Function, Procedure, ...), and identify risk factors
[Step 2] Object conversion: convert the objects, verify the converted objects, and fix them after verification
[Step 3] Data migration: check table row counts and validate erroneous data
6. Automated Diagnostic Tools for the Migration (planning stage)
Issues
- There are so many DB objects to migrate in SQL Server that the amount of work is hard to estimate. How do we solve this?
- The migration to MariaDB is finished. Is there a way to tell at a glance whether it was applied correctly?
Solutions
- MySQL Workbench: Database Migration
Using the database migration wizard provided by MySQL Workbench, you can extract MariaDB DDL statements for the MSSQL DB objects, together with scripts for the MSSQL DB objects.
- SQL Server Migration Assistant for MySQL (MySQLToSQL)
Using the SQL Server Migration Assistant tool, you can review the implemented MariaDB source and extract scripts.
7. Automated Diagnostic Tools - SQL Server Migration Assistant (SSMA) (planning stage) [screenshots omitted]
8. Automated Diagnostic Tools - MySQL Workbench Data Migration Wizard (planning stage) [screenshots omitted]
11. Data Modeling (execution stage)
Issue
- SQL Server and MariaDB each have their own data types and characteristics. How should we work so that the migrated data are stored accurately in MariaDB?
Solution
- Define mapping rules based on the characteristics of each data type, and migrate the data according to those rules.
12. MSSQL to MariaDB Data Type Mapping (execution stage)
Integer types:
- tinyint -> TINYINT (UNSIGNED)
- smallint -> SMALLINT
- int -> MEDIUMINT, INT
- bigint -> BIGINT
Fixed-point types:
- Decimal(p,s) -> DECIMAL(M,D)
13. MSSQL to MariaDB Data Type Mapping, continued (execution stage)
Floating-point types:
- float(n) -> FLOAT(N)
- float(24) -> FLOAT(M,D)
- float(53) -> DOUBLE(M,D), REAL(M,D)
Bit types:
- bit -> BIT
- bool / boolean -> BIT
14. MSSQL to MariaDB Data Type Mapping, continued (execution stage)
Date and time types:
- datetime2 / datetime -> DATETIME
- date -> DATE
- time -> TIME
- smalldatetime -> TIMESTAMP
- smallint -> YEAR
15. MSSQL to MariaDB Data Type Mapping, continued (execution stage)
Character types:
- nchar(n) / char(n) -> CHAR
- nvarchar(n|max) / varchar(n|max) -> VARCHAR, TINYTEXT, TEXT(M), MEDIUMTEXT, LONGTEXT
- varbinary(n|max) -> TINYBLOB, BLOB, MEDIUMBLOB, LONGBLOB
16. MSSQL to MariaDB Data Type Mapping, continued (execution stage)
MSSQL-specific data types (no direct MariaDB equivalent):
- hierarchyid -> ?
- uniqueidentifier -> ?
- sql_variant -> ?
- table -> ?
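As a worked example of these mapping rules, here is a minimal sketch; the table and column names are hypothetical, and the chosen MariaDB types simply follow the tables above.
-- Hypothetical SQL Server source:
--   CREATE TABLE dbo.customer (
--     id        bigint        NOT NULL PRIMARY KEY,
--     age       tinyint       NULL,
--     balance   decimal(12,2) NULL,
--     joined_at datetime      NULL,
--     name      nvarchar(100) NULL
--   );
-- MariaDB target, applying the mapping rules:
CREATE TABLE customer (
  id        BIGINT           NOT NULL,
  age       TINYINT UNSIGNED NULL,       -- MSSQL tinyint is 0..255, hence UNSIGNED
  balance   DECIMAL(12,2)    NULL,
  joined_at DATETIME         NULL,
  name      VARCHAR(100)     NULL,
  PRIMARY KEY (id)
);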
17. Large Value Data Types (execution stage)
Issue
- In the source SQL Server, varchar(max) and nvarchar(max), which are used to store large amounts of data, were found in many places. Which data types should these columns get when their tables are migrated to MariaDB?
- MariaDB raises an error when the total size of a record, excluding its TEXT and BLOB columns, exceeds 64KB. [Error screenshot omitted]
Solutions
- Check against the application and the maximum size of the stored values whether the columns declared as varchar(max) or nvarchar(max) really need such types, and optimize the data types accordingly.
- If the total record size does not exceed 64KB, use the VARCHAR or VARBINARY type. A sketch follows below.
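A minimal sketch of the two outcomes, with hypothetical names. The 64KB row-size limit counts only a small pointer for TEXT and BLOB columns, which is why moving a large column to a TEXT type takes it out of the budget.
-- If the observed values are small, a sized VARCHAR keeps the data in the row:
ALTER TABLE orders MODIFY COLUMN memo VARCHAR(2000);
-- If the values can genuinely be large, use a TEXT type; its payload is stored
-- outside the row and no longer counts against the 64KB row-size limit:
ALTER TABLE orders MODIFY COLUMN memo LONGTEXT;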
18. Partitioned Tables (execution stage)
Issues
- Migrating log-style tables whose old data must be deleted periodically.
- Migrating tables whose data are distributed evenly across the primary key.
Solutions
- MariaDB provides six main kinds of partitioning: range, list, hash, key, linear hash / linear key, and subpartitioning.
- Understand the characteristics of each and apply the most suitable one; a sketch for the log-table case follows below.
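For the periodically purged log table, range partitioning by date is the usual fit, because dropping a partition is far cheaper than a bulk DELETE. A minimal sketch with hypothetical names; MariaDB requires the partitioning column to be part of every unique key, which is why created_at is included in the primary key.
CREATE TABLE app_log (
  log_id     BIGINT NOT NULL AUTO_INCREMENT,
  created_at DATE   NOT NULL,
  message    TEXT,
  PRIMARY KEY (log_id, created_at)
)
PARTITION BY RANGE (TO_DAYS(created_at)) (
  PARTITION p201501 VALUES LESS THAN (TO_DAYS('2015-02-01')),
  PARTITION p201502 VALUES LESS THAN (TO_DAYS('2015-03-01')),
  PARTITION pmax    VALUES LESS THAN MAXVALUE
);
-- Periodic purge: dropping the oldest partition is a cheap metadata operation.
ALTER TABLE app_log DROP PARTITION p201501;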
19. Character Set & Collation (execution stage)
Issue
- We run a multilingual service. Which character set and collation should we use in MariaDB?
Checking the character set & collation:
SELECT * FROM information_schema.columns
WHERE table_schema = 'mariadb_database' AND table_name = 'mariadb_table';
SHOW CREATE TABLE mariadb_table;
Solutions
- To support multiple languages in MariaDB, use utf8 as the column character set (see the sketch below).
- If a collation is not specified explicitly, the character set's default collation is used implicitly; the default collation of utf8 is utf8_general_ci.
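A minimal sketch, with hypothetical names, of setting the character set at the table level so that every string column inherits it:
CREATE TABLE message (
  id   INT NOT NULL AUTO_INCREMENT,
  body VARCHAR(1000),
  PRIMARY KEY (id)
) DEFAULT CHARSET=utf8 COLLATE=utf8_general_ci;
-- Note: MariaDB's utf8 stores at most 3 bytes per character; utf8mb4 is needed
-- for 4-byte characters such as emoji.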
20. The AUTO_INCREMENT Option (execution stage)
Issue
- Error 1075 occurs when creating a table with an AUTO_INCREMENT column in the InnoDB storage engine.
Solutions
- In the InnoDB storage engine, the table is created successfully only if the AUTO_INCREMENT column is the leading column of at least one index, either the primary key or a unique key; a sketch follows below.
- In the MyISAM storage engine, the table is created successfully as long as the AUTO_INCREMENT column is included in the primary key or a unique key, regardless of its position.
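A minimal sketch of the failure and the usual fix, with hypothetical names:
-- Fails on InnoDB with error 1075: seq is not the leading column of any key.
CREATE TABLE t_fail (
  grp INT NOT NULL,
  seq INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (grp, seq)
) ENGINE=InnoDB;
-- Works: seq now leads an index of its own.
CREATE TABLE t_ok (
  grp INT NOT NULL,
  seq INT NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (grp, seq),
  KEY (seq)
) ENGINE=InnoDB;
-- The same t_fail definition with ENGINE=MyISAM would be accepted (and MyISAM
-- would then number seq separately within each grp value).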
21. Meaning of the Length Specified After Integer Types, and Caveats (execution stage)
- In MariaDB, the length written after an integer type, such as INT(5), is only a display width; it does not limit the range of values that can be stored (see the sketch below).
- [Screenshot: in SQL Server Management Studio, the inserted data are retrieved correctly.]
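A minimal demonstration of the display-width point, with a hypothetical table name:
CREATE TABLE width_demo (n INT(5));
INSERT INTO width_demo VALUES (1234567890);  -- accepted: the (5) is not a size limit
SELECT n FROM width_demo;                    -- returns 1234567890
-- The display width only affects padding when ZEROFILL is used.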
22. Data Migration (execution stage)
Issue
- How do we move the data stored in SQL Server over to MariaDB?
Solutions (a sketch of the second option follows below)
- Perform the migration with a SQL Server SSIS package.
- Export the SQL Server table data to text files, then load them with MariaDB's LOAD DATA INFILE command.
- Register MariaDB as a Linked Server in SQL Server and migrate the data with OPENQUERY.
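A minimal sketch of the text-file route; the file path, table name, and delimiters are assumptions that must match however the data were exported from SQL Server:
LOAD DATA INFILE '/tmp/customer.txt'
INTO TABLE customer
CHARACTER SET utf8
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\n';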
23. System Function Differences (execution stage)
Syntax: datepart (SQL Server) vs weekday (MariaDB)
Example: SELECT datepart(dw, '2015-03-02'); vs SELECT weekday('2015-03-02');
Result: [screenshot omitted]
Caveat: SQL Server returns Sunday = 1 through Saturday = 7; MariaDB's weekday returns Monday = 0 through Sunday = 6.
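If the goal is a drop-in match for SQL Server's default numbering (Sunday = 1 through Saturday = 7 under the default DATEFIRST setting), MariaDB's DAYOFWEEK() already follows that ODBC convention, so the conversion can avoid renumbering:
SELECT datepart(dw, '2015-03-02');  -- SQL Server: 2 (Monday, with DATEFIRST = 7)
SELECT weekday('2015-03-02');       -- MariaDB: 0 (Monday)
SELECT dayofweek('2015-03-02');     -- MariaDB: 2, matching the SQL Server default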
24. Testing and Verification (execution stage)
- Convert the SQL statements found in the source files into MariaDB-based queries.
- Verify the converted query statements functionally in a client tool.
- Dynamic queries, parameter variations, and the like are corrected during the developers' verification work.
- Differences caused by the two databases' processing models, such as sort order and floating-point arithmetic, are also corrected.
Verification steps (R&R: DBA / developers, plus the business owners for the critical-business checks):
- Syntax verification
- Functional verification
- Integration tests with critical business systems
- Test data verification
- Critical business verification
- Basic performance verification
26. MariaDB vs MSSQL Syntax Differences (future plans)
Adding a column
SQL Server:
ALTER TABLE table_name
ADD column_name column_property
MariaDB:
ALTER TABLE table_name
ADD column_name column_property;
Modifying a column
SQL Server:
ALTER TABLE table_name
ALTER COLUMN column_name new_column_property
MariaDB:
ALTER TABLE table_name
MODIFY COLUMN column_name new_column_property;
Renaming a column
SQL Server:
EXEC sp_rename 'Table_name.Old_column_name', 'New_column_name', 'COLUMN'
MariaDB:
ALTER TABLE table_name
CHANGE COLUMN Old_column_name New_column_name new_column_property;
Dropping a column
SQL Server:
ALTER TABLE table_name
DROP COLUMN column_name
MariaDB:
ALTER TABLE table_name
DROP COLUMN column_name;
Commenting on a column
SQL Server:
EXEC sp_addextendedproperty
@name = N'property_name',
@value = 'description' ...
MariaDB:
ALTER TABLE table_name
MODIFY COLUMN column_name column_property COMMENT 'description';
Creating a table
SQL Server:
CREATE TABLE [schema].table_name
( column_name data_type column_constraints,
...
table_constraints )
[ON filegroup / partition_scheme]
MariaDB:
CREATE [OR REPLACE] TABLE [IF NOT EXISTS] tbl_name
(create_definition, ...) [table_options] ... [partition_options]
Creating a temporary table
SQL Server:
CREATE TABLE #table_name (local temporary table)
CREATE TABLE ##table_name (global temporary table)
MariaDB:
CREATE [TEMPORARY] TABLE [IF NOT EXISTS] tbl_name
(create_definition, ...) [table_options] ... [partition_options]
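As one concrete instance of the comparison above, renaming a column in each system; the names are hypothetical, and note that MariaDB's CHANGE COLUMN must restate the full column definition:
-- SQL Server
EXEC sp_rename 'dbo.customer.name', 'full_name', 'COLUMN';
-- MariaDB
ALTER TABLE customer CHANGE COLUMN name full_name VARCHAR(100);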
27. References
• 3rd Party Tools
– SQL Server Migration Assistant for MySQL (MySQLToSQL)
https://msdn.microsoft.com/en-us/library/hh313109(v=sql.110).aspx
– MySQL Workbench: Database Migration
http://www.mysql.com/products/workbench/migrate/
• Books
– 이성욱, 개발자와 DBA를 위한 Real MySQL (Real MySQL for Developers and DBAs), Wikibooks, 2012
– 이성욱, MariaDB 10.0과 MySQL 5.6을 한번에 배우는 Real MariaDB (Real MariaDB: MariaDB 10.0 and MySQL 5.6 in One Book), Wikibooks, 2014
– 성동찬, MariaDB 실전 활용 노하우 (Practical MariaDB Know-How), Hanbit Media, 2014