Pivotal Greenplum Database
INSTALLATION GUIDE ON VM
Prepared By
Pivotal Korea
DOCUMENT CONTROL
For any questions regarding this document contact:
Name: Seungdon Choi
E-mail: schoi@pivotal.io
Document Revision History
Date        Version  Description                Author  Reviewer
04/02/2015  0.1      Draft for internal review
09/02/2015  0.9      For Distribution
Table of Contents
DOCUMENT OVERVIEW
SYSTEM CONFIGURATION
OS PREREQUISITE
RUN GREENPLUM INSTALLER
INSTALL & CONFIG ALL HOSTS
CREATE DATA STORAGE AREAS
VALIDATE SYSTEM
INITIALIZE GREENPLUM DATABASE SYSTEM
RUN ERRANDS & CONFIGURATIONS
DOCUMENT OVERVIEW
This document describes how to install Pivotal Greenplum Database in a Virtual Machine environment.
i. The guidance in this document is not an official Pivotal position; refer to http://docs.pivotal.io for the official installation documentation.
ii. The Greenplum Database settings used in this document are not production values; they are the minimum settings for testing on a personal PC. Settings for a production environment require the official documentation and consulting from Pivotal.
Reference documents
Pivotal Greenplum Installation Guide
http://gpdb.docs.pivotal.io/4340/index.html#install_guide/install_guide.html
SYSTEM CONFIGURATION
The VM and installation details used in this document are as follows.
Machine:
MacBook Pro – CPU 2.6GHz Intel i7, 16GB Mem
VMware Fusion 6
VM Name  IP          Role
HD1      10.10.10.1  Master
HD2      10.10.10.2  Standby Master, Segment
HD3      10.10.10.3  Segment
HD4      10.10.10.4  Segment

Each VM node is configured with 1 vCPU and 1 GB of memory. OS: CentOS 6.5.
For a real deployment, follow the official documentation
(http://gpdb.docs.pivotal.io/4340/index.html#prep_os-system-req.html#topic2).
OS PREREQUISITE
1. Download the Greenplum installation package
From http://network.pivotal.io, select
Pivotal Greenplum Database > 4.3.4.0 Database Server >
Greenplum Database 4.3.4.0 for Red Hat Enterprise Linux 5 and 6,
download it, and upload it to the HD1 VM.
Configure the OS on each node as follows.
2. Disable the firewall (iptables)
/sbin/chkconfig iptables off
/sbin/chkconfig --list iptables
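Note that chkconfig only changes the boot-time setting; to stop a firewall that is already running, the standard CentOS 6 service commands can be used as well:
/sbin/service iptables stop
/sbin/service iptables status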
3. Disable SELinux
In /etc/selinux/config, set:
SELINUX=disabled
Reboot the system, then verify with:
sestatus
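As a convenience, a minimal sketch of applying the SELinux change non-interactively (assuming the stock CentOS 6 layout of /etc/selinux/config):
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config
reboot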
4. Add the following settings to /etc/sysctl.conf. (The XFS mount options rw,noatime,inode64,allocsize=16m are filesystem mount options, not sysctl parameters; see the /etc/fstab sketch after this list.)
kernel.shmmax = 500000000
kernel.shmmni = 4096
kernel.shmall = 4000000000
kernel.sem = 250 512000 100 2048
kernel.sysrq = 1
kernel.core_uses_pid = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.msgmni = 2048
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_forward = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_max_syn_backlog = 4096
net.ipv4.conf.all.arp_filter = 1
net.ipv4.ip_local_port_range = 1025 65535
net.core.netdev_max_backlog = 10000
vm.overcommit_memory = 2
# sysctl -p
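The XFS mount options belong in /etc/fstab. A minimal sketch, assuming a hypothetical dedicated XFS data device /dev/sdb1 mounted at /data:
/dev/sdb1 /data xfs rw,noatime,inode64,allocsize=16m 0 0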
5. Add the following to /etc/security/limits.conf:
* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
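limits.conf is applied at login, so log in again and verify, e.g.:
ulimit -n    # expect 65536
ulimit -u    # expect 131072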
6. Configure NTP
In /etc/ntp.conf, point each node at the master node (HD1):
server HD1 prefer
server HD2
Use the NTP daemon to synchronize the system clocks of all Greenplum hosts.
[root@hd1 greenplum-db]# gpssh -f hostfile_exkeys -v -e 'ntpd'
[Reset ...]
[INFO] login hd1
[INFO] login hd2
[INFO] login hd3
[INFO] login hd4
[hd1] ntpd
[hd2] ntpd
[hd3] ntpd
[hd4] ntpd
[INFO] completed successfully
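The run above starts ntpd once; a sketch to make it persistent across reboots and confirm peers on every host (standard CentOS 6 commands):
gpssh -f hostfile_exkeys -e 'chkconfig ntpd on'
gpssh -f hostfile_exkeys -e 'service ntpd restart'
gpssh -f hostfile_exkeys -e 'ntpq -p'    # confirm HD1 is selected as the peer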
RUN GREENPLUM INSTALLER
Install the Greenplum media on the master node (HD1).
As the root user:
-rw-r--r--. 1 gpadmin gpadmin 133791166 Feb 3 23:32 greenplum-db-4.3.4.0-build-1-RHEL5-
x86_64.zip
[gpadmin@hd1 stage]$ unzip greenplum-db-4.3.4.0-build-1-RHEL5-x86_64.zip
Archive: greenplum-db-4.3.4.0-build-1-RHEL5-x86_64.zip
inflating: README_INSTALL
inflating: greenplum-db-4.3.4.0-build-1-RHEL5-x86_64.bin
[root@hd1 stage]# ./greenplum-db-4.3.4.0-build-1-RHEL5-x86_64.bin
********************************************************************************
You must read and accept the Pivotal Database license agreement
before installing
********************************************************************************
*** IMPORTANT INFORMATION - PLEASE READ CAREFULLY ***
PIVOTAL GREENPLUM DATABASE END USER LICENSE AGREEMENT
IMPORTANT - READ CAREFULLY: This Software contains computer programs and
other proprietary material and information, the use of which is subject to
and expressly conditioned upon acceptance of this End User License
Agreement ("EULA").
…… (license text omitted)
********************************************************************************
Do you accept the Pivotal Database license agreement? [yes|no]
********************************************************************************
yes
********************************************************************************
Provide the installation path for Greenplum Database or press ENTER to
accept the default installation path: /usr/local/greenplum-db-4.3.4.0
********************************************************************************
********************************************************************************
Install Greenplum Database into </usr/local/greenplum-db-4.3.4.0>? [yes|no]
*******************************************************************************
yes
********************************************************************************
/usr/local/greenplum-db-4.3.4.0 does not exist.
Create /usr/local/greenplum-db-4.3.4.0 ? [yes|no]
(Selecting no will exit the installer)
********************************************************************************
yes
********************************************************************************
[Optional] Provide the path to a previous installation of Greenplum Database,
or press ENTER to skip this step. e.g. /usr/local/greenplum-db-4.1.1.3
This installation step will migrate any Greenplum Database extensions from the
provided path to the version currently being installed. This step is optional
and can be run later with:
gppkg --migrate <path_to_old_gphome> /usr/local/greenplum-db-4.3.4.0
********************************************************************************
Extracting product to /usr/local/greenplum-db-4.3.4.
Skipping migration of Greenplum Database extensions...
********************************************************************************
Installation complete.
Greenplum Database is installed in /usr/local/greenplum-db-4.3.4.0
Pivotal Greenplum documentation is available
for download at http://docs.gopivotal.com/gpdb
*******************************************************************************
INSTALL & CONFIG ALL HOSTS
Log in with root privileges and source the Greenplum environment variables:
# su -
# source /usr/local/greenplum-db/greenplum_path.sh
Create a hostfile listing the hosts where Greenplum will be installed:
vi hostfile_exkeys
hd1
hd2
hd3
hd4
Before installing, exchange SSH keys for the root user:
gpssh-exkeys -f hostfile_exkeys
Install the Greenplum software on all nodes:
[root@hd1 greenplum-db]# gpseginstall -f hostfile -u gpadmin -p changeme
20150204:00:31:30:002919 gpseginstall:hd1:root-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /usr/local/greenplum-db-4.3.4.0
binary_dir_location /usr/local
binary_dir_name greenplum-db-4.3.4.0
20150204:00:31:30:002919 gpseginstall:hd1:root-[INFO]:-check cluster password access
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-de-duplicate hostnames
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-master hostname: hd1.vm
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-check for user gpadmin on cluster
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-add user gpadmin on master
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-add user gpadmin on cluster
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-chown -R gpadmin:gpadmin
/usr/local/greenplum-db
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-chown -R gpadmin:gpadmin
/usr/local/greenplum-db-4.3.4.0
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-rm -f /usr/local/greenplum-db-
4.3.4.0.tar; rm -f /usr/local/greenplum-db-4.3.4.0.tar.gz
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-cd /usr/local; tar cf greenplum-db-
4.3.4.0.tar greenplum-db-4.3.4.0
20150204:00:31:34:002919 gpseginstall:hd1:root-[INFO]:-gzip /usr/local/greenplum-db-4.3.4.0.tar
20150204:00:31:52:002919 gpseginstall:hd1:root-[INFO]:-remote command: mkdir -p /usr/local
20150204:00:31:52:002919 gpseginstall:hd1:root-[INFO]:-remote command: rm -rf
/usr/local/greenplum-db-4.3.4.0
20150204:00:31:53:002919 gpseginstall:hd1:root-[INFO]:-scp software to remote location
20150204:00:31:57:002919 gpseginstall:hd1:root-[INFO]:-remote command: gzip -f -d
/usr/local/greenplum-db-4.3.4.0.tar.gz
20150204:00:32:04:002919 gpseginstall:hd1:root-[INFO]:-md5 check on remote location
20150204:00:32:05:002919 gpseginstall:hd1:root-[INFO]:-remote command: cd /usr/local; tar xf
greenplum-db-4.3.4.0.tar
20150204:00:32:08:002919 gpseginstall:hd1:root-[INFO]:-remote command: rm -f
/usr/local/greenplum-db-4.3.4.0.tar
20150204:00:32:09:002919 gpseginstall:hd1:root-[INFO]:-remote command: cd /usr/local; rm -f
greenplum-db; ln -fs greenplum-db-4.3.4.0 greenplum-db
20150204:00:32:09:002919 gpseginstall:hd1:root-[INFO]:-remote command: chown -R
gpadmin:gpadmin /usr/local/greenplum-db
20150204:00:32:09:002919 gpseginstall:hd1:root-[INFO]:-remote command: chown -R
gpadmin:gpadmin /usr/local/greenplum-db-4.3.4.0
20150204:00:32:10:002919 gpseginstall:hd1:root-[INFO]:-rm -f /usr/local/greenplum-db-
4.3.4.0.tar.gz
20150204:00:32:10:002919 gpseginstall:hd1:root-[INFO]:-Changing system passwords ...
20150204:00:32:11:002919 gpseginstall:hd1:root-[INFO]:-exchange ssh keys for user root
20150204:00:32:12:002919 gpseginstall:hd1:root-[INFO]:-exchange ssh keys for user gpadmin
20150204:00:32:12:002919 gpseginstall:hd1:root-[INFO]:-/usr/local/greenplum-
db/./sbin/gpfixuserlimts -f /etc/security/limits.conf -u gpadmin
20150204:00:32:12:002919 gpseginstall:hd1:root-[INFO]:-remote command: .
/usr/local/greenplum-db/./greenplum_path.sh; /usr/local/greenplum-db/./sbin/gpfixuserlimts -f
/etc/security/limits.conf -u gpadmin
20150204:00:32:13:002919 gpseginstall:hd1:root-[INFO]:-version string on master: gpssh version
4.3.4.0 build 1
20150204:00:32:13:002919 gpseginstall:hd1:root-[INFO]:-remote command: .
/usr/local/greenplum-db/./greenplum_path.sh; /usr/local/greenplum-db/./bin/gpssh --version
20150204:00:32:13:002919 gpseginstall:hd1:root-[INFO]:-remote command: .
/usr/local/greenplum-db-4.3.4.0/greenplum_path.sh; /usr/local/greenplum-db-4.3.4.0/bin/gpssh --
version
20150204:00:32:18:002919 gpseginstall:hd1:root-[INFO]:-SUCCESS -- Requested commands
completed
Validate the installation.
su - gpadmin
Add the following to .bash_profile:
source /usr/local/greenplum-db/greenplum_path.sh
Exchange SSH keys for the gpadmin user:
gpssh-exkeys -f hostfile_exkeys
Verify the installed software on every host:
gpssh -f hostfile_exkeys -e ls -l $GPHOME
If the installation completed successfully, the Greenplum software is present on each host, and you can log in to each node as gpadmin without being prompted for a password.
CREATE DATA STORAGE AREAS
Create the data directories on each node.
Master Node
[root@hd1 home]# mkdir -p /data/master
[root@hd1 home]# chown -R gpadmin:gpadmin /data/master
[root@hd1 home]# source /usr/local/greenplum-db/greenplum_path.sh
[root@hd1 home]# gpssh -h hd2 -e 'mkdir -p /data/master'
[hd2] mkdir -p /data/master
[root@hd1 home]# gpssh -h hd2 -e 'chown -R gpadmin:gpadmin /data/master'
[hd2] chown -R gpadmin:gpadmin /data/master
Segment Node
[root@hd1 home]# vi hostfile_gpssh_segonly
hd2
hd3
hd4
gpssh -f hostfile_gpssh_segonly -e 'mkdir -p /data/primary'
gpssh -f hostfile_gpssh_segonly -e 'mkdir -p /data/mirror'
gpssh -f hostfile_gpssh_segonly -e 'chown -R gpadmin:gpadmin /data/primary'
gpssh -f hostfile_gpssh_segonly -e 'chown -R gpadmin:gpadmin /data/mirror'
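As a quick sanity check, the presence and ownership of the directories can be confirmed across the segment hosts:
gpssh -f hostfile_gpssh_segonly -e 'ls -ld /data/primary /data/mirror'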
VALIDATE SYSTEM
1. Use gpcheck to validate the OS settings on each node.
hostfile_gpcheck:
hd1
hd2
hd3
hd4
[root@hd1 greenplum-db]# gpcheck -f hostfile_exkeys -m hd1 -s hd2
20150208:16:01:27:004841 gpcheck:hd1:root-[INFO]:-dedupe hostnames
20150208:16:01:27:004841 gpcheck:hd1:root-[INFO]:-Detected platform: Generic Linux Cluster
20150208:16:01:27:004841 gpcheck:hd1:root-[INFO]:-generate data on servers
20150208:16:01:28:004841 gpcheck:hd1:root-[INFO]:-copy data files from servers
20150208:16:01:28:004841 gpcheck:hd1:root-[INFO]:-delete remote tmp files
20150208:16:01:28:004841 gpcheck:hd1:root-[INFO]:-Using gpcheck config file:
/usr/local/greenplum-db/./etc/gpcheck.cnf
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on
device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on
device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on
device (/dev/sda3) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on
device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on
device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on
device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on
device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on
device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on
device (/dev/sda3) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on
device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on
device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on
device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on
device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on
device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on
device (/dev/sda3) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on
device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on
device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on
device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on
device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on
device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on
device (/dev/sda3) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on
device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on
device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on
device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[INFO]:-gpcheck completing...
2. Use gpcheckperf to check the hardware performance of each node.
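An example invocation, assuming /data/primary as the scratch directory (adjust to your layout):
gpcheckperf -f hostfile_gpcheck -r ds -D -d /data/primary    # disk I/O and memory stream tests
gpcheckperf -f hostfile_gpcheck -r n -d /tmp                 # serial network test between hosts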
INITIALIZE GREENPLUM DATABASE SYSTEM
As gpadmin, copy the sample configuration file and set the parameters below:
su - gpadmin
mkdir -p /home/gpadmin/gpconfigs
cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/
ARRAY_NAME="EMC Greenplum DW"
SEG_PREFIX=gpseg
PORT_BASE=40000
declare -a DATA_DIRECTORY=(/data/primary /data/primary /data/primary)
MASTER_HOSTNAME=hd1
MASTER_DIRECTORY=/data/master
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
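The gpinitsystem command below references gpconfigs/hostfile_gpinitsystem, which is not shown here. Judging from the segment layout in the output that follows (three primaries on each of hd1-hd4), it would list all four hosts:
vi gpconfigs/hostfile_gpinitsystem
hd1
hd2
hd3
hd4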
gpinitsystem -c gpconfigs/gpinitsystem_config -h gpconfigs/hostfile_gpinitsystem
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking configuration
parameters, please wait...
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Reading Greenplum configuration
file gpconfigs/gpinitsystem_config
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Locale has not been set in
gpconfigs/gpinitsystem_config, will set to default value
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Locale set to en_US.utf8
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[WARN]:-Master hostname hd1 does not
match hostname output
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking to see if hd1 can be
resolved on this host
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Can resolve hd1 to this host
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-No DATABASE_NAME set, will
exit following template1 updates
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-MASTER_MAX_CONNECT not
set, will set to default value 250
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking configuration
parameters, Completed
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Commencing multi-home checks,
please wait...
....
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Configuring build for standard
array
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Commencing multi-home checks,
Completed
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Building primary segment
instance array, please wait...
............
20150208:16:15:47:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking Master host
20150208:16:15:47:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking new segment hosts,
please wait...
............
20150208:16:15:54:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking new segment hosts,
Completed
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Greenplum Database Creation
Parameters
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:---------------------------------------
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master Configuration
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:---------------------------------------
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master instance name = EMC
Greenplum DW
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master hostname = hd1
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master port = 5432
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master instance dir =
/data/master/gpseg-1
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master LOCALE =
en_US.utf8
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Greenplum segment prefix =
gpseg
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master Database =
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master connections = 250
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master buffers =
128000kB
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Segment connections = 750
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Segment buffers =
128000kB
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checkpoint segments = 8
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Encoding = UNICODE
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Postgres param file = Off
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Initdb to be used =
/usr/local/greenplum-db/./bin/initdb
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-GP_LIBRARY_PATH is =
/usr/local/greenplum-db/./lib
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Ulimit check = Passed
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Array host connect type =
Single hostname per node
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master IP address [1] = ::1
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master IP address [2] =
10.10.10.1
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master IP address [3] =
172.16.57.230
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master IP address [4] =
fe80::20c:29ff:fe89:1623
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master IP address [5] =
fe80::20c:29ff:fe89:162d
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Standby Master = Not
Configured
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Primary segment # = 3
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Total Database segments = 12
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Trusted shell = ssh
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Number segment hosts = 4
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Mirroring config = OFF
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:----------------------------------------
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Greenplum Primary Segment
Configuration
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:----------------------------------------
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd1 /data/primary/gpseg0
40000 2 0
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd1 /data/primary/gpseg1
40001 3 1
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd1 /data/primary/gpseg2
40002 4 2
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd2 /data/primary/gpseg3
40000 5 3
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd2 /data/primary/gpseg4
40001 6 4
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd2 /data/primary/gpseg5
40002 7 5
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd3 /data/primary/gpseg6
40000 8 6
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd3 /data/primary/gpseg7
40001 9 7
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd3 /data/primary/gpseg8
40002 10 8
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd4 /data/primary/gpseg9
40000 11 9
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd4 /data/primary/gpseg10
40001 12 10
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd4 /data/primary/gpseg11
40002 13 11
Continue with Greenplum creation Yy/Nn>
y
20150208:16:15:58:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Building the Master instance
database, please wait...
20150208:16:16:06:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Starting the Master in admin
mode
20150208:16:16:15:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Commencing parallel build of
primary segment instances
20150208:16:16:15:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Spawning parallel processes
batch [1], please wait...
............
20150208:16:16:17:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Waiting for parallel processes
batch [1], please wait...
..............................................
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:----------------------------------------------
--
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Parallel process exit status
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:----------------------------------------------
--
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Total processes marked as
completed = 12
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Total processes marked as killed
= 0
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Total processes marked as failed
= 0
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:----------------------------------------------
--
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Deleting distributed backout files
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Removing back out file
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-No errors generated from parallel
processes
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Restarting the Greenplum
instance in production mode
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Starting gpstop with args: -a -i -m -d
/data/master/gpseg-1
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Gathering information and validating the
environment...
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog
information
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Obtaining Segment details from
master...
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Greenplum Version: 'postgres
(Greenplum Database) 4.3.4.0 build 1'
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-There are 0 connections to the
database
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Commencing Master instance
shutdown with mode='immediate'
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Master host=hd1.vm
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Commencing Master instance
shutdown with mode=immediate
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Master segment instance
directory=/data/master/gpseg-1
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Attempting forceful termination of any
leftover master process
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Terminating processes for segment
/data/master/gpseg-1
20150208:16:17:06:047010 gpstart:hd1:gpadmin-[INFO]:-Starting gpstart with args: -a -d
/data/master/gpseg-1
20150208:16:17:06:047010 gpstart:hd1:gpadmin-[INFO]:-Gathering information and validating
the environment...
20150208:16:17:06:047010 gpstart:hd1:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres
(Greenplum Database) 4.3.4.0 build 1'
20150208:16:17:06:047010 gpstart:hd1:gpadmin-[INFO]:-Greenplum Catalog Version:
'201310150'
20150208:16:17:06:047010 gpstart:hd1:gpadmin-[INFO]:-Starting Master instance in admin mode
20150208:16:17:07:047010 gpstart:hd1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog
information
20150208:16:17:07:047010 gpstart:hd1:gpadmin-[INFO]:-Obtaining Segment details from
master...
20150208:16:17:07:047010 gpstart:hd1:gpadmin-[INFO]:-Setting new master era
20150208:16:17:07:047010 gpstart:hd1:gpadmin-[INFO]:-Master Started...
20150208:16:17:07:047010 gpstart:hd1:gpadmin-[INFO]:-Shutting down master
20150208:16:17:08:047010 gpstart:hd1:gpadmin-[INFO]:-Commencing parallel segment instance
startup, please wait...
..
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-Process results...
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-----------------------------------------------------
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:- Successful segment starts
= 12
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:- Failed segment starts
= 0
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:- Skipped segment starts (segments
are marked down in configuration) = 0
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-----------------------------------------------------
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-Successfully started 12 of 12 segment
instances
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-----------------------------------------------------
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-Starting Master instance hd1.vm
directory /data/master/gpseg-1
20150208:16:17:11:047010 gpstart:hd1:gpadmin-[INFO]:-Command pg_ctl reports Master
hd1.vm instance active
20150208:16:17:12:047010 gpstart:hd1:gpadmin-[INFO]:-No standby master configured.
skipping...
20150208:16:17:12:047010 gpstart:hd1:gpadmin-[INFO]:-Database successfully started
20150208:16:17:12:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Completed restart of Greenplum
instance in production mode
20150208:16:17:12:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Loading gp_toolkit...
20150208:16:17:13:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Scanning utility log file for any
warning messages
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[WARN]:-
*******************************************************
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[WARN]:-Scan of log file indicates that
some warnings or errors
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[WARN]:-were generated during the array
creation
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Please review contents of log file
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-
/home/gpadmin/gpAdminLogs/gpinitsystem_20150208.log
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-To determine level of criticality
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-These messages could be from a
previous run of the utility
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-that was called today!
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[WARN]:-
*******************************************************
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Greenplum Database instance
successfully created
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:----------------------------------------------
---------
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-To complete the environment
configuration, please
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-update gpadmin .bashrc file with
the following
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-1. Ensure that the
greenplum_path.sh file is sourced
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-2. Add "export
MASTER_DATA_DIRECTORY=/data/master/gpseg-1"
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:- to access the Greenplum scripts
for this instance:
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:- or, use -d /data/master/gpseg-1
option for the Greenplum scripts
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:- Example gpstate -d
/data/master/gpseg-1
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Script log file =
/home/gpadmin/gpAdminLogs/gpinitsystem_20150208.log
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-To remove instance, run
gpdeletesystem utility
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-To initialize a Standby Master
Segment for this Greenplum instance
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Review options for gpinitstandby
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:----------------------------------------------
---------
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-The Master /data/master/gpseg-
1/pg_hba.conf post gpinitsystem
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-has been configured to allow all
hosts within this new
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-array to intercommunicate. Any
hosts external to this
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-new array must be explicitly
added to this file
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Refer to the Greenplum Admin
support guide which is
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-located in the
/usr/local/greenplum-db/./docs directory
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:----------------------------------------------
---------
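Following the instructions printed in the log above, a minimal sketch of finishing the gpadmin environment and checking cluster state:
echo 'export MASTER_DATA_DIRECTORY=/data/master/gpseg-1' >> ~/.bashrc
source /usr/local/greenplum-db/greenplum_path.sh
gpstate -s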
RUN ERRANDS & CONFIGURATIONS
DB ENVIRONMENT CONFIGURATION
1) pg_hba.conf
As with PostgreSQL, connecting to Greenplum requires registering the database, user, and client IP address that will connect.
To place no connection restrictions on any client or database, add: host all all 0.0.0.0/0 trust
$ cd $MASTER_DATA_DIRECTORY
$ vi pg_hba.conf
# TYPE  DATABASE  USER  CIDR-ADDRESS  METHOD
# TYPE: local = connection from the master node; host = connection over IP
# DATABASE: all allows every database; to limit access to one database, enter its name
# USER: all, or a specific user name
# CIDR-ADDRESS: IP/0 allows everything; IP/8 a class A range; IP/16 a class B range;
#               IP/24 a class C range; IP/32 only that exact IP
# METHOD: md5 = password required; trust = no password required (connection succeeds
#         even with a wrong password)
# Examples
local all gpadmin trust
host all all 172.16.0.0/16 md5        # 172.16.x.x (class B), password required
host bmt all 192.168.10.0/28 trust    # any user may connect to the bmt database
                                      # from the 192.168.10.0/28 range without a password
host bmt u1234 192.168.10.101/32 md5  # u1234 may connect to the bmt database from
                                      # this exact IP only with the correct password
$ gpstop -u    # apply to the running system (does not actually restart the DB)
2) postgresql.conf
The file that collects PostgreSQL's configuration parameters.
$ cd $MASTER_DATA_DIRECTORY
$ vi postgresql.conf
# values to modify
log_duration = on
client_encoding = uhc
datestyle = 'iso, ymd'
gp_external_grant_privileges = on
max_connections=200 # 1000 on segment instances
max_prepared_transactions=200
3) .psqlrc
Applies query timing and a fetch count whenever you connect with psql.
$ cd ~
$ vi .psqlrc
\timing
\set FETCH_COUNT 10000
4) .pgpass
Stores connection passwords (host:port:database:user:password, mode 0600) so psql can connect without prompting.
5) .bash_profile
Set MASTER_DATA_DIRECTORY in .bash_profile:
source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
Commonly used monitoring queries and settings are shown below.
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
PATH=$PATH:$HOME/bin
export PATH PS1
PATH=${PATH}:/usr/sfw/bin:/usr/bin:/usr/local/bin:/usr/sbin:/opt/perl-5.8.8/bin:/usr/local/sbin
export PATH
export EDITOR=vi
## Greenplum environment settings
source /usr/local/greenplum-db/greenplum_path.sh
source /usr/local/greenplum-perfmon-web-4.0.1.0-build-1/gpperfmon_path.sh
export MASTER_DATA_DIRECTORY=/data/gpdb_master/gpseg-1  ## set to match your environment
export LC_ALL=C
## DB connection parameters
export PGPORT=5432       ## Greenplum port
export PGDATABASE=stat   ## Greenplum database name
export USER=gpadmin      ## account to connect as
## aliases for the tables
#-- check sessions
alias qq='psql -c "select now()-query_start, usename, client_addr, waiting, procpid, sess_id from pg_stat_activity where current_query not like '\''%IDLE%'\'' order by 4, 1 desc;"'
#-- check running queries
alias cq='psql -c "select now()-query_start, procpid, usename, sess_id, current_query from pg_stat_activity where current_query not like '\''%IDLE%'\'' order by 1 desc;"'
#-- check locks
alias locks='psql -c "SELECT pid, relname, locktype, mode from pg_locks, pg_class where relation=oid and relname not like '\''pg_%'\'' order by 3;"'
#-- check resource queues
alias rs='psql -c "select a.rsqname, a.rsqcountlimit as countlimit, a.rsqcountvalue as countvalue, a.rsqwaiters as waiters, a.rsqcostlimit as costlimit, a.rsqcostvalue as costvalue, b.rsqignorecostlimit as ignorecostlimit, b.rsqovercommit as overcommit from pg_resqueue_status a, pg_resqueue b where a.rsqname = b.rsqname order by 1;"'
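After sourcing the updated .bash_profile, the aliases run directly, e.g.:
$ qq    # sessions with active queries
$ rs    # resource queue usage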
DB CREATION AND RUN A TEST
[gpadmin@hd1 ~]$ createdb test
[gpadmin@hd1 ~]$ export PGDATABASE=test
[gpadmin@hd1 ~]$ psql
psql (8.2.15)
Type "help" for help.
test=# \l
List of databases
Name | Owner | Encoding | Access privileges
-----------+---------+----------+---------------------
postgres | gpadmin | UTF8 |
template0 | gpadmin | UTF8 | =c/gpadmin
: gpadmin=CTc/gpadmin
template1 | gpadmin | UTF8 | =c/gpadmin
: gpadmin=CTc/gpadmin
test | gpadmin | UTF8 |
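As a follow-up sanity check that rows actually spread across the 12 segments, a small distributed table can be created and counted per segment (gp_segment_id is a system column available on every Greenplum table):
psql -d test -c "create table t1 (id int) distributed by (id);"
psql -d test -c "insert into t1 select generate_series(1,12000);"
psql -d test -c "select gp_segment_id, count(*) from t1 group by 1 order by 1;"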
BASIC GPDB COMMANDS
GPDB STOP, START
Task                      Command
DB start                  gpstart or gpstart -a
DB stop                   gpstop -af
DB restart                gpstop -r
Reload configuration      gpstop -u
Master-only DB start      gpstart -m
Master-only DB stop       gpstop -m
psql in utility mode      PGOPTIONS='-c gp_session_role=utility' psql
GPDB CHECK STATUS
Task                      Command
DB status                 $ gpstate
DB detailed status        $ gpstate -s
Primary/mirror mapping    $ gpstate -c
Mirror info               $ gpstate -m
Standby master info       $ gpstate -f
Port info                 $ gpstate -p
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
BookNet Canada
 

Recently uploaded (20)

FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdfFIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
FIDO Alliance Osaka Seminar: FIDO Security Aspects.pdf
 
UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3UiPath Test Automation using UiPath Test Suite series, part 3
UiPath Test Automation using UiPath Test Suite series, part 3
 
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
Empowering NextGen Mobility via Large Action Model Infrastructure (LAMI): pav...
 
The Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and SalesThe Art of the Pitch: WordPress Relationships and Sales
The Art of the Pitch: WordPress Relationships and Sales
 
Key Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdfKey Trends Shaping the Future of Infrastructure.pdf
Key Trends Shaping the Future of Infrastructure.pdf
 
Essentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with ParametersEssentials of Automations: Optimizing FME Workflows with Parameters
Essentials of Automations: Optimizing FME Workflows with Parameters
 
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
GDG Cloud Southlake #33: Boule & Rebala: Effective AppSec in SDLC using Deplo...
 
Elevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object CalisthenicsElevating Tactical DDD Patterns Through Object Calisthenics
Elevating Tactical DDD Patterns Through Object Calisthenics
 
Connector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a buttonConnector Corner: Automate dynamic content and events by pushing a button
Connector Corner: Automate dynamic content and events by pushing a button
 
How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...How world-class product teams are winning in the AI era by CEO and Founder, P...
How world-class product teams are winning in the AI era by CEO and Founder, P...
 
Accelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish CachingAccelerate your Kubernetes clusters with Varnish Caching
Accelerate your Kubernetes clusters with Varnish Caching
 
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdfSmart TV Buyer Insights Survey 2024 by 91mobiles.pdf
Smart TV Buyer Insights Survey 2024 by 91mobiles.pdf
 
FIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdfFIDO Alliance Osaka Seminar: Overview.pdf
FIDO Alliance Osaka Seminar: Overview.pdf
 
Leading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdfLeading Change strategies and insights for effective change management pdf 1.pdf
Leading Change strategies and insights for effective change management pdf 1.pdf
 
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered QualitySoftware Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
Software Delivery At the Speed of AI: Inflectra Invests In AI-Powered Quality
 
Generating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using SmithyGenerating a custom Ruby SDK for your web service or Rails API using Smithy
Generating a custom Ruby SDK for your web service or Rails API using Smithy
 
Epistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI supportEpistemic Interaction - tuning interfaces to provide information for AI support
Epistemic Interaction - tuning interfaces to provide information for AI support
 
Knowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and backKnowledge engineering: from people to machines and back
Knowledge engineering: from people to machines and back
 
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
AI for Every Business: Unlocking Your Product's Universal Potential by VP of ...
 
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...Transcript: Selling digital books in 2024: Insights from industry leaders - T...
Transcript: Selling digital books in 2024: Insights from industry leaders - T...
 

greenplum installation guide - 4 node VM

vm.overcommit_memory = 2

Apply the kernel settings:

# sysctl -p

5. Add the following to /etc/security/limits.conf:

* soft nofile 65536
* hard nofile 65536
* soft nproc 131072
* hard nproc 131072
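The settings above must be present on every node. As a convenience, the loop below is a minimal sketch for spot-checking the results across the cluster; it assumes root SSH access and the hd1-hd4 hostnames used throughout this guide.

for h in hd1 hd2 hd3 hd4; do
  echo "== $h =="
  ssh root@$h 'sestatus | head -1;                  # expect: SELinux status: disabled
               /sbin/chkconfig --list iptables;     # expect: off in all runlevels
               sysctl -n kernel.shmmax vm.overcommit_memory;
               ulimit -n'                           # expect: 65536 after a fresh login
done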
6. NTP configuration

In /etc/ntp.conf on each node, point to the master node (HD1) as the preferred time source:

server HD1 prefer
server HD2

Synchronize the system clocks of all Greenplum hosts using the NTP daemon:

[root@hd1 greenplum-db]# gpssh -f hostfile_exkeys -v -e 'ntpd'
[Reset ...]
[INFO] login hd1
[INFO] login hd2
[INFO] login hd3
[INFO] login hd4
[hd1] ntpd
[hd2] ntpd
[hd3] ntpd
[hd4] ntpd
[INFO] completed successfully
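To confirm that the clocks actually converged, a check along these lines can be run from the master; this is a sketch that assumes root SSH access and the ntpq utility from the ntp package.

for h in hd1 hd2 hd3 hd4; do
  echo "== $h =="
  ssh root@$h 'date; ntpq -p | head -4'   # the peer list should show HD1 marked as preferred
done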
RUN GREENPLUM INSTALLER

Install the Greenplum media on the master node (HD1), as the root user.

-rw-r--r--. 1 gpadmin gpadmin 133791166 Feb 3 23:32 greenplum-db-4.3.4.0-build-1-RHEL5-x86_64.zip

[gpadmin@hd1 stage]$ unzip greenplum-db-4.3.4.0-build-1-RHEL5-x86_64.zip
Archive:  greenplum-db-4.3.4.0-build-1-RHEL5-x86_64.zip
  inflating: README_INSTALL
  inflating: greenplum-db-4.3.4.0-build-1-RHEL5-x86_64.bin

[root@hd1 stage]# ./greenplum-db-4.3.4.0-build-1-RHEL5-x86_64.bin
********************************************************************************
You must read and accept the Pivotal Database license agreement
before installing
********************************************************************************
*** IMPORTANT INFORMATION - PLEASE READ CAREFULLY ***

PIVOTAL GREENPLUM DATABASE END USER LICENSE AGREEMENT

IMPORTANT - READ CAREFULLY: This Software contains computer programs and other
proprietary material and information, the use of which is subject to and
expressly conditioned upon acceptance of this End User License Agreement
("EULA").

...... (license text skipped)

********************************************************************************
Do you accept the Pivotal Database license agreement? [yes|no]
********************************************************************************
yes
********************************************************************************
Provide the installation path for Greenplum Database or press ENTER to
accept the default installation path: /usr/local/greenplum-db-4.3.4.0
********************************************************************************
********************************************************************************
Install Greenplum Database into </usr/local/greenplum-db-4.3.4.0>? [yes|no]
********************************************************************************
yes
********************************************************************************
/usr/local/greenplum-db-4.3.4.0 does not exist.
Create /usr/local/greenplum-db-4.3.4.0 ? [yes|no]
(Selecting no will exit the installer)
********************************************************************************
yes
********************************************************************************
[Optional] Provide the path to a previous installation of Greenplum Database,
or press ENTER to skip this step. e.g. /usr/local/greenplum-db-4.1.1.3

This installation step will migrate any Greenplum Database extensions from the
provided path to the version currently being installed. This step is optional
and can be run later with:
gppkg --migrate <path_to_old_gphome> /usr/local/greenplum-db-4.3.4.0
********************************************************************************

Extracting product to /usr/local/greenplum-db-4.3.4.0

Skipping migration of Greenplum Database extensions...

********************************************************************************
Installation complete.
Greenplum Database is installed in /usr/local/greenplum-db-4.3.4.0

Pivotal Greenplum documentation is available
for download at http://docs.gopivotal.com/gpdb
********************************************************************************
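Before configuring the remaining hosts, it is worth confirming that the install landed where expected. A quick check, as a sketch, run as root on HD1:

ls -ld /usr/local/greenplum-db        # symlink pointing at /usr/local/greenplum-db-4.3.4.0
source /usr/local/greenplum-db/greenplum_path.sh
which gpssh gpseginstall              # both utilities should resolve from $GPHOME/bin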
INSTALL & CONFIG ALL HOSTS

Log in with root privileges and source the Greenplum environment variables:

# su -
# source /usr/local/greenplum-db/greenplum_path.sh

Create a hostfile listing every host on which Greenplum will be installed:

vi hostfile_exkeys

hd1
hd2
hd3
hd4

Before installing, exchange the root user's ssh keys:

gpssh-exkeys -f hostfile_exkeys

Install the Greenplum software on all nodes:

[root@hd1 greenplum-db]# gpseginstall -f hostfile_exkeys -u gpadmin -p changeme
20150204:00:31:30:002919 gpseginstall:hd1:root-[INFO]:-Installation Info:
link_name greenplum-db
binary_path /usr/local/greenplum-db-4.3.4.0
binary_dir_location /usr/local
binary_dir_name greenplum-db-4.3.4.0
20150204:00:31:30:002919 gpseginstall:hd1:root-[INFO]:-check cluster password access
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-de-duplicate hostnames
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-master hostname: hd1.vm
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-check for user gpadmin on cluster
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-add user gpadmin on master
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-add user gpadmin on cluster
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-chown -R gpadmin:gpadmin /usr/local/greenplum-db
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-chown -R gpadmin:gpadmin /usr/local/greenplum-db-4.3.4.0
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-rm -f /usr/local/greenplum-db-4.3.4.0.tar; rm -f /usr/local/greenplum-db-4.3.4.0.tar.gz
20150204:00:31:31:002919 gpseginstall:hd1:root-[INFO]:-cd /usr/local; tar cf greenplum-db-4.3.4.0.tar greenplum-db-4.3.4.0
20150204:00:31:34:002919 gpseginstall:hd1:root-[INFO]:-gzip /usr/local/greenplum-db-4.3.4.0.tar
20150204:00:31:52:002919 gpseginstall:hd1:root-[INFO]:-remote command: mkdir -p /usr/local
20150204:00:31:52:002919 gpseginstall:hd1:root-[INFO]:-remote command: rm -rf /usr/local/greenplum-db-4.3.4.0
20150204:00:31:53:002919 gpseginstall:hd1:root-[INFO]:-scp software to remote location
20150204:00:31:57:002919 gpseginstall:hd1:root-[INFO]:-remote command: gzip -f -d /usr/local/greenplum-db-4.3.4.0.tar.gz
20150204:00:32:04:002919 gpseginstall:hd1:root-[INFO]:-md5 check on remote location
20150204:00:32:05:002919 gpseginstall:hd1:root-[INFO]:-remote command: cd /usr/local; tar xf greenplum-db-4.3.4.0.tar
20150204:00:32:08:002919 gpseginstall:hd1:root-[INFO]:-remote command: rm -f /usr/local/greenplum-db-4.3.4.0.tar
20150204:00:32:09:002919 gpseginstall:hd1:root-[INFO]:-remote command: cd /usr/local; rm -f greenplum-db; ln -fs greenplum-db-4.3.4.0 greenplum-db
20150204:00:32:09:002919 gpseginstall:hd1:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /usr/local/greenplum-db
20150204:00:32:09:002919 gpseginstall:hd1:root-[INFO]:-remote command: chown -R gpadmin:gpadmin /usr/local/greenplum-db-4.3.4.0
20150204:00:32:10:002919 gpseginstall:hd1:root-[INFO]:-rm -f /usr/local/greenplum-db-4.3.4.0.tar.gz
20150204:00:32:10:002919 gpseginstall:hd1:root-[INFO]:-Changing system passwords ...
20150204:00:32:11:002919 gpseginstall:hd1:root-[INFO]:-exchange ssh keys for user root
20150204:00:32:12:002919 gpseginstall:hd1:root-[INFO]:-exchange ssh keys for user gpadmin
20150204:00:32:12:002919 gpseginstall:hd1:root-[INFO]:-/usr/local/greenplum-db/./sbin/gpfixuserlimts -f /etc/security/limits.conf -u gpadmin
20150204:00:32:12:002919 gpseginstall:hd1:root-[INFO]:-remote command: . /usr/local/greenplum-db/./greenplum_path.sh; /usr/local/greenplum-db/./sbin/gpfixuserlimts -f /etc/security/limits.conf -u gpadmin
20150204:00:32:13:002919 gpseginstall:hd1:root-[INFO]:-version string on master: gpssh version 4.3.4.0 build 1
20150204:00:32:13:002919 gpseginstall:hd1:root-[INFO]:-remote command: . /usr/local/greenplum-db/./greenplum_path.sh; /usr/local/greenplum-db/./bin/gpssh --version
20150204:00:32:13:002919 gpseginstall:hd1:root-[INFO]:-remote command: . /usr/local/greenplum-db-4.3.4.0/greenplum_path.sh; /usr/local/greenplum-db-4.3.4.0/bin/gpssh --version
20150204:00:32:18:002919 gpseginstall:hd1:root-[INFO]:-SUCCESS -- Requested commands completed

Validate the installation. Switch to the gpadmin user and source the environment (add this line to gpadmin's .bash_profile as well):

su - gpadmin
source /usr/local/greenplum-db/greenplum_path.sh
Exchange the gpadmin user's ssh keys:

gpssh-exkeys -f hostfile_exkeys

Verify the installation on every host:

gpssh -f hostfile_exkeys -e ls -l $GPHOME

If the installation completed normally, the Greenplum software is present on each host and gpadmin can log in to every node without entering a password.

CREATE DATA STORAGE AREAS

Create the data directories on each node.

Master Node

[root@hd1 home]# mkdir -p /data/master
[root@hd1 home]# chown -R gpadmin:gpadmin /data/master
[root@hd1 home]# source /usr/local/greenplum-db/greenplum_path.sh
[root@hd1 home]# gpssh -h hd2 -e 'mkdir -p /data/master'
[hd2] mkdir -p /data/master
[root@hd1 home]# gpssh -h hd2 -e 'chown -R gpadmin:gpadmin /data/master'
[hd2] chown -R gpadmin:gpadmin /data/master

Segment Nodes

[root@hd1 home]# vi hostfile_gpssh_segonly

hd2
hd3
hd4

gpssh -f hostfile_gpssh_segonly -e 'mkdir -p /data/primary'
gpssh -f hostfile_gpssh_segonly -e 'mkdir -p /data/mirror'
gpssh -f hostfile_gpssh_segonly -e 'chown -R gpadmin:gpadmin /data/primary'
gpssh -f hostfile_gpssh_segonly -e 'chown -R gpadmin:gpadmin /data/mirror'

In this cluster hd1 also hosts primary segments (see the gpinitsystem output later), so create /data/primary on hd1 as well:

mkdir -p /data/primary && chown -R gpadmin:gpadmin /data/primary
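Before initializing, a quick ownership check across the cluster can save a failed run; a sketch using the hostfiles defined above:

gpssh -h hd1 -h hd2 -e 'ls -ld /data/master /data/primary 2>/dev/null'
gpssh -f hostfile_gpssh_segonly -e 'ls -ld /data/primary /data/mirror'
# every directory listed should be owned by gpadmin:gpadmin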
VALIDATE SYSTEM

1. Use gpcheck to validate the OS settings of each node. Create hostfile_gpcheck listing all hosts:

hd1
hd2
hd3
hd4

[root@hd1 greenplum-db]# gpcheck -f hostfile_gpcheck -m hd1 -s hd2
20150208:16:01:27:004841 gpcheck:hd1:root-[INFO]:-dedupe hostnames
20150208:16:01:27:004841 gpcheck:hd1:root-[INFO]:-Detected platform: Generic Linux Cluster
20150208:16:01:27:004841 gpcheck:hd1:root-[INFO]:-generate data on servers
20150208:16:01:28:004841 gpcheck:hd1:root-[INFO]:-copy data files from servers
20150208:16:01:28:004841 gpcheck:hd1:root-[INFO]:-delete remote tmp files
20150208:16:01:28:004841 gpcheck:hd1:root-[INFO]:-Using gpcheck config file: /usr/local/greenplum-db/./etc/gpcheck.cnf
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on device (/dev/sda3) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd1.vm): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on device (/dev/sda3) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd2.vm): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on device (/dev/sda3) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd3.vm): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on device (sr0) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on device (sda) IO scheduler 'cfq' does not match expected value 'deadline'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on device (/dev/sda3) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on device (/dev/sda1) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on device (/dev/sda2) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[ERROR]:-GPCHECK_ERROR host(hd4.vm): on device (/dev/sda) blockdev readahead value '256' does not match expected value '16384'
20150208:16:01:28:004841 gpcheck:hd1:root-[INFO]:-gpcheck completing...

2. Use gpcheckperf to check the hardware performance of each node.
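As an illustration, the sketch below exercises gpcheckperf's disk and memory-bandwidth tests against the segment data directories, plus a network test between the hosts (the -r flag selects the tests, -d the test directory, -D verbose output):

gpcheckperf -f hostfile_gpssh_segonly -r ds -D -d /data/primary
gpcheckperf -f hostfile_gpcheck -r N -d /tmp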
INITIALIZE GREENPLUM DATABASE SYSTEM

su - gpadmin
mkdir -p /home/gpadmin/gpconfigs
cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/

Edit gpconfigs/gpinitsystem_config:

ARRAY_NAME="EMC Greenplum DW"
SEG_PREFIX=gpseg
PORT_BASE=40000
declare -a DATA_DIRECTORY=(/data/primary /data/primary /data/primary)
MASTER_HOSTNAME=hd1
MASTER_DIRECTORY=/data/master
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE

Create gpconfigs/hostfile_gpinitsystem listing the four segment hosts (hd1 through hd4, matching the segment layout shown below), then run:

gpinitsystem -c gpconfigs/gpinitsystem_config -h gpconfigs/hostfile_gpinitsystem

20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Reading Greenplum configuration file gpconfigs/gpinitsystem_config
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Locale has not been set in gpconfigs/gpinitsystem_config, will set to default value
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Locale set to en_US.utf8
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[WARN]:-Master hostname hd1 does not match hostname output
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking to see if hd1 can be resolved on this host
20150208:16:15:42:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Can resolve hd1 to this host
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-No DATABASE_NAME set, will exit following template1 updates
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-MASTER_MAX_CONNECT not set, will set to default value 250
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking configuration parameters, Completed
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Commencing multi-home checks, please wait...
....
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Configuring build for standard array
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Commencing multi-home checks, Completed
20150208:16:15:43:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Building primary segment instance array, please wait...
............
20150208:16:15:47:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking Master host
20150208:16:15:47:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking new segment hosts, please wait...
............
20150208:16:15:54:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checking new segment hosts, Completed
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Greenplum Database Creation Parameters
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:---------------------------------------
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master Configuration
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:---------------------------------------
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master instance name = EMC Greenplum DW
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master hostname = hd1
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master port = 5432
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master instance dir = /data/master/gpseg-1
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master LOCALE = en_US.utf8
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Greenplum segment prefix = gpseg
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master Database =
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master connections = 250
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master buffers = 128000kB
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Segment connections = 750
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Segment buffers = 128000kB
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Checkpoint segments = 8
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Encoding = UNICODE
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Postgres param file = Off
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Initdb to be used = /usr/local/greenplum-db/./bin/initdb
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-GP_LIBRARY_PATH is = /usr/local/greenplum-db/./lib
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Ulimit check = Passed
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Array host connect type = Single hostname per node
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master IP address [1] = ::1
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master IP address [2] = 10.10.10.1
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master IP address [3] = 172.16.57.230
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master IP address [4] = fe80::20c:29ff:fe89:1623
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Master IP address [5] = fe80::20c:29ff:fe89:162d
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Standby Master = Not Configured
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Primary segment # = 3
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Total Database segments = 12
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Trusted shell = ssh
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Number segment hosts = 4
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Mirroring config = OFF
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:----------------------------------------
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Greenplum Primary Segment Configuration
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:----------------------------------------
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd1 /data/primary/gpseg0 40000 2 0
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd1 /data/primary/gpseg1 40001 3 1
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd1 /data/primary/gpseg2 40002 4 2
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd2 /data/primary/gpseg3 40000 5 3
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd2 /data/primary/gpseg4 40001 6 4
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd2 /data/primary/gpseg5 40002 7 5
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd3 /data/primary/gpseg6 40000 8 6
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd3 /data/primary/gpseg7 40001 9 7
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd3 /data/primary/gpseg8 40002 10 8
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd4 /data/primary/gpseg9 40000 11 9
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd4 /data/primary/gpseg10 40001 12 10
20150208:16:15:55:012179 gpinitsystem:hd1:gpadmin-[INFO]:-hd4 /data/primary/gpseg11 40002 13 11

Continue with Greenplum creation Yy/Nn> y

20150208:16:15:58:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Building the Master instance database, please wait...
20150208:16:16:06:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Starting the Master in admin mode
20150208:16:16:15:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20150208:16:16:15:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Spawning parallel processes batch [1], please wait...
............
20150208:16:16:17:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
..............................................
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:------------------------------------------------
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Parallel process exit status
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:------------------------------------------------
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Total processes marked as completed = 12
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Total processes marked as killed = 0
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Total processes marked as failed = 0
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:------------------------------------------------
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Deleting distributed backout files
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Removing back out file
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-No errors generated from parallel processes
20150208:16:17:05:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Starting gpstop with args: -a -i -m -d /data/master/gpseg-1
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Gathering information and validating the environment...
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Obtaining Segment details from master...
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.3.4.0 build 1'
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-There are 0 connections to the database
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='immediate'
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Master host=hd1.vm
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=immediate
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Master segment instance directory=/data/master/gpseg-1
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20150208:16:17:05:046923 gpstop:hd1:gpadmin-[INFO]:-Terminating processes for segment /data/master/gpseg-1
20150208:16:17:06:047010 gpstart:hd1:gpadmin-[INFO]:-Starting gpstart with args: -a -d /data/master/gpseg-1
20150208:16:17:06:047010 gpstart:hd1:gpadmin-[INFO]:-Gathering information and validating the environment...
20150208:16:17:06:047010 gpstart:hd1:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.3.4.0 build 1'
20150208:16:17:06:047010 gpstart:hd1:gpadmin-[INFO]:-Greenplum Catalog Version: '201310150'
20150208:16:17:06:047010 gpstart:hd1:gpadmin-[INFO]:-Starting Master instance in admin mode
20150208:16:17:07:047010 gpstart:hd1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20150208:16:17:07:047010 gpstart:hd1:gpadmin-[INFO]:-Obtaining Segment details from master...
20150208:16:17:07:047010 gpstart:hd1:gpadmin-[INFO]:-Setting new master era
20150208:16:17:07:047010 gpstart:hd1:gpadmin-[INFO]:-Master Started...
20150208:16:17:07:047010 gpstart:hd1:gpadmin-[INFO]:-Shutting down master
20150208:16:17:08:047010 gpstart:hd1:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
..
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-Process results...
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-----------------------------------------------------
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:- Successful segment starts = 12
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:- Failed segment starts = 0
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:- Skipped segment starts (segments are marked down in configuration) = 0
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-----------------------------------------------------
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-Successfully started 12 of 12 segment instances
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-----------------------------------------------------
20150208:16:17:10:047010 gpstart:hd1:gpadmin-[INFO]:-Starting Master instance hd1.vm directory /data/master/gpseg-1
20150208:16:17:11:047010 gpstart:hd1:gpadmin-[INFO]:-Command pg_ctl reports Master hd1.vm instance active
20150208:16:17:12:047010 gpstart:hd1:gpadmin-[INFO]:-No standby master configured. skipping...
20150208:16:17:12:047010 gpstart:hd1:gpadmin-[INFO]:-Database successfully started
20150208:16:17:12:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
20150208:16:17:12:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Loading gp_toolkit...
20150208:16:17:13:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Scanning utility log file for any warning messages
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[WARN]:-*******************************************************
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[WARN]:-Scan of log file indicates that some warnings or errors
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[WARN]:-were generated during the array creation
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Please review contents of log file
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-/home/gpadmin/gpAdminLogs/gpinitsystem_20150208.log
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-To determine level of criticality
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-These messages could be from a previous run of the utility
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-that was called today!
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[WARN]:-*******************************************************
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Greenplum Database instance successfully created
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-------------------------------------------------------
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-To complete the environment configuration, please
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-update gpadmin .bashrc file with the following
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-1. Ensure that the greenplum_path.sh file is sourced
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-2. Add "export MASTER_DATA_DIRECTORY=/data/master/gpseg-1"
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:- to access the Greenplum scripts for this instance:
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:- or, use -d /data/master/gpseg-1 option for the Greenplum scripts
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:- Example gpstate -d /data/master/gpseg-1
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Script log file = /home/gpadmin/gpAdminLogs/gpinitsystem_20150208.log
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-To remove instance, run gpdeletesystem utility
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-To initialize a Standby Master Segment for this Greenplum instance
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Review options for gpinitstandby
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-------------------------------------------------------
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-The Master /data/master/gpseg-1/pg_hba.conf post gpinitsystem
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-has been configured to allow all hosts within this new
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-array to intercommunicate. Any hosts external to this
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-new array must be explicitly added to this file
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-Refer to the Greenplum Admin support guide which is
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-located in the /usr/local/greenplum-db/./docs directory
20150208:16:17:14:012179 gpinitsystem:hd1:gpadmin-[INFO]:-------------------------------------------------------
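Once gpinitsystem reports success, a short sanity check confirms the cluster is really up; a sketch, assuming MASTER_DATA_DIRECTORY is exported as the log above suggests:

export MASTER_DATA_DIRECTORY=/data/master/gpseg-1
gpstate -s | head -20                      # detailed master and segment status
psql -d template1 -c 'select version();'   # should report Greenplum Database 4.3.4.0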
RUN ERRANDS & CONFIGURATIONS

DB ENVIRONMENT CONFIGURATION

1) pg_hba.conf

As in PostgreSQL, every database, user, and client IP that will connect to Greenplum must be registered here. To leave access unrestricted for all clients and databases, enter:

host all all 0.0.0.0/0 trust

$ cd $MASTER_DATA_DIRECTORY
$ vi pg_hba.conf

# TYPE DATABASE USER CIDR-ADDRESS METHOD
# TYPE: local = connection on the master node itself, host = connection over IP
# DATABASE: all allows every database; to restrict to a specific db, enter its name
# USER: all, or a specific user name
# CIDR-ADDRESS: IP/0 allows everything, IP/8 a class A range, IP/16 a class B range,
#               IP/24 a class C range, IP/32 only the exact IP
# METHOD: md5 = password required, trust = no password required (the connection
#         succeeds even if the password is wrong)

# Examples
local all gpadmin trust
host all all 172.16.0.0/16 md5       # class B 172.16.x.x, password required
host bmt all 192.168.10.0/28 trust   # any user may connect to the bmt database
                                     # from 192.168.10.x without a password
host bmt u1234 192.168.10.101/32 md5 # u1234 may connect to the bmt database from
                                     # this exact IP only with the correct password

$ gpstop -u   # reload into the running system (no actual DB restart)
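After the reload, the new rules can be exercised from a client machine; for example, as a sketch assuming the md5 rule above and the master reachable at 10.10.10.1:

psql -h 10.10.10.1 -p 5432 -U gpadmin -d postgres -c 'select 1;'
# a client matched by a trust line connects without a password prompt;
# one matched by an md5 line is asked for the user's password first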
2) postgresql.conf

This file collects the PostgreSQL configuration parameters.

$ cd $MASTER_DATA_DIRECTORY
$ vi postgresql.conf

# values to modify
log_duration = on
client_encoding = uhc
datestyle = 'iso, ymd'
gp_external_grant_privileges = on
max_connections = 200              # 1000 on the segments
max_prepared_transactions = 200

3) .psqlrc

Applies query timing and a fetch count whenever psql connects.

$ cd ~
$ vi .psqlrc

\timing
\set FETCH_COUNT 10000

4) .pgpass
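.pgpass stores connection passwords so psql does not prompt for them. A typical entry, shown for illustration only, uses the format hostname:port:database:username:password (here with the gpadmin password set during gpseginstall; substitute the real one). The file must be readable by its owner alone:

$ vi ~/.pgpass
hd1:5432:*:gpadmin:changeme
$ chmod 600 ~/.pgpass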
5) .bash_profile

Set MASTER_DATA_DIRECTORY in .bash_profile:

source /usr/local/greenplum-db/greenplum_path.sh
export MASTER_DATA_DIRECTORY=/data/master/gpseg-1

Commonly used monitoring queries and settings are as follows.

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

PATH=$PATH:$HOME/bin
export PATH PS1
PATH=${PATH}:/usr/sfw/bin:/usr/bin:/usr/local/bin:/usr/sbin:/opt/perl-5.8.8/bin:/usr/local/sbin
export PATH
export EDITOR=vi

## Greenplum environment
source /usr/local/greenplum-db/greenplum_path.sh
source /usr/local/greenplum-perfmon-web-4.0.1.0-build-1/gpperfmon_path.sh
export MASTER_DATA_DIRECTORY=/data/gpdb_master/gpseg-1   ## adjust to each environment

export LC_ALL=C

## DB connection parameters
export PGPORT=5432       ## Greenplum port
export PGDATABASE=stat   ## Greenplum database name
export USER=gpadmin      ## account to connect as

## aliases for the tables
#-- check sessions
alias qq='psql -c "select now()-query_start, usename, client_addr, waiting, procpid, sess_id from pg_stat_activity where current_query not like '\''%IDLE%'\'' order by 4, 1 desc;"'

#-- check running queries
alias cq='psql -c "select now()-query_start, procpid, usename, sess_id, current_query from pg_stat_activity where current_query not like '\''%IDLE%'\'' order by 1 desc;"'

#-- check locks
alias locks='psql -c "SELECT pid, relname, locktype, mode from pg_locks, pg_class where relation=oid and relname not like '\''pg_%'\'' order by 3;"'

#-- check resource queues
alias rs='psql -c "select a.rsqname, a.rsqcountlimit as countlimit, a.rsqcountvalue as countvalue, a.rsqwaiters as waiters, a.rsqcostlimit as costlimit, a.rsqcostvalue as costvalue, b.rsqignorecostlimit as ignorecostlimit, b.rsqovercommit as overcommit from pg_resqueue_status a, pg_resqueue b where a.rsqname = b.rsqname order by 1;"'
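After saving, re-source the profile and the aliases become available in the shell:

source ~/.bash_profile
qq      # non-idle sessions, longest-running first
locks   # current lock holders by relation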
DB CREATION AND RUN A TEST

[gpadmin@hd1 ~]$ createdb test
[gpadmin@hd1 ~]$ export PGDATABASE=test
[gpadmin@hd1 ~]$ psql
psql (8.2.15)
Type "help" for help.

test=# \l
                List of databases
   Name    |  Owner  | Encoding |        Access privileges
-----------+---------+----------+---------------------------------
 postgres  | gpadmin | UTF8     |
 template0 | gpadmin | UTF8     | =c/gpadmin : gpadmin=CTc/gpadmin
 template1 | gpadmin | UTF8     | =c/gpadmin : gpadmin=CTc/gpadmin
 test      | gpadmin | UTF8     |
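To verify that data really distributes across all twelve segments, a small smoke test can be run against the new database; a sketch (the table name t is arbitrary):

psql -d test -c 'create table t (id int) distributed by (id);'
psql -d test -c 'insert into t select generate_series(1, 12000);'
psql -d test -c 'select gp_segment_id, count(*) from t group by 1 order by 1;'
# expect roughly 1000 rows on each of gp_segment_id 0 through 11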
BASIC GPDB COMMANDS

GPDB STOP, START

Task                            Command
DB start                        gpstart (or gpstart -a)
DB stop                         gpstop -af
DB restart                      gpstop -r
Reload configuration            gpstop -u
Start master node only          gpstart -m
Stop master node only           gpstop -m
psql in utility mode            PGOPTIONS='-c gp_session_role=utility' psql

GPDB CHECK STATUS

Task                            Command
DB status                       $ gpstate
Detailed DB status              $ gpstate -s
Primary/mirror mapping          $ gpstate -c
Mirror info                     $ gpstate -m
Standby master info             $ gpstate -f
Port info                       $ gpstate -p
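A typical restart-and-verify cycle combining the commands above, as a sketch:

gpstop -af && gpstart -a && gpstate -s | head -20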