Greenplum 6 Cluster Deployment and Installation


1. Software/Hardware Requirements and Dependency Installation

1.1 Operating System Versions

• Red Hat Enterprise Linux 64-bit 7.x (See the following Note.)

• Red Hat Enterprise Linux 64-bit 6.x

• CentOS 64-bit 7.x

• CentOS 64-bit 6.x

• Ubuntu 18.04 LTS

GP 6 can be installed on any of the operating systems listed above. Two cautions: on Red Hat 6, make sure the kernel version is 2.6.32-696 or newer; on Red Hat 7 / CentOS 7, use release 7.3 or later — releases before 7.3 have a bug that can cause the Greenplum database to hang.

Operating system used for this installation: CentOS 7.4

1.2 Hardware Configuration

| Host IP | Hostname | Role | Hardware |
| --- | --- | --- | --- |
| 192.168.100.168 | fastmdw | master | 40 cores, 256 GB RAM, 5 TB disk |
| 192.168.100.103 | fastsg1 | 2 segments | 40 cores, 256 GB RAM, 12 TB disk |
| 192.168.100.104 | fastsg2 | 2 segments | 40 cores, 256 GB RAM, 12 TB disk |
| 192.168.100.105 | fastsg3 | 2 segments | 40 cores, 256 GB RAM, 12 TB disk |

Each segment host has five disks: one is configured as a hot spare, so a failed disk does not degrade RAID performance and the server gains disk redundancy; the remaining four form a RAID 5 array.

10 GbE NICs are recommended.

1.3 Dependencies

Required packages:

apr  apr-util  bash  bzip2  curl  krb5  libcurl  libevent  libxml2  libyaml  zlib  openldap  openssh  openssl  openssl-libs (RHEL7/Centos7)  perl  readline  rsync  R  sed (used by gpinitsystem)  tar  zip

Install them all in one batch:

yum install -y apr apr-util bash bzip2 curl krb5 libcurl libevent libxml2 libyaml zlib \
  openldap openssh openssl openssl-libs perl readline rsync R sed tar zip krb5-devel

1.4 Deployment Layout

Planned layout: one master host and three segment hosts; each segment host runs two primary segments (one per physical CPU), deployed with grouped mirroring.

Grouped mode

At least 2 hosts must be added, so that each new primary and its mirror land on different machines.

Pros and cons: if one host fails, the single host holding its mirrors doubles its load; but with a large number of segment hosts, at least two host failures can be tolerated.

Spread mode

The number of hosts added must exceed the number of primaries per host by at least 1, so that the mirrors are spread evenly across the other hosts.

Pros and cons: if one host fails, the extra load is balanced across several hosts — less pressure than grouped mode; but even with many segment hosts, only one host failure can be tolerated.

2. Configure System Parameters

2.1 Firewall and SELinux

1. Disable the firewall

systemctl stop firewalld

systemctl disable firewalld

2. Disable SELinux

Set the following in /etc/selinux/config, then reboot (the reboot can be deferred until all parameter changes are done):

SELINUX=disabled

2.2 Configure /etc/hosts

Add the host names on every machine:

192.168.100.168 fastmdw

192.168.100.103 fastsg1

192.168.100.104 fastsg2

192.168.100.105 fastsg3

Batch append:

cat >> /etc/hosts << EOF

192.168.100.168 fastmdw

192.168.100.103 fastsg1

192.168.100.104 fastsg2

192.168.100.105 fastsg3

EOF

2.3 Kernel Parameters (/etc/sysctl.conf)

| Parameter | Value | Description |
| --- | --- | --- |
| kernel.shmmax | 154618822656 | Maximum size of a single shared memory segment, in bytes. Usually about half of physical RAM; somewhat larger is fine. Here 154618822656 / 1024³ = 144 GB. |
| kernel.shmmni | 8092 | System-wide maximum number of shared memory segments. The kernel default is 4096; 2-3× that is reasonable. |
| kernel.shmall | 154618822656 | Total shared memory available, in pages (not bytes). This guide reuses the shmmax byte value, which over-provisions but is harmless since it only sets an upper bound. |
| kernel.sem | 1000 10240000 400 10240 | System V IPC semaphore limits: SEMMSL (max semaphores per set), SEMMNS (system-wide max semaphores), SEMOPM (max operations per semop call), SEMMNI (system-wide max semaphore sets). SEMMSL × SEMMNI should equal SEMMNS (1000 × 10240 = 10240000). The kernel default is 250 32000 100 128. |
| kernel.sysrq | 1 | Magic SysRq debugging key: 0 = disabled, 1 = enabled. |
| kernel.core_uses_pid | 1 | Append the PID to core dump file names, which helps when debugging multi-process programs: 0 = disabled, 1 = enabled. |
| kernel.msgmnb | 65536 | Maximum size of a single message queue, in bytes (default 16384). |
| kernel.msgmax | 65536 | Maximum size of one message sent between processes, in bytes (default 8192). Messages live in kernel memory and are never swapped out, so raising this increases kernel memory use. |
| kernel.msgmni | 2048 | System-wide maximum number of message queue identifiers. |
| net.ipv4.tcp_syncookies | 1 | Enable SYN cookies when the SYN backlog overflows; mitigates small SYN floods. Default 0 (off). |
| net.ipv4.ip_forward | 0 | IP forwarding: 0 = disabled (default), 1 = enabled. |
| net.ipv4.conf.default.accept_source_route | 0 | Accept source-routed packets: 0 = reject (default), 1 = accept. |
| net.ipv4.tcp_tw_recycle | 1 | Fast recycling of TIME_WAIT sockets: 0 = disabled, 1 = enabled. (This knob only exists in older kernels such as CentOS 7's 3.10 series; it was removed in Linux 4.12.) |
| net.ipv4.tcp_max_syn_backlog | 4096 | Length of the TCP SYN queue; raise it so the system can handle more concurrent connection attempts. 4096 is typical; 2-3× that is also fine. |
| net.ipv4.conf.all.arp_filter | 1 | Controls which NIC answers an ARP request. Default 0; 1 is recommended for multi-homed hosts. |
| net.ipv4.ip_local_port_range | 10000 65535 | Ephemeral (local) port range; the default is 32768 61000. |
| net.core.netdev_max_backlog | 10000 | Maximum queue of incoming packets per device. The default of 1000 is too low for heavily loaded servers; 16384/32768/65535 are also common. |
| net.core.rmem_max | 2097152 | Maximum socket receive buffer. Reference values: 1746400 / 3492800 / 6985600. |
| net.core.wmem_max | 2097152 | Maximum socket send buffer. Reference values: 1746400 / 3492800 / 6985600. |
| vm.overcommit_memory | 2 | Overcommit policy: 0 = heuristic, 1 = always allow, 2 = refuse allocations beyond swap + overcommit_ratio% × RAM. Greenplum recommends 2. |
| vm.swappiness | 1 | The kernel's tendency to swap (0-100). It is a relative weight against reclaiming page cache, not a "start swapping at X% memory used" threshold; 1 means swap only under severe memory pressure. |
| kernel.pid_max | 655360 | System-wide maximum number of process IDs. |

Append the following to /etc/sysctl.conf, then apply it with sysctl -p:

kernel.shmmax=154618822656 

kernel.shmall=154618822656 

kernel.shmmni = 8092 

kernel.sem = 1000 10240000 400 10240 

kernel.sysrq = 1 

kernel.core_uses_pid = 1 

kernel.msgmnb = 65536 

kernel.msgmax = 65536 

kernel.msgmni = 2048 

net.ipv4.tcp_syncookies = 1

net.ipv4.ip_forward = 0

net.ipv4.conf.default.accept_source_route = 0 

net.ipv4.tcp_tw_recycle = 1 

net.ipv4.tcp_max_syn_backlog = 4096 

net.ipv4.conf.all.arp_filter = 1 

net.ipv4.ip_local_port_range = 10000 65535 

net.core.netdev_max_backlog = 10000 

net.core.rmem_max = 2097152 

net.core.wmem_max = 2097152 

vm.overcommit_memory = 2

vm.swappiness = 1

kernel.pid_max = 655360

vm.overcommit_ratio=95
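The shmmax/shmall relationship described in the table above can be sketched in a few lines of shell (assumptions: shmmax is set to half of RAM in bytes, and shmall to the same amount expressed in pages — the helper function is illustrative only):

```shell
# Derive kernel.shmmax (bytes) and kernel.shmall (pages) from RAM size.
calc_shm() {
  local mem_bytes=$1 page_size=$2
  local shmmax=$(( mem_bytes / 2 ))        # half of physical RAM, in bytes
  local shmall=$(( shmmax / page_size ))   # same coverage, in pages
  echo "kernel.shmmax = $shmmax"
  echo "kernel.shmall = $shmall"
}

# Example for a 256 GiB host with 4 KiB pages:
calc_shm $(( 256 * 1024 * 1024 * 1024 )) 4096
```

(This deployment uses 144 GB rather than exactly half, and reuses the byte value for shmall; both are safe because these settings are only upper bounds.)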

2.4 Resource Limits

Append to /etc/security/limits.conf:

* soft nofile 65536

* hard nofile 65536

* soft nproc 131072

* hard nproc 131072

"*" applies the limit to all users
nproc      maximum number of processes
nofile     maximum number of open files
On RHEL / CentOS 6, also set nproc to 131072 in /etc/security/limits.d/90-nproc.conf
On RHEL / CentOS 7, also set nproc to 131072 in /etc/security/limits.d/20-nproc.conf
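A quick way to confirm the limits above took effect — they are applied by PAM at login, so a fresh login shell is needed:

```shell
# Print the nofile and nproc limits seen by a fresh gpadmin login shell;
# expected values are 65536 and 131072 once limits.conf is in place.
su - gpadmin -c 'ulimit -n && ulimit -u'
```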

Optimize XFS mount options

Add the mount options to /etc/fstab:

UUID=2bf61abe-1e06-427e-82e4-a563cc98bb0c /data1 xfs rw,nodev,noatime,nobarrier,inode64,allocsize=16m 0 0

UUID=6475269c-51a3-4f76-85ba-d4cc5b72d74c /data2 xfs rw,nodev,noatime,nobarrier,inode64,allocsize=16m 0 0

blockdev --setra: tune read-ahead (I/O request size)

Use 16384 for general file systems; for XFS data disks, use 65536.

/sbin/blockdev --setra 65536 /dev/sdb1

/sbin/blockdev --setra 65536 /dev/sdb2

Add the --setra commands to /etc/rc.local so they persist across reboots; verify the current value with --getra:

/sbin/blockdev --getra /dev/sdb1

/sbin/blockdev --getra /dev/sdb2

Set the Linux I/O scheduler

The I/O scheduler optimizes reads and writes against the disk.

RHEL 7 ships three I/O schedulers (also called I/O elevators):

  • CFQ (Completely Fair Queuing) boosts I/O from real-time processes and uses history to predict whether an application will soon issue more I/O requests (which causes a slight tendency to idle).
  • deadline tries to provide a guaranteed latency for requests and is particularly suited to workloads where reads are more frequent than writes (one queue for reads, one for writes; I/O is scheduled by time spent in the queue).
  • noop implements a simple FIFO (first in, first out) algorithm and has the lowest CPU cost.

On RHEL 7 the default is CFQ for SATA drives and deadline for everything else, because deadline outperforms CFQ on storage faster than SATA.

Temporary change:

cat /sys/block/sdb/queue/scheduler

echo deadline > /sys/block/sdb/queue/scheduler

Permanent change:

grubby --update-kernel=ALL --args="elevator=deadline"

Disable Transparent Huge Pages (THP)

Temporary change (the bracketed value is the current setting):

cat /sys/kernel/mm/transparent_hugepage/defrag
[always] madvise never

echo never > /sys/kernel/mm/transparent_hugepage/defrag

Permanent change:

grubby --update-kernel=ALL --args="transparent_hugepage=never"

Disable RemoveIPC

Set the following in /etc/systemd/logind.conf, then restart logind:

RemoveIPC=no

systemctl restart systemd-logind

2.5 SSH Connection Limits

cat >> /etc/ssh/sshd_config  << EOF

MaxStartups 200

MaxSessions 200

EOF

systemctl restart sshd

2.6 Synchronize Cluster Clocks (NTP)
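This section is left empty in the source; a minimal sketch using chrony (an assumption — it is CentOS 7's default time daemon; here the segment hosts sync to the master, and the master to an upstream pool):

```shell
# On every host: install and enable the time daemon.
yum install -y chrony
systemctl enable chronyd

# On the segment hosts only: add the master as the time source.
echo 'server fastmdw iburst' >> /etc/chrony.conf

systemctl restart chronyd
chronyc sources    # verify the selected source on each host
```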

2.7 Check the Character Set

Check that every host uses the same locale:

[root@kpianalyzers01 ~]$ pssh -h txt10 -l root -P 'echo $LANG'

gpsg4: en_US.UTF-8

[1] 14:06:27 [SUCCESS] gpsg4

gpmdw: en_US.UTF-8

gpsg2: en_US.UTF-8

[2] 14:06:27 [SUCCESS] gpmdw

[3] 14:06:27 [SUCCESS] gpsg2

gpsg3: en_US.UTF-8

[4] 14:06:27 [SUCCESS] gpsg3

gpsg1: en_US.UTF-8

[5] 14:06:27 [SUCCESS] gpsg1

Otherwise, edit /etc/sysconfig/language and add RC_LANG=en_US.UTF-8 (on CentOS 7, the equivalent is LANG=en_US.UTF-8 in /etc/locale.conf).

2.8 Create the gpadmin User

Create gpadmin on the master node:

groupadd gpadmin

useradd gpadmin -r -m -g gpadmin

echo "Nbiot2021_" | passwd --stdin gpadmin

3. Install the Cluster Software

3.1 Run the Installer

Install on the master node:

yum install -y ./greenplum-db-6.12.0-rhel7-x86_64.rpm

3.2 创建hostfile_exkeys

[root@gpmdw greenplum-db]# pwd

/usr/local/greenplum-db

[root@gpmdw greenplum-db]# cat all_host

fastmdw

fastsg1

fastsg2

fastsg3

[root@gpmdw greenplum-db]# cat seg_host

fastsg1

fastsg2

fastsg3

chown -R gpadmin:gpadmin /usr/local/greenplum*

3.3 Cluster Trust: Passwordless SSH

Generate a key pair on the master and push the public key to each node:

ssh-keygen

ssh-copy-id fastsg1

ssh-copy-id fastsg2

ssh-copy-id fastsg3


Use the gpssh-exkeys tool to establish full n-to-n passwordless SSH:

source /usr/local/greenplum-db/greenplum_path.sh

gpssh-exkeys -f all_host

[root@fastmdw greenplum-db]# gpssh-exkeys -f all_host

[STEP 1 of 5] create local ID and authorize on local host

  ... /root/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] retrieving credentials from remote hosts

  ... send to fastsg1

  ... send to fastsg2

  ... send to fastsg3

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts

  ... finished key exchange with fastsg1

  ... finished key exchange with fastsg2

  ... finished key exchange with fastsg3

[INFO] completed successfully

[root@gpmdw greenplum-db]#

Test the result:

[root@gpmdw greenplum-db]# gpssh -f /usr/local/greenplum-db/all_host -e 'ls /usr/local/'

[gpsg4] ls /usr/local/

[gpsg4] bin  etc  games  include  lib  lib64  libexec  sbin  share  src

[gpmdw] ls /usr/local/

[gpmdw] bin  games         greenplum-db-6.12.0  lib    libexec  share

[gpmdw] etc  greenplum-db  include              lib64  sbin     src

[gpsg2] ls /usr/local/

[gpsg2] bin  etc  games  include  lib  lib64  libexec  sbin  share  src

[gpsg3] ls /usr/local/

[gpsg3] bin  etc  games  include  lib  lib64  libexec  sbin  share  src

[gpsg1] ls /usr/local/

[gpsg1] bin  etc  games  include  lib  lib64  libexec  sbin  share  src

3.4 Sync Master Configuration to All Hosts

3.4.1 Create the gpadmin user on all segment hosts

gpssh -f seg_host -e 'groupadd gpadmin;useradd gpadmin -r -m -g gpadmin;echo "Nbiot2021_" | passwd --stdin gpadmin;'

[root@fastmdw greenplum-db]# gpssh -f seg_host -e 'groupadd gpadmin;useradd gpadmin -r -m -g gpadmin;echo "Nbiot2021_" | passwd --stdin gpadmin;'

[fastsg3] groupadd gpadmin;useradd gpadmin -r -m -g gpadmin;echo "Nbiot2021_" | passwd --stdin gpadmin;

[fastsg3] Changing password for user gpadmin.

[fastsg3] passwd: all authentication tokens updated successfully.

[fastsg2] groupadd gpadmin;useradd gpadmin -r -m -g gpadmin;echo "Nbiot2021_" | passwd --stdin gpadmin;

[fastsg2] Changing password for user gpadmin.

[fastsg2] passwd: all authentication tokens updated successfully.

[fastsg1] groupadd gpadmin;useradd gpadmin -r -m -g gpadmin;echo "Nbiot2021_" | passwd --stdin gpadmin;

[fastsg1] Changing password for user gpadmin.

[fastsg1] passwd: all authentication tokens updated successfully.

3.4.2 Passwordless login for gpadmin

su - gpadmin

source  /usr/local/greenplum-db/greenplum_path.sh

ssh-keygen

ssh-copy-id fastsg1

ssh-copy-id fastsg2

ssh-copy-id fastsg3

gpssh-exkeys -f /usr/local/greenplum-db/all_host

[gpadmin@fastmdw ~]$ gpssh-exkeys -f /usr/local/greenplum-db/all_host

[STEP 1 of 5] create local ID and authorize on local host

  ... /home/gpadmin/.ssh/id_rsa file exists ... key generation skipped

[STEP 2 of 5] keyscan all hosts and update known_hosts file

[STEP 3 of 5] retrieving credentials from remote hosts

  ... send to fastsg1

  ... send to fastsg2

  ... send to fastsg3

[STEP 4 of 5] determine common authentication file content

[STEP 5 of 5] copy authentication files to all remote hosts

  ... finished key exchange with fastsg1

  ... finished key exchange with fastsg2

  ... finished key exchange with fastsg3

[INFO] completed successfully

3.4.3 Set the Greenplum environment variables for gpadmin on all hosts

On the master node, as gpadmin:

cat >> /home/gpadmin/.bash_profile << EOF

source /usr/local/greenplum-db/greenplum_path.sh

EOF

cat >> /home/gpadmin/.bashrc << EOF

source /usr/local/greenplum-db/greenplum_path.sh

EOF

Distribute to the other nodes:

source /usr/local/greenplum-db/greenplum_path.sh

gpscp -f /usr/local/greenplum-db/seg_host /home/gpadmin/.bash_profile  gpadmin@=:/home/gpadmin/.bash_profile

gpscp -f /usr/local/greenplum-db/seg_host /home/gpadmin/.bashrc gpadmin@=:/home/gpadmin/.bashrc

3.5 Install on the Remaining Cluster Nodes

Run as root:

link_name='greenplum-db'                      # symlink name

binary_dir_location='/usr/local'              # install location

binary_dir_name='greenplum-db-6.12.0'         # install directory

binary_path='/usr/local/greenplum-db-6.12.0'  # full path

Package the install directory:

chown -R gpadmin:gpadmin $binary_path

cd $binary_dir_location; tar cf ${binary_dir_name}.tar ${binary_dir_name}

gzip ${binary_path}.tar

# distribute to the segment hosts

gpssh -f ${binary_path}/seg_host -e "mkdir -p ${binary_dir_location};rm -rf ${binary_path};rm -rf ${binary_path}.tar;rm -rf ${binary_path}.tar.gz"

gpscp -f ${binary_path}/seg_host ${binary_path}.tar.gz root@=:${binary_path}.tar.gz

gpssh -f ${binary_path}/seg_host -e "cd ${binary_dir_location};gzip -f -d ${binary_path}.tar.gz;tar xf ${binary_path}.tar"

gpssh -f ${binary_path}/seg_host -e "rm -rf ${binary_path}.tar;rm -rf ${binary_path}.tar.gz;rm -f ${binary_dir_location}/${link_name}"

gpssh -f ${binary_path}/seg_host -e "ln -fs ${binary_dir_location}/${binary_dir_name} ${binary_dir_location}/${link_name}"

gpssh -f ${binary_path}/seg_host -e "chown -R gpadmin:gpadmin ${binary_dir_location}/${link_name};chown -R gpadmin:gpadmin ${binary_dir_location}/${binary_dir_name}"

gpssh -f ${binary_path}/seg_host -e "source ${binary_path}/greenplum_path.sh"

gpssh -f ${binary_path}/seg_host -e "cd ${binary_dir_location};ls -l"

3.5.1 Create cluster data directories

3.5.1.1 Create the master data directory

mkdir -p /home/greenplum/master

chown -R gpadmin:gpadmin /home/greenplum

3.5.1.2 Create the segment data directories

This plan runs two primary segments and two mirrors on each host.

source /usr/local/greenplum-db/greenplum_path.sh

gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /data1/primary'

gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /data1/mirror'

gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /data2/primary'

gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /data2/mirror'

gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /data1/standby'

gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /data2/standby'

gpssh -f /usr/local/greenplum-db/seg_host -e 'chown -R gpadmin:gpadmin /data*'

3.6 Cluster Performance Checks

Network performance test:

[root@gpmdw local]# gpcheckperf -f /usr/local/greenplum-db/seg_host -r N -d /tmp

/usr/local/greenplum-db-6.12.0/bin/gpcheckperf -f /usr/local/greenplum-db/seg_host -r N -d /tmp

-------------------

--  NETPERF TEST

-------------------

NOTICE: -t is deprecated, and has no effect

NOTICE: -f is deprecated, and has no effect

NOTICE: -t is deprecated, and has no effect

NOTICE: -f is deprecated, and has no effect

NOTICE: -t is deprecated, and has no effect

NOTICE: -f is deprecated, and has no effect

NOTICE: -t is deprecated, and has no effect

NOTICE: -f is deprecated, and has no effect

====================

==  RESULT 2020-12-29T15:00:12.458977

====================

Netperf bisection bandwidth test

gpsg1 -> gpsg2 = 1116.550000

gpsg3 -> gpsg4 = 1121.200000

gpsg2 -> gpsg1 = 1120.100000

gpsg4 -> gpsg3 = 1113.760000

Summary:

sum = 4471.61 MB/sec

min = 1113.76 MB/sec

max = 1121.20 MB/sec

avg = 1117.90 MB/sec

median = 1120.10 MB/sec

I/O test:

gpcheckperf -f /usr/local/greenplum-db/seg_host -r ds -D   -d /data1/primary

Verify that the clocks agree across hosts:

gpssh -f /usr/local/greenplum-db/all_host -e 'date'

4. Initialize the Cluster

4.1 Write the Initialization Config File

Copy the template:

su - gpadmin

mkdir -p /home/gpadmin/gpconfigs

cp $GPHOME/docs/cli_help/gpconfigs/gpinitsystem_config /home/gpadmin/gpconfigs/gpinitsystem_config

Note: to choose PORT_BASE, review the port range specified by the net.ipv4.ip_local_port_range parameter in /etc/sysctl.conf.

The main parameters to change (the values below are the template defaults; the values actually used in this deployment appear in the full file further down):

ARRAY_NAME="Greenplum Data Platform"

SEG_PREFIX=gpseg

PORT_BASE=6000

declare -a DATA_DIRECTORY=(/data1/primary  /data2/primary )

MASTER_HOSTNAME=mdw

MASTER_DIRECTORY=/opt/greenplum/data/master

MASTER_PORT=5432

TRUSTED_SHELL=ssh

CHECK_POINT_SEGMENTS=8

ENCODING=UNICODE

MIRROR_PORT_BASE=7000

declare -a MIRROR_DATA_DIRECTORY=(/data1/mirror /data2/mirror)

DATABASE_NAME=gpdw

[gpadmin@gpmdw gpconfigs]$ cat /home/gpadmin/gpconfigs/gpinitsystem_config

# FILE NAME: gpinitsystem_config

# Configuration file needed by the gpinitsystem

################################################

#### REQUIRED PARAMETERS

################################################

#### Name of this Greenplum system enclosed in quotes.

ARRAY_NAME="Greenplum Data Platform"

#### Naming convention for utility-generated data directories.

SEG_PREFIX=gpseg

#### Base number by which primary segment port numbers

#### are calculated.

PORT_BASE=16000

#### File system location(s) where primary segment data directories

#### will be created. The number of locations in the list dictate

#### the number of primary segments that will get created per

#### physical host (if multiple addresses for a host are listed in

#### the hostfile, the number of segments will be spread evenly across

#### the specified interface addresses).

declare -a DATA_DIRECTORY=(/data1/primary /data2/primary  )

#### OS-configured hostname or IP address of the master host.

MASTER_HOSTNAME=fastmdw

#### File system location where the master data directory

#### will be created.

MASTER_DIRECTORY=/home/greenplum/master

#### Port number for the master instance.

MASTER_PORT=5432

#### Shell utility used to connect to remote hosts.

TRUSTED_SHELL=ssh

#### Maximum log file segments between automatic WAL checkpoints.

CHECK_POINT_SEGMENTS=8

#### Default server-side character set encoding.

ENCODING=UNICODE

################################################

#### OPTIONAL MIRROR PARAMETERS

################################################

#### Base number by which mirror segment port numbers

#### are calculated.

MIRROR_PORT_BASE=17000

#### File system location(s) where mirror segment data directories

#### will be created. The number of mirror locations must equal the

#### number of primary locations as specified in the

#### DATA_DIRECTORY parameter.

declare -a MIRROR_DATA_DIRECTORY=(/data1/mirror  /data2/mirror)

################################################

#### OTHER OPTIONAL PARAMETERS

################################################

#### Create a database of this name after initialization.

DATABASE_NAME=fastgp6

#### Specify the location of the host address file here instead of

#### with the -h option of gpinitsystem.

#MACHINE_LIST_FILE=/home/gpadmin/gpconfigs/hostfile_gpinitsystem

4.2 Run the Initialization

4.2.1 Initialization command

gpinitsystem -c /home/gpadmin/gpconfigs/gpinitsystem_config -h /usr/local/greenplum-db/seg_host -S /data1/standby /data2/standby

4.2.2 Set the environment variables

source /usr/local/greenplum-db/greenplum_path.sh

export MASTER_DATA_DIRECTORY=/home/greenplum/master/gpseg-1

export PGPORT=5432     

export PGUSER=gpadmin  

export PGDATABASE=fastgp6

cat >> /home/gpadmin/.bash_profile << EOF

export MASTER_DATA_DIRECTORY=/home/greenplum/master/gpseg-1

export PGPORT=5432

export PGUSER=gpadmin

export PGDATABASE=fastgp6

EOF

cat >> /home/gpadmin/.bashrc << EOF

export MASTER_DATA_DIRECTORY=/home/greenplum/master/gpseg-1

export PGPORT=5432

export PGUSER=gpadmin

export PGDATABASE=fastgp6

EOF

Distribute to the other nodes:

gpscp -f /usr/local/greenplum-db/seg_host /home/gpadmin/.bash_profile  gpadmin@=:/home/gpadmin/.bash_profile

gpscp -f /usr/local/greenplum-db/seg_host /home/gpadmin/.bashrc gpadmin@=:/home/gpadmin/.bashrc

gpssh -f /usr/local/greenplum-db/all_host -e 'source /home/gpadmin/.bash_profile;source /home/gpadmin/.bashrc;'

4.2.3 Client Access

Configure pg_hba.conf:

vi  /home/greenplum/master/gpseg-1/pg_hba.conf

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only

# IPv4 local connections:

# IPv6 local connections:

local    all         gpadmin         ident

host     all         gpadmin         127.0.0.1/28    trust

host     all         gpadmin         172.28.25.204/32       trust

host     all         gpadmin         0.0.0.0/0   md5  # new rule: allow password login from any IP

host     all         gpadmin         ::1/128       trust

host     all         gpadmin         fe80::250:56ff:fe91:63fc/128       trust

local    replication gpadmin         ident

host     replication gpadmin         samenet       trust

4.2.4 Edit postgresql.conf

Set the listen address in postgresql.conf:

listen_addresses = '*'   # listen on all addresses (GP 6.0 sets this by default)

vi /home/greenplum/master/gpseg-1/postgresql.conf

5. Adding Segments (optional, for later expansion)

Put the hosts to be added into new_hosts_file:

[gpadmin@gpmdw ~]$  gpexpand -f new_hosts_file

20201229:16:33:38:029256 gpexpand:gpmdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.12.0 build commit:4c176763c7619fb678ce38095e6b3e8fb9548186 Open Source'

20201229:16:33:38:029256 gpexpand:gpmdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.24 (Greenplum Database 6.12.0 build commit:4c176763c7619fb678ce38095e6b3e8fb9548186 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Nov  3 2020 01:14:05'

20201229:16:33:38:029256 gpexpand:gpmdw:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state

System Expansion is used to add segments to an existing GPDB array.

gpexpand did not detect a System Expansion that is in progress.

Before initiating a System Expansion, you need to provision and burn-in

the new hardware.  Please be sure to run gpcheckperf to make sure the

new hardware is working properly.

Please refer to the Admin Guide for more information.

Would you like to initiate a new System Expansion Yy|Nn (default=N):

> Y

You must now specify a mirroring strategy for the new hosts.  Spread mirroring places

a given hosts mirrored segments each on a separate host.  You must be

adding more hosts than the number of segments per host to use this.

Grouped mirroring places all of a given hosts segments on a single

mirrored host.  You must be adding at least 2 hosts in order to use this.

What type of mirroring strategy would you like?

 spread|grouped (default=grouped):

>

** No hostnames were given that do not already exist in the **

** array. Additional segments will be added existing hosts. **

    By default, new hosts are configured with the same number of primary

    segments as existing hosts.  Optionally, you can increase the number

    of segments per host.

    For example, if existing hosts have two primary segments, entering a value

    of 2 will initialize two additional segments on existing hosts, and four

    segments on new hosts.  In addition, mirror segments will be added for

    these new primary segments if mirroring is enabled.

    

How many new primary segments per host do you want to add? (default=0):

> 1

Enter new primary data directory 1:

> /data2/primary

Enter new mirror data directory 1:

> /data1/mirror                      

Generating configuration file...

20201229:16:34:31:029256 gpexpand:gpmdw:gpadmin-[INFO]:-Generating input file...

Input configuration file was written to 'gpexpand_inputfile_20201229_163431'.

Please review the file and make sure that it is correct then re-run

with: gpexpand -i gpexpand_inputfile_20201229_163431

                

20201229:16:34:31:029256 gpexpand:gpmdw:gpadmin-[INFO]:-Exiting...

[gpadmin@gpmdw ~]$ ls -lrt

total 8

drwxrwxr-x. 2 gpadmin gpadmin  33 Dec 29 15:48 gpconfigs

-rw-rw-r--. 1 gpadmin gpadmin  24 Dec 29 16:28 new_hosts_file

drwxrwxr-x. 2 gpadmin gpadmin 215 Dec 29 16:32 gpAdminLogs

-rw-rw-r--. 1 gpadmin gpadmin 372 Dec 29 16:34 gpexpand_inputfile_20201229_163431

[gpadmin@gpmdw ~]$ less gpexpand_inputfile_20201229_163431

gpsg1|gpsg1|16001|/data2/primary/gpseg4|10|4|p

gpsg2|gpsg2|17001|/data1/mirror/gpseg4|15|4|m

gpsg2|gpsg2|16001|/data2/primary/gpseg5|11|5|p

gpsg3|gpsg3|17001|/data1/mirror/gpseg5|16|5|m

gpsg3|gpsg3|16001|/data2/primary/gpseg6|12|6|p

gpsg4|gpsg4|17001|/data1/mirror/gpseg6|17|6|m

gpsg4|gpsg4|16001|/data2/primary/gpseg7|13|7|p

gpsg1|gpsg1|17001|/data1/mirror/gpseg7|14|7|m

[gpadmin@gpmdw ~]$ gpexpand -i  gpexpand_inputfile_20201229_163431

20201229:16:38:28:029319 gpexpand:gpmdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 6.12.0 build commit:4c176763c7619fb678ce38095e6b3e8fb9548186 Open Source'

20201229:16:38:28:029319 gpexpand:gpmdw:gpadmin-[INFO]:-master Greenplum Version: 'PostgreSQL 9.4.24 (Greenplum Database 6.12.0 build commit:4c176763c7619fb678ce38095e6b3e8fb9548186 Open Source) on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 6.4.0, 64-bit compiled on Nov  3 2020 01:14:05'

20201229:16:38:28:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Querying gpexpand schema for current expansion state

20201229:16:38:29:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Heap checksum setting consistent across cluster

20201229:16:38:29:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Syncing Greenplum Database extensions

20201229:16:38:30:029319 gpexpand:gpmdw:gpadmin-[INFO]:-The packages on gpsg4 are consistent.

20201229:16:38:31:029319 gpexpand:gpmdw:gpadmin-[INFO]:-The packages on gpsg1 are consistent.

20201229:16:38:31:029319 gpexpand:gpmdw:gpadmin-[INFO]:-The packages on gpsg2 are consistent.

20201229:16:38:32:029319 gpexpand:gpmdw:gpadmin-[INFO]:-The packages on gpsg3 are consistent.

20201229:16:38:32:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Locking catalog

20201229:16:38:33:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Locked catalog

20201229:16:38:33:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Creating segment template

20201229:16:38:34:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Copying postgresql.conf from existing segment into template

20201229:16:38:35:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Copying pg_hba.conf from existing segment into template

20201229:16:38:36:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Creating schema tar file

20201229:16:38:36:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Distributing template tar file to new hosts

20201229:16:38:37:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Configuring new segments (primary)

20201229:16:38:37:029319 gpexpand:gpmdw:gpadmin-[INFO]:-{'gpsg4': '/data2/primary/gpseg7:16001:true:false:13:7::-1:', 'gpsg1': '/data2/primary/gpseg4:16001:true:false:10:4::-1:', 'gpsg2': '/data2/primary/gpseg5:16001:true:false:11:5::-1:', 'gpsg3': '/data2/primary/gpseg6:16001:true:false:12:6::-1:'}

20201229:16:38:45:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Cleaning up temporary template files

20201229:16:38:45:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Cleaning up databases in new segments.

20201229:16:38:47:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Unlocking catalog

20201229:16:38:47:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Unlocked catalog

20201229:16:38:47:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Creating expansion schema

20201229:16:38:47:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database template1

20201229:16:38:47:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database postgres

20201229:16:38:47:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Populating gpexpand.status_detail with data from database gp6db

20201229:16:38:47:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Starting new mirror segment synchronization

20201229:16:38:57:029319 gpexpand:gpmdw:gpadmin-[INFO]:-************************************************

20201229:16:38:57:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Initialization of the system expansion complete.

20201229:16:38:57:029319 gpexpand:gpmdw:gpadmin-[INFO]:-To begin table expansion onto the new segments

20201229:16:38:57:029319 gpexpand:gpmdw:gpadmin-[INFO]:-rerun gpexpand

20201229:16:38:57:029319 gpexpand:gpmdw:gpadmin-[INFO]:-************************************************

20201229:16:38:57:029319 gpexpand:gpmdw:gpadmin-[INFO]:-Exiting...

6. Database Tuning

6.1 Check memory allocation (gp_vmem_protect_limit)

gpconfig -s gp_vmem_protect_limit

[gpadmin@gpmdw ~]$ gpconfig -s gp_vmem_protect_limit

Values on all segments are consistent

GUC          : gp_vmem_protect_limit

Master  value: 8192

Segment value: 8192

As shown above, the segments are running with the default of 8192 MB. The parameter can be raised in line with the machine's RAM:

1. Formula: (RAM + swap) × 0.9 / number of segment instances per host

2. Master node: 254 GB of RAM and a single instance → 254 × 0.9 / 1 ≈ 228 GB (233472 MB)

3. Segment host: 254 GB of RAM and 4 instances (2 primaries + 2 mirrors) → 254 × 0.9 / 4 ≈ 57 GB (58520 MB)
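The rule above in executable form (a sketch — integer shell arithmetic, output in MB; the helper function is illustrative, not a Greenplum utility):

```shell
# (RAM_gb + swap_gb) * 0.9 / segment-instances-per-host, converted to MB.
vmem_limit_mb() {
  local mem_gb=$1 swap_gb=$2 segs=$3
  echo $(( (mem_gb + swap_gb) * 9 / 10 * 1024 / segs ))
}

vmem_limit_mb 254 0 1   # master host, 1 instance  -> 233472 (MB)
vmem_limit_mb 254 0 4   # segment host, 4 instances -> 58368 (MB; the guide rounds to 58520)
```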

Run the following on the master node:

gpconfig -c gp_vmem_protect_limit -m 233472 -v 58520

-c : name of the parameter to change

-m : value for the master (and standby master); normally used together with -v

-v : value applied to all primary and mirror segments (and to the master too, unless -m is given)

[gpadmin@gpmdw ~]$ gpconfig -c gp_vmem_protect_limit -m 233472 -v 58520

20201230:09:04:25:073428 gpconfig:gpmdw:gpadmin-[INFO]:-completed successfully with parameters '-c gp_vmem_protect_limit -m 233472 -v 58520'

[gpadmin@gpmdw ~]$

6.2 shared_buffers (shared memory buffers)

[gpadmin@gpmdw ~]$ gpconfig -s shared_buffers

Values on all segments are consistent

GUC          : shared_buffers

Master  value: 125MB

Segment value: 125MB

The default is 125 MB.

shared_buffers is only tuned on the segment nodes; it is the memory buffer cache for disk reads and writes. Start with a modest value — for example 15% of total RAM — then increase gradually while monitoring performance gains and swap usage. Do not set it too high, or the instances may fail to start.

Total RAM 254 GB × 0.15 ≈ 38 GB (38912 MB):

gpconfig -c shared_buffers -v 38912MB

6.3 max_connections

gpconfig -s max_connections

Currently the master is 250 and the segments are 750:

[gpadmin@gpmdw ~]$ gpconfig -s max_connections

Values on all segments are consistent

GUC          : max_connections

Master  value: 250

Segment value: 750

The segment value is usually set to 5-10× the master value (for example: master/standby 200, segments 1200).

Here the master is set to 800 and the segments to 4000:

gpconfig -c max_connections -v 4000 -m 800

gpconfig -c max_prepared_transactions -v 800

[gpadmin@gpmdw ~]$ gpconfig -c max_connections -v 4000 -m 800

20201230:09:20:30:074081 gpconfig:gpmdw:gpadmin-[INFO]:-completed successfully with parameters '-c max_connections -v 2500 -m 500'

[gpadmin@gpmdw ~]$ gpconfig -c max_prepared_transactions -v 4000 -m 800

20201230:09:24:22:074301 gpconfig:gpmdw:gpadmin-[INFO]:-completed successfully with parameters '-c max_prepared_transactions -v 500 -m 2500'

max_prepared_transactions must be at least as large as max_connections, so that every session can have one pending prepared transaction.

When running a standby server, this parameter must be set to the same or a higher value than on the master; otherwise the standby will not allow queries.

[gpadmin@gpmdw ~]$ gpconfig -s max_prepared_transactions

Values on all segments are consistent

GUC          : max_prepared_transactions

Master  value: 250

Segment value: 250

6.4 block_size (disk block size)

gpconfig -s block_size

[gpadmin@gpmdw ~]$ gpconfig -s block_size

Values on all segments are consistent

GUC          : block_size

Master  value: 32768

Segment value: 32768

This parameter is the storage block size for table data: 32768 bytes (32 KB) by default. The valid range is 8192 to 2097152 (2 MB), and the value must be a multiple of 8192. It is left unchanged here.

6.5 work_mem

work_mem is the memory a segment uses for sort and hash operations. When sorting a large table, the database sorts it in chunks of this size and writes the intermediate results to temporary files, which are later merge-sorted; raising work_mem therefore reduces the number of temporary files and speeds up sorting. Set too high, it can trigger swapping, so tune it with care — a reasonable starting point is 5% of total RAM.

[gpadmin@gpmdw ~]$ gpconfig -s work_mem

Values on all segments are consistent

GUC          : work_mem

Master  value: 32MB

Segment value: 32MB

254 GB × 0.05 ≈ 12 GB (12288 MB); a far more conservative 128 MB is what is actually set here:

gpconfig -c work_mem -v 128MB

[gpadmin@gpmdw ~]$ gpconfig -c work_mem -v 128MB

20201230:09:38:19:074729 gpconfig:gpmdw:gpadmin-[INFO]:-completed successfully with parameters '-c work_mem -v 128MB'

6.6 statement_mem

[gpadmin@gpmdw ~]$ gpconfig -s statement_mem

Values on all segments are consistent

GUC          : statement_mem

Master  value: 125MB

Segment value: 125MB

max_statement_mem caps the memory a single query may use; it is a guard against an oversized statement_mem causing memory overflow.

statement_mem is the memory available to each query on a segment host. It must not exceed max_statement_mem, nor the memory limit of a resource queue if one is configured. The sizing formula is (gp_vmem_protect_limit × 0.9) / max_expected_concurrent_queries.

For example, with gp_vmem_protect_limit = 58520 MB (~57 GB) on the segments, 150 expected concurrent queries, and 10% held back as headroom: (57 GB × 0.9) / 150 ≈ 0.34 GB ≈ 350 MB. The master (gp_vmem_protect_limit = 233472 MB) works out similarly, e.g. 220 GB × 0.9 / 450 ≈ 0.44 GB ≈ 450 MB. A conservative 256 MB is used for both below.
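The sizing formula can be checked with a couple of lines of shell (integer math in MB; the helper function is illustrative only):

```shell
# (gp_vmem_protect_limit_mb * 0.9) / expected concurrency, in MB.
statement_mem_mb() {
  local vmem_mb=$1 concurrency=$2
  echo $(( vmem_mb * 9 / 10 / concurrency ))
}

statement_mem_mb 58520 150    # segment hosts -> 351 (MB)
statement_mem_mb 233472 450   # master        -> 466 (MB)
```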

[gpadmin@gpmdw ~]$ gpconfig -s statement_mem

Values on all segments are consistent

GUC          : statement_mem

Master  value: 256MB

Segment value: 256MB

gpconfig -c statement_mem -v 256MB -m 256MB

[gpadmin@gpmdw ~]$ gpconfig -c statement_mem -v 256MB -m 256MB

20201230:09:55:37:075235 gpconfig:gpmdw:gpadmin-[INFO]:-completed successfully with parameters '-c statement_mem -v 256MB -m 256MB'

6.7 gp_workfile_limit_files_per_query

When a SQL query is not granted enough memory, Greenplum creates spill files (also called workfiles). By default a single query may create at most 100000 spill files, which is enough for most queries.

This parameter is that per-query cap (0 = unlimited). Limiting the number of spill files prevents a runaway query from wrecking the whole system. It is usually left unchanged.

[gpadmin@gpmdw ~]$ gpconfig -s gp_workfile_limit_files_per_query

Values on all segments are consistent

GUC          : gp_workfile_limit_files_per_query

Master  value: 100000

Segment value: 100000

6.8 gp_resqueue_priority_cpucores_per_segment

[gpadmin@gpmdw ~]$ gpconfig -s gp_resqueue_priority_cpucores_per_segment

Values on all segments are consistent

GUC          : gp_resqueue_priority_cpucores_per_segment

Master  value: 4

Segment value: 4

The number of CPU cores allotted to each segment. For example, on a 20-core machine with 4 segments, each segment gets 5 cores; the master node runs no segments, so its value reflects the whole host's CPU, e.g. 20. Adjust the parameter to each cluster's core count and segment count:

gpconfig -c gp_resqueue_priority_cpucores_per_segment -v 8 -m 30

[gpadmin@gpmdw ~]$ gpconfig -c gp_resqueue_priority_cpucores_per_segment -v 8 -m 30

20201230:10:01:18:075459 gpconfig:gpmdw:gpadmin-[INFO]:-completed successfully with parameters '-c gp_resqueue_priority_cpucores_per_segment -v 8 -m 30'
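The cores-per-segment rule from the text can be sketched as follows. The numbers are the 20-core / 4-segment example above, not this cluster's hardware:

```shell
# Rule described above: cores per segment ≈ host CPU cores / primary segments per host.
# A 20-core host running 4 segments (the example from the text) -> 5 cores per segment.
# The master host runs no segments, so its -m value can reflect all of its cores.
host_cores=20
primaries_per_host=4
seg_cores=$(( host_cores / primaries_per_host ))
echo "gp_resqueue_priority_cpucores_per_segment (segment value): ${seg_cores}"
```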

9. gp_interconnect_setup_timeout (how long to wait for interconnect setup before a query fails; left at its default here)

[gpadmin@gpmdw ~]$ gpconfig -s gp_interconnect_setup_timeout

Values on all segments are consistent

GUC          : gp_interconnect_setup_timeout

Master  value: 2h

Segment value: 2h

10. effective_cache_size

[gpadmin@gpmdw ~]$ gpconfig -s effective_cache_size

Values on all segments are consistent

GUC          : effective_cache_size

Master  value: 16GB

Segment value: 16GB

This parameter tells the PostgreSQL optimizer how much memory is available for caching data, and helps it decide whether to use an index; the larger the value, the more likely the optimizer is to choose index scans. It should therefore be set to roughly shared_buffers plus the available operating-system cache, and usually exceeds 50% of total system memory.

254G * 0.5 = 127G
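The 50% rule of thumb above, applied to this host's memory:

```shell
# Rule of thumb from the text: effective_cache_size ≈ 50% of host memory
# (roughly shared_buffers plus the available OS cache). 254GB usable RAM here.
ram_gb=254
cache_gb=$(( ram_gb / 2 ))
echo "effective_cache_size = ${cache_gb}GB"
```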


gpconfig -c effective_cache_size -v 127GB -m 127GB

[gpadmin@gpmdw ~]$ gpconfig -c effective_cache_size -v 127GB -m 127GB

20201230:10:05:12:075631 gpconfig:gpmdw:gpadmin-[INFO]:-completed successfully with parameters '-c effective_cache_size -v 127GB -m 127GB'

11. temp_buffers

Temporary buffers, used by a session to access temporary data. The default is small (32MB in this cluster, as the transcript below shows); raising it helps significantly when working with large temporary tables.

gpconfig -c temp_buffers -v 2GB -m 2GB

[gpadmin@gpmdw ~]$ gpconfig -s temp_buffers

Values on all segments are consistent

GUC          : temp_buffers

Master  value: 32MB

Segment value: 32MB

[gpadmin@gpmdw ~]$ gpconfig -c temp_buffers -v 2GB -m 2GB

20201230:10:06:40:075789 gpconfig:gpmdw:gpadmin-[INFO]:-completed successfully with parameters '-c temp_buffers -v 2GB -m 2GB'

Restart the database for the changes to take effect.

User tablespace creation

1. Create the resource queue

CREATE RESOURCE QUEUE fast_res WITH
(
  active_statements=150,
  max_cost=-1,
  min_cost=-1,
  cost_overcommit=0,
  priority=medium,
  memory_limit=-1
);

2. Create the tablespace

Run as root:

gpssh -f /usr/local/greenplum-db/all_host -e 'mkdir -p /home/greenplum/mastertbs'

gpssh -f /usr/local/greenplum-db/all_host -e 'chown -R gpadmin:gpadmin /home/greenplum'

Run as gpadmin:

gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /data1/fastdata'

gpssh -f /usr/local/greenplum-db/seg_host -e 'mkdir -p /data2/fastdata'

Then log in to the SQL command line as gpadmin and run:

CREATE TABLESPACE threea_tas LOCATION '/home/greenplum/mastertbs' WITH (content0='/data1/fastdata', content1='/data2/fastdata',content2='/data1/fastdata', content3='/data2/fastdata',content4='/data1/fastdata', content5='/data2/fastdata');

How to get the content IDs:

[gpadmin@fastmdw ~]$ gpstate -s | grep -B 2 '/primary/'

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Hostname                          = fastsg1

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Address                           = fastsg1

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Datadir                           = /data1/primary/gpseg0

--

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Hostname                          = fastsg1

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Address                           = fastsg1

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Datadir                           = /data2/primary/gpseg1

--

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Hostname                          = fastsg2

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Address                           = fastsg2

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Datadir                           = /data1/primary/gpseg2

--

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Hostname                          = fastsg2

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Address                           = fastsg2

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Datadir                           = /data2/primary/gpseg3

--

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Hostname                          = fastsg3

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Address                           = fastsg3

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Datadir                           = /data1/primary/gpseg4

--

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Hostname                          = fastsg3

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Address                           = fastsg3

20210521:09:20:33:219727 gpstate:fastmdw:gpadmin-[INFO]:-      Datadir                           = /data2/primary/gpseg5


3. Create roles

CREATE ROLE fast LOGIN

ENCRYPTED PASSWORD '12344555'

NOSUPERUSER INHERIT NOCREATEDB CREATEROLE RESOURCE QUEUE fast_res;

GRANT gpadmin TO fast WITH ADMIN OPTION;

CREATE ROLE fast_sqmdb LOGIN

ENCRYPTED PASSWORD '1234444'

NOSUPERUSER INHERIT NOCREATEDB NOCREATEROLE RESOURCE QUEUE fast_res;

GRANT fast TO fast_sqmdb WITH ADMIN OPTION;

4. Create the schema, grant privileges, and set the default tablespace

CREATE SCHEMA fast AUTHORIZATION fast_sqmdb;

GRANT ALL ON SCHEMA fast TO fast_sqmdb;

ALTER ROLE fast_sqmdb WITH CREATEEXTTABLE;

ALTER ROLE fast_sqmdb SET search_path = sqmdb_3a,do3a_tas_master,public;

ALTER ROLE fast_sqmdb SET default_tablespace = 'threea_tas';
