segment start failed: PG_CTL


周梦想

Feb 28, 2016, 11:21:22 PM
to Greenplum Users
Hello,

When I initialized Greenplum, I encountered a problem: I can't start the segments.

My topology is: master mdw1, backup master mdw2, and segments sdw3, sdw4, sdw5.

OS: CentOS 7

Can anyone help me?

Thanks!
Andy Zhou

The command is:

gpinitsystem -c gpinitsystem_config -s mdw2

The error message is:

20160225:18:54:12:028212 gpstart:mdw1:gpadmin-[INFO]:-Starting Master instance in admin mode
20160225:18:54:13:028212 gpstart:mdw1:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20160225:18:54:13:028212 gpstart:mdw1:gpadmin-[INFO]:-Obtaining Segment details from master...
20160225:18:54:13:028212 gpstart:mdw1:gpadmin-[INFO]:-Setting new master era
20160225:18:54:13:028212 gpstart:mdw1:gpadmin-[INFO]:-Master Started...
20160225:18:54:13:028212 gpstart:mdw1:gpadmin-[INFO]:-Shutting down master
20160225:18:54:14:028212 gpstart:mdw1:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait...
................................................
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-Process results...
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[ERROR]:-No segment started for content: 4.
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-dumping success segments: ['sdw5:/home/gpadmin/gpdata/gpdatam1/gpseg2:content=2:dbid=10:mode=s:status=u', 'sdw5:/home/gpadmin/gpdata/gpdatam2/gpseg3:content=3:dbid=11:mode=s:status=u', 'sdw4:/home/gpadmin/gpdata/gpdatap1/gpseg2:content=2:dbid=4:mode=s:status=u', 'sdw4:/home/gpadmin/gpdata/gpdatam1/gpseg0:content=0:dbid=8:mode=s:status=u', 'svr3:/home/gpadmin/gpdata/gpdatap1/gpseg0:content=0:dbid=2:mode=s:status=u']
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-----------------------------------------------------
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-DBID:7  FAILED  host:'sdw5' datadir:'/home/gpadmin/gpdata/gpdatap2/gpseg5' with reason:'PG_CTL failed.'
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-DBID:6  FAILED  host:'sdw5' datadir:'/home/gpadmin/gpdata/gpdatap1/gpseg4' with reason:'PG_CTL failed.'
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-DBID:5  FAILED  host:'sdw4' datadir:'/home/gpadmin/gpdata/gpdatap2/gpseg3' with reason:'PG_CTL failed.'
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-DBID:9  FAILED  host:'sdw4' datadir:'/home/gpadmin/gpdata/gpdatam2/gpseg1' with reason:'PG_CTL failed.'
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-DBID:12  FAILED  host:'svr3' datadir:'/home/gpadmin/gpdata/gpdatam1/gpseg4' with reason:'PG_CTL failed.'
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-DBID:13  FAILED  host:'svr3' datadir:'/home/gpadmin/gpdata/gpdatam2/gpseg5' with reason:'PG_CTL failed.'
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-DBID:3  FAILED  host:'svr3' datadir:'/home/gpadmin/gpdata/gpdatap2/gpseg1' with reason:'Failure in segment mirroring; check segment logfile'
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-----------------------------------------------------


20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-----------------------------------------------------
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-   Successful segment starts                                                     = 5
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[WARNING]:-Failed segment starts, from mirroring connection between primary and mirror   = 1   <<<<<<<<
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[WARNING]:-Other failed segment starts                                                   = 6   <<<<<<<<
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-   Skipped segment starts (segments are marked down in configuration)            = 0
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-----------------------------------------------------
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-Successfully started 5 of 12 segment instances <<<<<<<<
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-----------------------------------------------------
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[WARNING]:-Segment instance startup failures reported
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[WARNING]:-Failed start 7 of 12 segment instances <<<<<<<<
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[WARNING]:-Review /home/gpadmin/gpAdminLogs/gpstart_20160225.log
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-----------------------------------------------------
20160225:19:04:23:028212 gpstart:mdw1:gpadmin-[INFO]:-Commencing parallel segment instance shutdown, please wait...
........

20160225:19:04:38:028212 gpstart:mdw1:gpadmin-[ERROR]:-gpstart error: Do not have enough valid segments to start the array.
20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[WARN]:
20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[WARN]:-Failed to start Greenplum instance; review gpstart output to
20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[WARN]:- determine why gpstart failed and reinitialize cluster after resolving
20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[WARN]:- issues.  Not all initialization tasks have completed so the cluster
20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[WARN]:- should not be used.
20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[WARN]:-gpinitsystem will now try to stop the cluster
20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[WARN]:
20160225:19:04:43:029052 gpstop:mdw1:gpadmin-[INFO]:-Starting gpstop with args: -a -i -d /home/gpadmin/gpdata/gpmaster/gpseg-1
20160225:19:04:43:029052 gpstop:mdw1:gpadmin-[INFO]:-Gathering information and validating the environment...
20160225:19:04:43:029052 gpstop:mdw1:gpadmin-[ERROR]:-gpstop error: postmaster.pid file does not exist.  is Greenplum instance already stopped?
20160225:19:04:43:gpinitsystem:mdw1:gpadmin-[WARN]:-Failed to stop new Greenplum instance Script Exiting!
20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[WARN]:-Script has left Greenplum Database in an incomplete state
20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[WARN]:-Run command /bin/bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20160225_184719 to remove these changes
20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[INFO]:-Start Function BACKOUT_COMMAND
20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[INFO]:-End Function BACKOUT_COMMAND 

Ivan Novick

Feb 28, 2016, 11:22:38 PM
to 周梦想, Greenplum Users
Can you check the pg_log and startup log for failed segments?




周梦想

Feb 29, 2016, 1:07:27 AM
to Greenplum Users, ablo...@gmail.com
Hi inovick,
Below is the log from segment sdw3.
Thanks.
Andy 

gpsegstart.py_sdw3:gpadmin_20160226.log

20160226:15:40:29:010666 gpsegstart.py_sdw3:gpadmin:sdw3:gpadmin-[INFO]:-Validating directory: /home/gpadmin/gpdata/gpdatap2/gpseg1
20160226:15:40:29:010666 gpsegstart.py_sdw3:gpadmin:sdw3:gpadmin-[INFO]:-Validating directory: /home/gpadmin/gpdata/gpdatap1/gpseg0
20160226:15:40:29:010666 gpsegstart.py_sdw3:gpadmin:sdw3:gpadmin-[INFO]:-Validating directory: /home/gpadmin/gpdata/gpdatam2/gpseg5
20160226:15:40:29:010666 gpsegstart.py_sdw3:gpadmin:sdw3:gpadmin-[INFO]:-Validating directory: /home/gpadmin/gpdata/gpdatam1/gpseg4
20160226:15:40:29:010666 gpsegstart.py_sdw3:gpadmin:sdw3:gpadmin-[INFO]:-Starting segments... (mirroringMode quiescent)
20160226:15:40:31:010666 gpsegstart.py_sdw3:gpadmin:sdw3:gpadmin-[INFO]:-Marking failed /home/gpadmin/gpdata/gpdatap2/gpseg1, PG_CTL failed.
stdout:waiting for server to start......pg_ctl: PID file "/home/gpadmin/gpdata/gpdatap2/gpseg1/postmaster.pid" does not exist
 stopped waiting
pg_ctl: could not start server
Examine the log output.

stderr:
, 8
20160226:15:40:31:010666 gpsegstart.py_sdw3:gpadmin:sdw3:gpadmin-[INFO]:-Marking failed /home/gpadmin/gpdata/gpdatap1/gpseg0, PG_CTL failed.
stdout:waiting for server to start......pg_ctl: PID file "/home/gpadmin/gpdata/gpdatap1/gpseg0/postmaster.pid" does not exist
 stopped waiting
pg_ctl: could not start server
Examine the log output.

stderr:
, 8
20160226:15:40:31:010666 gpsegstart.py_sdw3:gpadmin:sdw3:gpadmin-[INFO]:-Marking failed /home/gpadmin/gpdata/gpdatam2/gpseg5, PG_CTL failed.

20160226:15:40:31:010666 gpsegstart.py_sdw3:gpadmin:sdw3:gpadmin-[INFO]:-Postmaster /home/gpadmin/gpdata/gpdatam1/gpseg4 is running (pid 10716)
20160226:15:40:31:010666 gpsegstart.py_sdw3:gpadmin:sdw3:gpadmin-[INFO]:-
COMMAND RESULTS
STATUS--DIR:/home/gpadmin/gpdata/gpdatap2/gpseg1--STARTED:False--REASONCODE:8--REASON:PG_CTL failed.
stdout:waiting for server to start......pg_ctl: PID file "/home/gpadmin/gpdata/gpdatap2/gpseg1/postmaster.pid" does not exist
 stopped waiting
pg_ctl: could not start server
Examine the log output.

stderr:

STATUS--DIR:/home/gpadmin/gpdata/gpdatap1/gpseg0--STARTED:False--REASONCODE:8--REASON:PG_CTL failed.
stdout:waiting for server to start......pg_ctl: PID file "/home/gpadmin/gpdata/gpdatap1/gpseg0/postmaster.pid" does not exist
 stopped waiting
pg_ctl: could not start server
Examine the log output.

stderr:

STATUS--DIR:/home/gpadmin/gpdata/gpdatam2/gpseg5--STARTED:False--REASONCODE:8--REASON:PG_CTL failed.
stdout:waiting for server to start......pg_ctl: PID file "/home/gpadmin/gpdata/gpdatam2/gpseg5/postmaster.pid" does not exist
 stopped waiting
pg_ctl: could not start server
Examine the log output.

stderr:

STATUS--DIR:/home/gpadmin/gpdata/gpdatam1/gpseg4--STARTED:True--REASONCODE:0--REASON:Start Succeeded

On Monday, February 29, 2016 at 12:22:38 PM UTC+8, inovick wrote:

Ivan Novick

Feb 29, 2016, 1:46:54 AM
to 周梦想, Greenplum Users
How about:

/home/gpadmin/gpdata/gpdatap2/gpseg1/pg_log/startup.log 
/home/gpadmin/gpdata/gpdatap2/gpseg1/pg_log/*

Any errors in these files?


周梦想

Feb 29, 2016, 2:18:43 AM
to Greenplum Users, ablo...@gmail.com
It seems that it won't work if I don't set the sysctl kernel parameters or ulimits?

/home/gpadmin/gpdata/gpdatap2/gpseg1/pg_log/startup.log is

2016-02-26 15:39:46.053219 CST,,,p7530,th-1445783488,,,,0,,,seg-1,,,,,"LOG","00000","removing all temporary files",,,,,,,,"RemovePgTempFiles","fd.c",1897,
2016-02-26 15:39:46.053816 CST,,,p7530,th-1445783488,,,,0,,,seg-1,,,,,"LOG","00000","temporary files using default filespace",,,,,,,,"primaryMirrorPopulateFilespaceInfo","primary_mirror_mode.c",2569,
2016-02-26 15:39:46.053898 CST,,,p7530,th-1445783488,,,,0,,,seg-1,,,,,"LOG","00000","transaction files using default pg_system filespace",,,,,,,,"primaryMirrorPopulateFilespaceInfo","primary_mirror_mode.c",2629,
2016-02-26 15:40:29.578928 CST,,,p10705,th-1917384640,,,,0,,,seg-1,,,,,"LOG","00000","removing all temporary files",,,,,,,,"RemovePgTempFiles","fd.c",1897,
2016-02-26 15:40:29.579476 CST,,,p10705,th-1917384640,,,,0,,,seg-1,,,,,"LOG","00000","temporary files using default filespace",,,,,,,,"primaryMirrorPopulateFilespaceInfo","primary_mirror_mode.c",2569,
2016-02-26 15:40:29.579563 CST,,,p10705,th-1917384640,,,,0,,,seg-1,,,,,"LOG","00000","transaction files using default pg_system filespace",,,,,,,,"primaryMirrorPopulateFilespaceInfo","primary_mirror_mode.c",2629,
2016-02-26 15:40:30.048581 CST,,,p10705,th-1917384640,,,,0,,,seg-1,,,,,"FATAL","XX000","could not create semaphores: No space left on device (pg_sema.c:129)","Failed system call was semget(40001049, 17, 03600).","This error does *not* mean that you have run out of disk space.
It occurs when either the system limit for the maximum number of semaphore sets (SEMMNI), or the system wide maximum number of semaphores (SEMMNS), would be exceeded.  You need to raise the respective kernel parameter.  Alternatively, reduce PostgreSQL's consumption of semaphores by reducing its max_connections parameter (currently 750).
The PostgreSQL documentation contains more information about configuring your system for PostgreSQL.",,,,,,"InternalIpcSemaphoreCreate","pg_sema.c",129,1    0x8bc698 postgres errstart + 0x278
2    0x75f955 postgres PGSemaphoreCreate + 0x205
3    0x96ec07 postgres FileRepIpc_ShmemInit + 0x67
4    0x7b31df postgres CreateSharedMemoryAndSemaphores + 0x57f
5    0x775ae0 postgres PostmasterMain + 0x5b0
6    0x485dbb postgres main + 0x3bb
7    0x7ff08c426b15 libc.so.6 __libc_start_main + 0xf5
8    0x485ed9 postgres <symbol not found> + 0x485ed9
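For reference, the limits named in that hint (SEMMNI/SEMMNS) are kernel settings that can be raised with sysctl on CentOS 7. A minimal sketch, with illustrative values only (not taken from this thread; check the install guide for the recommended numbers):

# /etc/sysctl.d/gpdb.conf -- illustrative values
# kernel.sem = SEMMSL SEMMNS SEMOPM SEMMNI
kernel.sem = 250 512000 100 2048

# load the new values without a reboot
sudo sysctl --system

Alternatively, as the hint itself says, lowering max_connections reduces the number of semaphores each instance requests.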

thanks.
On Monday, February 29, 2016 at 2:46:54 PM UTC+8, inovick wrote:

Ivan Novick

Feb 29, 2016, 2:25:06 AM
to 周梦想, Greenplum Users
Maybe you need to lower your max_connections (to reduce shared memory requirement) in postgresql.conf

Try this:

gpstart -a -m
gpconfig -c max_connections -v 75 -m 10
gpstop -a -m
gpstart -a

Cheers,
Ivan

zhh

Feb 29, 2016, 2:29:01 AM
to Ivan Novick, ablozhou, Greenplum Users
Thank you, Ivan!

But I'm using "gpinitsystem -c gpinitsystem_config -s mdw2" to initialize the system.
How do I set these configs so the system initializes successfully?

Best regards,
Andy

Ivan Novick

Feb 29, 2016, 2:40:00 AM
to zhh, ablozhou, Greenplum Users
Maybe try setting it using the -p parameter:

-p postgresql_conf_param_file
Optional. The name of a file that contains postgresql.conf parameter settings that you want to set for Greenplum Database. These settings will be used when the individual master and segment instances are initialized. You can also set parameters after initialization using the gpconfig utility.
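A sketch of what such a parameter file might look like; it should contain only the settings you want to override, in normal postgresql.conf syntax (the file name and values here are illustrative assumptions, not from this thread):

# gp_overrides.conf
max_connections = 100
shared_buffers = 32MB

gpinitsystem -c gpinitsystem_config -s mdw2 -p gp_overrides.conf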

Cheers,
Ivan

zhh

Feb 29, 2016, 2:40:37 AM
to Ivan Novick, ablozhou, Greenplum Users
There was an error when setting max_connections.
Is it caused by the system not being initialized correctly?

[gpadmin@mdw2 bin]$ gpstart -m
20160229:15:37:31:005861 gpstart:mdw2:gpadmin-[INFO]:-Setting new master era
20160229:15:37:31:005861 gpstart:mdw2:gpadmin-[INFO]:-Master Started...
[gpadmin@mdw2 bin]$
[gpadmin@mdw2 bin]$ gpconfig -c max_connections -v 75 -m 10
20160229:15:36:23:005825 gpconfig:mdw2:gpadmin-[ERROR]:-FATAL:  database "mydb" does not exist

20160229:15:36:23:005825 gpconfig:mdw2:gpadmin-[ERROR]:-Failed to connect to database, exiting without action. This script can only be run when the database is up.

[gpadmin@mdw2 bin]$ ps -ef | grep postgres
gpadmin   5771     1  0 15:33 ?        00:00:00 /usr/local/gpdb/bin/postgres -D /home/gpadmin/gpdata/gpmaster/gpseg-1 -p 5432 -b 1 -z 0 --silent-mode=true -i -M master -C -1 -x 0 -c gp_role=utility
gpadmin   5772  5771  0 15:33 ?        00:00:00 postgres: port  5432, logger process
gpadmin   5775  5771  0 15:33 ?        00:00:00 postgres: port  5432, stats collector process
gpadmin   5776  5771  0 15:33 ?        00:00:00 postgres: port  5432, writer process
gpadmin   5777  5771  0 15:33 ?        00:00:00 postgres: port  5432, checkpoint process
gpadmin   5778  5771  0 15:33 ?        00:00:00 postgres: port  5432, sweeper process
gpadmin   5835  4344  0 15:36 pts/5    00:00:00 grep --color=auto postgres

Ivan Novick

Feb 29, 2016, 2:44:24 AM
to zhh, ablozhou, Greenplum Users
Are you setting PGDATABASE=mydb?

Maybe you need to unset that, or set PGDATABASE=template1?
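For example, in the gpadmin shell on the master, either of these should work before re-running gpconfig (a sketch of what is being suggested, not verified on this cluster):

unset PGDATABASE                 # fall back to the default database
# or
export PGDATABASE=template1      # template1 always exists

gpconfig -c max_connections -v 75 -m 10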

Cheers,
Ivan

zhh

Feb 29, 2016, 2:52:21 AM
to Ivan Novick, ablozhou, Greenplum Users
Yes, I’ve set DATABASE_NAME=mydb in gpinitsystem_config.
But I failed to start segments.

zhh

Feb 29, 2016, 4:06:18 AM
to Ivan Novick, ablozhou, Greenplum Users
Hi Ivan, thank you very much.
I copied postgresql.conf from /home/gpadmin/gpdata/gpmaster/gpseg-1/postgresql.conf and modified max_connections from 250 to 100.
Then I used gpinitsystem -c init.conf -p postgresql.conf to init the system.

But it says:
20160229:16:33:20:014936 gpinitsystem:mdw2:gpadmin-[INFO]:-Building the Master instance database, please wait...
20160229:16:33:33:014936 gpinitsystem:mdw2:gpadmin-[INFO]:-Found more than 1 instance of port in /home/gpadmin/gpdata/gpmaster/gpseg-1/postgresql.conf, will append
20160229:16:33:34:014936 gpinitsystem:mdw2:gpadmin-[INFO]:-Found more than 1 instance of max_connections in /home/gpadmin/gpdata/gpmaster/gpseg-1/postgresql.conf, will append

Now there are two max_connections entries in every postgresql.conf under the gpdata directory: one is my value of 100, and the other is the default 250.
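When a setting appears more than once in postgresql.conf, PostgreSQL uses the last occurrence, so the appended value is the one that takes effect. A quick way to check what each instance ended up with, using the paths and host file already shown in this thread:

grep -n max_connections /home/gpadmin/gpdata/*/gpseg*/postgresql.conf

# or across all segment hosts at once:
gpssh -f /home/gpadmin/hostfile_segonly -e 'grep -n max_connections /home/gpadmin/gpdata/*/gpseg*/postgresql.conf'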

Where do these postgresql.conf files come from?

[gpadmin@mdw2 gpdata]$ find . | grep postgresql.conf
./gpmaster/gpseg-1/postgresql.conf
[gpadmin@sdw4 gpdata]$ find . | grep postgresql.conf
./gpdatap1/gpseg2/postgresql.conf
./gpdatap2/gpseg3/postgresql.conf
./gpdatam1/gpseg0/postgresql.conf
./gpdatam2/gpseg1/postgresql.conf

[gpadmin@sdw5 gpdata]$ find . | grep postgresql.conf
./gpdatap1/gpseg4/postgresql.conf
./gpdatam1/gpseg2/postgresql.conf
./gpdatam2/gpseg3/postgresql.conf

[gpadmin@sdw3 gpdata]$ find . | grep postgresql.conf
./gpdatap1/gpseg0/postgresql.conf
./gpdatap2/gpseg1/postgresql.conf
./gpdatam1/gpseg4/postgresql.conf
./gpdatam2/gpseg5/postgresql.conf

The init command took a long time, about 20 minutes, and it failed:
20160229:16:48:38:014936 gpinitsystem:mdw2:gpadmin-[INFO]:-Parallel process exit status
20160229:16:48:38:014936 gpinitsystem:mdw2:gpadmin-[INFO]:------------------------------------------------
20160229:16:48:38:014936 gpinitsystem:mdw2:gpadmin-[INFO]:-Total processes marked as completed           = 5
20160229:16:48:38:014936 gpinitsystem:mdw2:gpadmin-[INFO]:-Total processes marked as killed              = 0
20160229:16:48:38:014936 gpinitsystem:mdw2:gpadmin-[WARN]:-Total processes marked as failed              = 1 <<<<<
20160229:16:48:38:014936 gpinitsystem:mdw2:gpadmin-[INFO]:------------------------------------------------
20160229:16:48:38:014936 gpinitsystem:mdw2:gpadmin-[INFO]:-End Function PARALLEL_SUMMARY_STATUS_REPORT
FAILED:sdw3~50001~/home/gpadmin/gpdata/gpdatam2/gpseg5~13~5~51001
COMPLETED:sdw3~50000~/home/gpadmin/gpdata/gpdatam1/gpseg4~12~4~51000
COMPLETED:sdw5~50001~/home/gpadmin/gpdata/gpdatam2/gpseg3~11~3~51001
COMPLETED:sdw5~50000~/home/gpadmin/gpdata/gpdatam1/gpseg2~10~2~51000
COMPLETED:sdw4~50000~/home/gpadmin/gpdata/gpdatam1/gpseg0~8~0~51000
COMPLETED:sdw4~50001~/home/gpadmin/gpdata/gpdatam2/gpseg1~9~1~51001
20160229:16:48:38:014936 gpinitsystem:mdw2:gpadmin-[INFO]:-End Function CREATE_QES_MIRROR
INSERT 0 1
20160229:16:48:38:014936 gpinitsystem:mdw2:gpadmin-[FATAL]:-Errors generated from parallel processes
20160229:16:48:38:014936 gpinitsystem:mdw2:gpadmin-[INFO]:-Dumped contents of status file to the log file
20160229:16:48:38:014936 gpinitsystem:mdw2:gpadmin-[INFO]:-Building composite backout file
20160229:16:48:39:014936 gpinitsystem:mdw2:gpadmin-[INFO]:-Start Function ERROR_EXIT
20160229:16:48:39:gpinitsystem:mdw2:gpadmin-[FATAL]:-Failures detected, see log file /home/gpadmin/gpAdminLogs/gpinitsystem_20160229.log for more detail Script Exiting!

But there is no error in the segments' pg_log:
[gpadmin@sdw3 pg_log]$ cat startup.log
2016-02-29 16:42:00.434524 CST,,,p27026,th899405888,,,,0,,,seg-1,,,,,"LOG","00000","removing all temporary files",,,,,,,,"RemovePgTempFiles","fd.c",1897,
2016-02-29 16:42:00.444017 CST,,,p27026,th899405888,,,,0,,,seg-1,,,,,"LOG","00000","temporary files using default filespace",,,,,,,,"primaryMirrorPopulateFilespaceInfo","primary_mirror_mode.c",2569,
2016-02-29 16:42:00.444102 CST,,,p27026,th899405888,,,,0,,,seg-1,,,,,"LOG","00000","transaction files using default pg_system filespace",,,,,,,,"primaryMirrorPopulateFilespaceInfo","primary_mirror_mode.c",2629,
[gpadmin@sdw3 pg_log]$

Keaton Adams

Feb 29, 2016, 8:31:43 AM
to Greenplum Users
Hello Andy,

Here are a few ideas.

1.) When the database failed to initialize, did you run the backout script, as noted in the gpinitsystem output?  If a gpinitsystem fails for some reason, it is important to review the content of /home/gpadmin/gpAdminLogs and $MASTER_DATA_DIRECTORY/pg_log to see what went wrong, hopefully fix the problem and then, before attempting another gpinitsystem, run the backout_gpinitsystem script to properly clean up from the last failed run:

20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[WARN]:-Run command /bin/bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20160225_184719 to remove these changes


2.) Are you using the Greenplum Database documentation available on the Pivotal.io Website to help get through the installation process?  There are certain OS and kernel parameters that need to be set on each machine in the GPDB cluster before Greenplum Database will initialize and operate properly. All relevant documentation can be found at the URL below.  Of particular interest should be the "Clustering Concepts" guide, the "Best Practices" guide, the "Administrator" guide, and of course, the "Installation" guide.  The installation guide will walk step-by-step through how to properly configure the OS and cluster, and actually go through the process of initializing a Greenplum Database:



3.) You are also attempting to use CentOS 7, which Greenplum Database just recently became certified on.  Compared to CentOS 6.x, version 7 has major changes to several key aspects of the Operating System, which many GPDB DBAs are still familiarizing themselves with.  I have attached my personal notes that I have been working on with some additional / supplemental commands on how to properly set up a CentOS 7 system to run Greenplum Database. Use the official "GPDB Install" guide first, with the addition of the attached notes to help properly configure CentOS 7 before attempting a gpinitsystem for the first time.


4.) If, after reviewing the docs on the Pivotal site and what is attached here, GPDB will still not initialize successfully, then I would need more information about the environment to assist further.  VMs or physical hardware?  What type of network for the GPDB interconnect?  Amount of RAM and number of CPU cores per server? Disk configuration? Are you using XFS for the data volumes with the tuning parameters GPDB requires? For the hardware requirements of GPDB, please refer to the "Cluster Configuration" guide, as well as this blog post on the Pivotal P.O.V. site: https://blog.pivotal.io/big-data-pivotal/features/how-to-build-a-hardware-cluster-for-pivotal-greenplum-database


Thanks,

Keaton
GPDB_CentOS7_Supplemental.pdf

Scott Kahler

Feb 29, 2016, 10:17:52 AM
to Keaton Adams, Greenplum Users
In the new install docs there are notes about the gpadmin account needing to be a system account, or there being some changes you need to make to systemd, in order for some of the variables set by sysctl to stick and not get dialed back any time the gpadmin account logs out (even from a backend ssh connection). Not sure that is your issue, but I was seeing similar errors until we got those hammered out.
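One concrete CentOS 7 setting that behaves this way is systemd-logind's RemoveIPC, which (when it defaults to yes, as it does on some 7.x releases) deletes a non-system user's SysV semaphores and shared memory when that user's last session ends; that would match the semaphore errors earlier in this thread. A sketch of the usual workaround, assuming the install docs' guidance rather than quoting it:

# /etc/systemd/logind.conf
RemoveIPC=no

# then, as root:
systemctl restart systemd-logind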




--

Scott Kahler | Pivotal, R&D, Platform Engineering  | ska...@pivotal.io | 816.237.0610

zhh

Feb 29, 2016, 7:17:44 PM
to Keaton Adams, Greenplum Users
Hello Keaton,
Thank you for your kind reply.

I'll try using the GPDB installer if I can't init the system today.

The new error message seems to say that there is not enough memory?

Best regards,
Andy

[gpadmin@mdw2 ~]$ cat /etc/redhat-release
CentOS Linux release 7.2.1511 (Core) 
[gpadmin@sdw3 gpseg4]$ free -mh
             total        used        free      shared  buff/cache   available
Mem:           1.8G        352M        923M         56M        564M        1.3G
Swap:          2.0G          0B        2.0G

[gpadmin@mdw2 ~]$ cat hostfile_segonly
sdw3
sdw4
sdw5

[gpadmin@mdw2 ~]$ gpinitsystem -c gpinitsystem_config --max_connections=60

Error logs:
20160229:18:36:49:025435 gpinitsystem:mdw2:gpadmin-[INFO]:-Dumping gpinitsystem_config to logfile for reference



ARRAY_NAME="MyGP"

SEG_PREFIX=gpseg

PORT_BASE=40000

declare -a DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatap1 /home/gpadmin/gpdata/gpdatap2)

MASTER_HOSTNAME=mdw2

MASTER_DIRECTORY=/home/gpadmin/gpdata/gpmaster

MASTER_PORT=5432

TRUSTED_SHELL=ssh

CHECK_POINT_SEGMENTS=8

ENCODING=UNICODE


MIRROR_PORT_BASE=50000

REPLICATION_PORT_BASE=41000

MIRROR_REPLICATION_PORT_BASE=51000

declare -a MIRROR_DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatam1 /home/gpadmin/gpdata/gpdatam2)
MACHINE_LIST_FILE=/home/gpadmin/hostfile_segonly

20160229:18:39:59:025435 gpinitsystem:mdw2:gpadmin-[INFO]:-Parallel process exit status
20160229:18:39:59:025435 gpinitsystem:mdw2:gpadmin-[INFO]:------------------------------------------------
20160229:18:39:59:025435 gpinitsystem:mdw2:gpadmin-[INFO]:-Total processes marked as completed           = 5
20160229:18:39:59:025435 gpinitsystem:mdw2:gpadmin-[INFO]:-Total processes marked as killed              = 0
20160229:18:39:59:025435 gpinitsystem:mdw2:gpadmin-[WARN]:-Total processes marked as failed              = 1 <<<<<
20160229:18:40:31:025435 gpinitsystem:mdw2:gpadmin-[FATAL]:-Errors generated from parallel processes
20160229:18:40:31:025435 gpinitsystem:mdw2:gpadmin-[INFO]:-Dumped contents of status file to the log file
20160229:18:40:31:025435 gpinitsystem:mdw2:gpadmin-[INFO]:-Building composite backout file
20160229:18:39:04:010697 gpcreateseg.sh:mdw2:gpadmin-[FATAL][5]:-Failed to start segment instance database sdw5 /home/gpadmin/gpdata/gpdatap2/gpseg5
20160229:18:40:31:gpinitsystem:mdw2:gpadmin-[FATAL]:-Failures detected, see log file /home/gpadmin/gpAdminLogs/gpinitsystem_20160229.log for more detail Script Exiting!
20160229:18:40:31:025435 gpinitsystem:mdw2:gpadmin-[WARN]:-Script has left Greenplum Database in an incomplete state..
creating directory /home/gpadmin/gpdata/gpdatap2/gpseg5 ... ok
creating subdirectories ... ok
selecting default max_connections ... 180
selecting default shared_buffers/max_fsm_pages ... 125MB/200000
creating configuration files ... ok
creating template1 database in /home/gpadmin/gpdata/gpdatap2/gpseg1/base/1 ... 2016-02-29 10:39:03.762725 GMT,,,p19508,th1459763264,,,,0,,,seg-1,,,,,"WARNING","01000","""fsync"": can not be set by the user and will be ignored.",,,,,,,,"set_config_option","guc.c",4336,
initdb: error 256 from: "/usr/local/gpdb/bin/postgres" --boot -x0 -F -c max_connections=180 -c shared_buffers=4000 -c max_fsm_pages=200000 < "/dev/null" > "/home/gpadmin/gpdata/gpdatap2/gpseg5.initdb" 2>&1
initdb: removing data directory "/home/gpadmin/gpdata/gpdatap2/gpseg5"
180
selecting default shared_buffers/max_fsm_pages ... 125MB/200000
creating configuration files ... ok
creating template1 database in /home/gpadmin/gpdata/gpdatap2/gpseg3/base/1 ... 2016-02-29 10:39:04.055714 GMT,,,p12218,th804448320,,,,0,,,seg-1,,,,,"WARNING","01000","""fsync"": can not be set by the user and will be ignored.",,,,,,,,"set_config_option","guc.c",4336,
2016-02-29 10:39:03.765776 GMT,,,p27759,th-28538816,,,,0,,,seg-1,,,,,"WARNING","01000","""fsync"": can not be set by the user and will be ignored.",,,,,,,,"set_config_option","guc.c",4336,
2016-02-29 18:39:03.933586 CST,,,p27759,th-28538816,,,,0,,,seg-1,,,,,"FATAL","XX000","could not create shared memory segment: 无法分配内存 (Cannot allocate memory) (pg_shmem.c:183)","Failed system call was shmget(key=2, size=170729952, 03600).","This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory or swap space. To reduce the request size (currently 170729952 bytes), reduce PostgreSQL's shared_buffers parameter (currently 4000) and/or its max_connections parameter (currently 180).
The PostgreSQL documentation contains more information about shared memory configuration.",,,,,,"InternalIpcMemoryCreate","pg_shmem.c",183,1    0x8bc698 postgres errstart + 0x278
2    0x76032f postgres PGSharedMemoryCreate + 0x16f
3    0x7b2fc9 postgres CreateSharedMemoryAndSemaphores + 0x369
4    0x8cd729 postgres BaseInit + 0x19
5    0x53bbbf postgres AuxiliaryProcessMain + 0x31f
20160229:18:39:04:010697 gpcreateseg.sh:mdw2:gpadmin-[FATAL][5]:-Failed to start segment instance database sdw5 /home/gpadmin/gpdata/gpdatap2/gpseg5
ok
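When shmget/semget failures like the one above show up, standard Linux tooling on the segment host shows what is already allocated and what limits are in effect (generic commands, not Greenplum-specific):

ipcs -m    # SysV shared memory segments currently allocated
ipcs -s    # semaphore arrays currently allocated
ipcs -l    # kernel limits in effect (SHMMAX, SEMMNS, SEMMNI, ...)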

On Feb 29, 2016, at 21:31, Keaton Adams <kad...@pivotal.io> wrote:

Hello Andy,

Here are a few ideas.

1.) When the database failed to initialize, did you run the backout script, as noted in the gpinitsystem output?  If a gpinitsystem fails for some reason, it is important to review the content of /home/gpadmin/gpAdminLogs and $MASTER_DATA_DIRECTORY/pg_log to see what went wrong, hopefully fix the problem and then, before attempting another gpinitsystem, run the backout_gpinitsystem script to properly clean up from the last failed run:

20160225:19:04:43:025577 gpinitsystem:mdw1:gpadmin-[WARN]:-Run command /bin/bash /home/gpadmin/gpAdminLogs/backout_gpinitsystem_gpadmin_20160225_184719 to remove these changes

Sometimes I run the scripts; sometimes I just kill the processes and remove all the directories on each node.


2.) Are you using the Greenplum Database documentation available on the Pivotal.io Website to help get through the installation process?  There are certain OS and kernel parameters that need to be set on each machine in the GPDB cluster before Greenplum Database will initialize and operate properly. All relevant documentation can be found at the URL below.  Of particular interest should be the "Clustering Concepts" guide, the "Best Practices" guide, the "Administrator" guide, and of course, the "Installation" guide.  The installation guide will walk step-by-step through how to properly configure the OS and cluster, and actually go through the process of initializing a Greenplum Database:


I'm using the documents, but I haven't read them completely yet. I'll finish them.


3.) You are also attempting to use CentOS 7, which Greenplum Database just recently became certified on.  Compared to CentOS 6.x, version 7 has major changes to several key aspects of the Operating System, which many GPDB DBAs are still familiarizing themselves with.  I have attached my personal notes that I have been working on with some additional / supplemental commands on how to properly set up a CentOS 7 system to run Greenplum Database. Use the official "GPDB Install" guide first, with the addition of the attached notes to help properly configure CentOS 7 before attempting a gpinitsystem for the first time.

OK, thank you for the information. I compiled GPDB from source; I'll try running the GPDB installer.


4.) If, after reviewing the docs on the Pivotal site and what is attached here, GPDB will still not initialize successfully, then I would need more information about the environment to assist further.  VMs or physical hardware?  What type of network for the GPDB interconnect?  Amount of RAM and number of CPU cores per server? Disk configuration? Are you using XFS for the data volumes with the tuning parameters GPDB requires? For the hardware requirements of GPDB, please refer to the "Cluster Configuration" guide, as well as this blog post on the Pivotal P.O.V. site: https://blog.pivotal.io/big-data-pivotal/features/how-to-build-a-hardware-cluster-for-pivotal-greenplum-database


ok, I’ve got this.  

zhh

Feb 29, 2016, 7:26:17 PM
to Scott Kahler, Keaton Adams, Greenplum Users
Hello Scott,

Thanks for your information.
I’ll try to make the gpadmin be a system account and try again.

Best regards,
Andy

Scott Kahler

Feb 29, 2016, 7:32:29 PM
to zhh, Greenplum Users

What system are you running on that has 2G of memory? That is an extremely small amount, and you will probably need to tune down a bunch of things in order to run. Is it possible to give the systems in the cluster more memory?

zhh

Feb 29, 2016, 7:46:45 PM
to Scott Kahler, Greenplum Users
I'm running it on several VirtualBox machines. The OS is CentOS 7 without any tuning.
The host machine has 16GB of memory in total; its OS is Ubuntu 14.04.
Thanks,
Andy

Keaton Adams

Feb 29, 2016, 8:07:31 PM
to Greenplum Users, ska...@pivotal.io
To test out GPDB I would suggest downloading the Greenplum Database Sandbox VM:


Once you have exposure to the software along with the docs such as the getting started guide, then you might consider three CentOS VMs with 4 GB RAM each, leaving 4 GB for the host OS.  Configure one VM for the Master Segment, and one Data Segment for each of the other two VMs to test out the install process. Having only 2 GB of RAM per VM is not really sufficient to start a GPDB cluster.

Regards,

Keaton


zhh

Feb 29, 2016, 9:28:09 PM
to Keaton Adams, Greenplum Users, ska...@pivotal.io
Thank you, Keaton.

I'm trying to download the VM, but there is the GFW (Great Firewall), which causes my downloads to always fail.
I’ll increase the VM memory and try again.

Best regards,
Andy


zhh

Feb 29, 2016, 11:45:53 PM
to Keaton Adams, Greenplum Users, ska...@pivotal.io
Hello,

I've increased the memory of each node to 4G, but it still doesn't work.

mdw1 is the master; mdw2 and sdw3 are segments.


The directory mdw2:/home/gpadmin/gpdata/gpdatam1/gpseg2/pg_log is empty.

gpinitsystem -c gpinitsystem_config failed; the master log is:

20160301:12:05:32:016719 gpinitsystem:mdw1:gpadmin-[INFO]:-Parallel process exit status
20160301:12:05:32:016719 gpinitsystem:mdw1:gpadmin-[INFO]:------------------------------------------------
20160301:12:05:32:016719 gpinitsystem:mdw1:gpadmin-[INFO]:-Total processes marked as completed           = 2
20160301:12:05:32:016719 gpinitsystem:mdw1:gpadmin-[INFO]:-Total processes marked as killed              = 0
20160301:12:05:32:016719 gpinitsystem:mdw1:gpadmin-[WARN]:-Total processes marked as failed              = 2 <<<<<
20160301:12:05:32:016719 gpinitsystem:mdw1:gpadmin-[INFO]:------------------------------------------------
20160301:12:05:32:016719 gpinitsystem:mdw1:gpadmin-[INFO]:-End Function PARALLEL_SUMMARY_STATUS_REPORT
FAILED:mdw2~50000~/home/gpadmin/gpdata/gpdatam1/gpseg2~8~2~51000
FAILED:mdw2~50001~/home/gpadmin/gpdata/gpdatam2/gpseg3~9~3~51001
COMPLETED:sdw3~50001~/home/gpadmin/gpdata/gpdatam2/gpseg1~7~1~51001
COMPLETED:sdw3~50000~/home/gpadmin/gpdata/gpdatam1/gpseg0~6~0~51000
20160301:12:05:32:016719 gpinitsystem:mdw1:gpadmin-[INFO]:-End Function CREATE_QES_MIRROR
INSERT 0 1
20160301:12:05:32:016719 gpinitsystem:mdw1:gpadmin-[FATAL]:-Errors generated from parallel processes

zhh

Mar 1, 2016, 1:24:42 AM
to Keaton Adams, Greenplum Users, ska...@pivotal.io
Hello,
The config file is below:

20160301:11:48:41:005177 gpinitsystem:mdw1:gpadmin-[INFO]:-Dumping gpinitsystem_config to logfile for reference

ARRAY_NAME="MyGP"

SEG_PREFIX=gpseg

PORT_BASE=40000

declare -a DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatap1 /home/gpadmin/gpdata/gpdatap2)

MASTER_HOSTNAME=mdw1

MASTER_DIRECTORY=/home/gpadmin/gpdata/gpmaster

MASTER_PORT=5432

TRUSTED_SHELL=ssh

CHECK_POINT_SEGMENTS=8

ENCODING=UNICODE


MIRROR_PORT_BASE=50000

REPLICATION_PORT_BASE=41000

MIRROR_REPLICATION_PORT_BASE=51000

declare -a MIRROR_DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatam1 /home/gpadmin/gpdata/gpdatam2)

MACHINE_LIST_FILE=/home/gpadmin/hostfile_segonly

[gpadmin@mdw1 ~]$ cat hostfile_segonly
mdw2
sdw3

Keaton Adams

Mar 1, 2016, 8:23:40 AM
to Greenplum Users
See the notes below.  Thanks.


20160301:11:48:41:005177 gpinitsystem:mdw1:gpadmin-[INFO]:-Dumping gpinitsystem_config to logfile for reference
ARRAY_NAME="MyGP"
SEG_PREFIX=gpseg
PORT_BASE=40000


The init routine will create a primary segment for every data directory listed.  You really only have enough resources on the VMs for a single data segment per host, so change this:

declare -a DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatap1)

MASTER_HOSTNAME=mdw1
MASTER_DIRECTORY=/home/gpadmin/gpdata/gpmaster
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE


There will also not be enough resources for the mirror segments, so disable building the mirrors altogether:

#MIRROR_PORT_BASE=50000
#REPLICATION_PORT_BASE=41000
#MIRROR_REPLICATION_PORT_BASE=51000
#declare -a MIRROR_DATA_DIRECTORY=()


The naming convention should be: master: MDW1  segments: SDW1, SDW2.  Having a segment server called “mdw” is somewhat confusing.  This won’t stop the instance from initializing, but if you decide to rebuild the VMs from scratch at some point, consider altering the host names.
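Putting those suggestions together, the resulting gpinitsystem_config would look roughly like this; a sketch assembled from the values already posted in this thread, not a verified file:

ARRAY_NAME="MyGP"
SEG_PREFIX=gpseg
PORT_BASE=40000
declare -a DATA_DIRECTORY=(/home/gpadmin/gpdata/gpdatap1)
MASTER_HOSTNAME=mdw1
MASTER_DIRECTORY=/home/gpadmin/gpdata/gpmaster
MASTER_PORT=5432
TRUSTED_SHELL=ssh
CHECK_POINT_SEGMENTS=8
ENCODING=UNICODE
MACHINE_LIST_FILE=/home/gpadmin/hostfile_segonly
# mirrors disabled for this resource-constrained test:
#MIRROR_PORT_BASE=50000
#REPLICATION_PORT_BASE=41000
#MIRROR_REPLICATION_PORT_BASE=51000
#declare -a MIRROR_DATA_DIRECTORY=()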

Scott Kahler

Mar 1, 2016, 1:42:48 PM
to Greenplum Users
Andy,

Some of what you have going on there seems to be a "can I make it run" exercise. We can probably get it to deploy and run, but the practical usage will be fairly slim.

One change you will want to make: our install docs recommend setting vm.overcommit_memory to 2, which makes the system a lot more picky about memory management and about accounting that memory is available before doing things. In the extremely limited environment you are running, you will probably want to set this to 0 so the system can be a little looser with its memory accounting. This could mean you run into the OOM killer if you start running queries through the system or do things that use up all the memory.
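The parameter being referred to is vm.overcommit_memory (0 is the kernel's heuristic mode, 2 is the strict accounting recommended for real clusters). A sketch of switching it for a small test setup like this one:

# takes effect immediately:
sudo sysctl -w vm.overcommit_memory=0

# and to persist across reboots, e.g. in /etc/sysctl.d/gpdb.conf:
# vm.overcommit_memory = 0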

As Keaton said, remove your mirrors and cut it down to one primary per host. You may also want to tune down max_connections and drop the value of statement_mem, as those could help reduce the memory each process requires out of the gate.
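A sketch of lowering those two settings once the cluster is up; the values are illustrative, not a recommendation from this thread:

gpconfig -c max_connections -v 50 -m 10
gpconfig -c statement_mem -v '32MB'
gpstop -ar    # restart so max_connections takes effect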

If you want to follow the host naming convention we use most often, it is mdw (master), smdw (standby master), and sdw1..X (segment servers 1 to X).


zhh

Mar 2, 2016, 5:58:01 AM
to Keaton Adams, Greenplum Users
Thank you very much, Keaton.
I think it really was the resource limits that caused the problem.
After I reconfigured the nodes without mirrors and with only one data segment per host, it runs OK.

Best regards,
Andy


zhh

Mar 2, 2016, 6:03:29 AM
to Scott Kahler, Greenplum Users
Hi, Scott.
Thank you for your advice.
Maybe my question can help others who are new to Greenplum; that's the value :)
I had planned to use mdw2 as the backup master, but I don't have enough memory, so I shut down another segment and am using mdw2 as a segment node temporarily.
After I add more memory to the host, I will use it as the backup master again.