Performing a minor upgrade v7.4
Perform a minor version upgrade to 7.4. Because minor versions are binary-compatible, this process replaces the software binaries without requiring a full data migration.
An upgrade involves stopping the WHPG cluster, updating the software binaries across all nodes, and restarting the services.
Note
Starting from WarehousePG 7.4, the installation directory has been consolidated to /usr/edb/whpg7. For backward compatibility, the system automatically maintains a symbolic link at /usr/local/greenplum-db and /usr/local/greenplum-db-clients.
In this guide:
- 7.origin refers to your current version.
- 7.target refers to the version you are upgrading to.
Process overview
The following high-level steps summarize the upgrade lifecycle:
- Prepare for maintenance: Isolate the cluster and secure a full backup.
- Clear automation: Disable scheduled tasks and coordinate infrastructure snapshots.
- Validate baseline: Verify current software versions and run smoke tests.
- Quiesce the database: Stop peripheral services and the WHPG cluster.
- Replace binaries: Install the new software package across all cluster nodes.
- Restore services: Restart the cluster and verify health.
- Re-establish connectivity: Restore automation, connection pooling, and federation.
- Hand back: Resume normal business operations and monitoring.
Preparing for maintenance
These steps isolate the cluster from external traffic and automated scripts before the system handover.
Stop all monitoring applications: Shut down all monitoring agents to prevent false alerts during the maintenance window.
Stop scheduled tasks: Disable external schedulers that trigger jobs against the database.
Stop applications: Notify users and shut down application servers, web services, and BI tools. Ensure no new connections are being initiated.
Perform a backup of all databases: Execute a full backup of each database. This is your final safety point before modifying the environment.
gpbackup --dbname <database_name> --leaf-partition-data
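If you want the backup's identifier on record for a possible restore, it can be pulled from the gpbackup log. A minimal sketch, assuming the log contains a `Backup Timestamp = <14 digits>` line (the exact wording may differ between gpbackup versions, and the helper name is illustrative):

```shell
# Hypothetical helper: extract the 14-digit backup timestamp from a gpbackup
# log file so it can be recorded in the maintenance runbook. The exact
# "Backup Timestamp =" wording is an assumption about your gpbackup version.
backup_timestamp() {
  sed -n 's/.*Backup Timestamp = \([0-9]\{14\}\).*/\1/p' "$1" | head -n 1
}
```

You would point it at the log file gpbackup wrote for the run (the log location varies by configuration).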
Clearing automation and system handover
Once data is secure and traffic has ceased, prepare the operating system environment.
Hand over for infrastructure maintenance: Coordinate with your Infrastructure or Platform team to perform any required pre-upgrade safety measures. This may include creating environment snapshots, initiating virtual machine checkpoints, or performing hardware-level health checks to ensure the underlying platform is stable.
Save and remove crontab entries: Perform these steps as the root user on the coordinator and standby nodes to prevent maintenance scripts from firing during the binary swap.
Back up your gpadmin tasks from both the coordinator and standby nodes:
crontab -l -u gpadmin > /home/gpadmin/crontab_backup_$(date +%F)
Check for other user crontabs, either per user or for all users at once:
crontab -u <username> -l
for user in $(cut -f1 -d: /etc/passwd); do echo $user; crontab -u $user -l; done >crontab_all 2>crontab_all_no_cron
Remove the current entries:
crontab -r -u gpadmin
Verify the list is empty (should return no crontab for gpadmin):
crontab -l -u gpadmin
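Because `crontab -r` is destructive, a guard that refuses to clear entries unless the backup file exists and is non-empty can prevent losing the schedule. A minimal sketch (the `safe_crontab_clear` helper is hypothetical, not part of WHPG):

```shell
# Hypothetical guard: only clear gpadmin's crontab when a non-empty backup
# file is present, so the schedule can always be restored later.
safe_crontab_clear() {
  backup="$1"
  if [ ! -s "$backup" ]; then
    echo "refusing to clear crontab: backup missing or empty: $backup" >&2
    return 1
  fi
  crontab -r -u gpadmin
}
```

Run it as root, passing the backup file created in the previous step.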
Establishing a pre-upgrade baseline
Before beginning the update, capture the current state of your environment. This baseline serves as a reference point to ensure that all software versions, configurations, and connectivity are restored correctly once the update is complete.
Inventory installed packages: Document the exact versions of the WHPG packages currently running on the coordinator and segments.
For RHEL 8, RHEL 9:
sudo rpm -qa | grep whpg
For RHEL 7:
sudo yum list installed | grep whpg
Record component versions: Document the versions of any additionally installed components, such as PXF or PgBouncer. For example:
Check PXF version:
pxf --version
Check PgBouncer version: Connect to the admin console with psql -p 6543 -U pgbouncer -d pgbouncer and run SHOW VERSION;.
Perform a PXF smoke test: Confirm that connectivity is functional before the upgrade, so that any connectivity issues found afterward can be distinguished from pre-existing problems. For example:
psql -x demo -c 'SELECT * FROM public.jsonb_events LIMIT 2;'
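To make the post-upgrade comparison mechanical, the smoke-test output can be captured to a file now and diffed after the upgrade. A sketch (the helper name and capture file names are illustrative, not part of WHPG):

```shell
# Hypothetical helper: compare a pre-upgrade smoke-test capture with a
# post-upgrade one; succeeds and reports only when the outputs are identical.
smoke_diff() {
  diff -u "$1" "$2" && echo "smoke test output unchanged"
}

# Capture the baseline now, e.g.:
#   psql -x demo -c 'SELECT * FROM public.jsonb_events LIMIT 2;' > pxf_before.out
# and after the upgrade:
#   psql -x demo -c 'SELECT * FROM public.jsonb_events LIMIT 2;' > pxf_after.out
#   smoke_diff pxf_before.out pxf_after.out
```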
Quiescing the database
Perform a final cleanup of the running database instance.
Stop external services: Shut down the PXF service on all nodes and stop the PgBouncer connection pooler.
Stop PXF:
pxf cluster stop
Stop PgBouncer: Connect to the admin console with psql -p 6543 -U pgbouncer -d pgbouncer and run SHUTDOWN;.
Terminate sessions and check queries: Ensure no active user queries remain. Forcefully terminate any remaining client backends.
# View active queries
psql -d postgres -c "SELECT * FROM pg_stat_activity WHERE backend_type = 'client backend' AND state <> 'idle';"
# Forcefully terminate a specific PID
psql -d postgres -c 'SELECT pg_terminate_backend(<pid>);'
Verify segment health: Run gpstate -m. All segments must be in Synchronized status. If not, run gprecoverseg -a and wait for completion.
Stop the WHPG cluster:
gpstop -a -M fast
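The mirror-status check can be turned into a scriptable guard that inspects captured `gpstate -m` output before the cluster is stopped. A sketch, assuming gpstate reports unhealthy mirrors with phrases such as "Acting as Primary" or "Not In Sync" (the exact wording varies by version, and the helper name is illustrative):

```shell
# Hypothetical guard: succeed only when the captured `gpstate -m` output
# contains no mirror in a non-Synchronized state. The matched phrases are an
# assumption about gpstate's wording and may need adjusting for your version.
mirrors_synchronized() {
  ! printf '%s\n' "$1" | grep -Eiq 'acting as primary|not in sync|down'
}
```

You might use it as `mirrors_synchronized "$(gpstate -m)" || gprecoverseg -a` before issuing gpstop.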
Replacing binaries and restarting WHPG
Perform the following steps as the gpadmin user. When upgrading, the package manager will automatically remove old files in /usr/local/greenplum-db-<version> and install new files to /usr/edb/whpg7.
On the coordinator, back up old WHPG binaries:
sudo cp -R /usr/local/greenplum-db-7.origin-WHPG /usr/local/greenplum-db-7.origin-WHPG_save
sudo chown -R gpadmin /usr/local/greenplum-db-7.origin-WHPG_save
Download your 7.target package. See Downloading the WarehousePG Server Software for details on how to download the packages. From your jump host, copy the new packages to the coordinator:
scp <warehouse-pg-7.target.rpm> gpadmin@coordinator_host:/tmp
scp <warehouse-pg-7.target-clients.rpm> gpadmin@coordinator_host:/tmp
Where <warehouse-pg-7.target.rpm> is the package of the WarehousePG version to install.
On the coordinator, create a file hostfile_without_coord that lists all hosts in the cluster except the coordinator. For example:
scdw
sdw1
sdw2
sdw3
From the coordinator, create a directory for the binaries on all hosts and transfer the new package:
gpssh -f hostfile_without_coord -e "mkdir -p /tmp/binary"
gpsync -f hostfile_without_coord <warehouse-pg-7.target.rpm> =:/tmp/binary
Install WHPG 7.target on the coordinator:
Important
Starting with WarehousePG 7.4, the warehouse-pg-7-clients package is mandatory and must be installed alongside the server package.
For RHEL 8, RHEL 9:
sudo rpm -U <warehouse-pg-7.target.rpm> -v
sudo chown -R gpadmin:gpadmin /usr/edb/whpg7
For RHEL 7:
sudo yum install -y <warehouse-pg-7.target.rpm>
sudo chown -R gpadmin:gpadmin /usr/edb/whpg7
Install WHPG 7.target on segments: From the coordinator, run:
For RHEL 8, RHEL 9:
source /home/gpadmin/.bashrc
gpssh -f hostfile_without_coord -e "sudo rpm -U /tmp/binary/<warehouse-pg-7.target.rpm> -v"
gpssh -f hostfile_without_coord -e "sudo chown -R gpadmin:gpadmin /usr/edb/whpg7"
For RHEL 7:
source /home/gpadmin/.bashrc
gpssh -f hostfile_without_coord -e "sudo yum install -y /tmp/binary/<warehouse-pg-7.target.rpm>"
gpssh -f hostfile_without_coord -e "sudo chown -R gpadmin:gpadmin /usr/edb/whpg7"
Verify symbolic links: Ensure /usr/local/greenplum-db and /usr/local/greenplum-db-clients point to /usr/edb/whpg7 across the cluster, and that gpadmin is the owner of the installation directories:
ls -l /usr/local/ | grep 'greenplum-db ->' | sort
gpssh -f hostfile_without_coord -e "ls -l /usr/local/ | grep 'greenplum-db ->' | sort"
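The link check can also be expressed as an explicit assertion with `readlink`, which avoids eyeballing directory listings on every host. A minimal sketch (the helper name is illustrative):

```shell
# Hypothetical helper: assert that a symlink resolves to the expected target
# directory, e.g.:
#   link_points_to /usr/local/greenplum-db /usr/edb/whpg7
link_points_to() {
  [ "$(readlink -f "$1")" = "$(readlink -f "$2")" ]
}
```

You would run it for both /usr/local/greenplum-db and /usr/local/greenplum-db-clients, locally and via gpssh on the remaining hosts.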
Restoring services and finalizing the upgrade
Start the WHPG cluster on the target version. After the cluster is back online, restart the extension services and restore the operational environment.
Update environment: Ensure /home/gpadmin/.bashrc on the coordinator and standby points to the new path, then reload:
source /home/gpadmin/.bashrc
Start WHPG:
gpstart -a
Perform system health check: Run gpstate -f and gpstate -m to ensure all segments are up and mirrors are synchronized.
Verify the installed version:
psql -d postgres -c "select version();"
psql -d postgres -c "select distinct version() from gp_dist_random('gp_id');"
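The distributed version query should return exactly one distinct row; more than one means some host is still running the old binaries. That check can be scripted over the captured query output. A sketch (the helper name is illustrative):

```shell
# Hypothetical check: given the newline-separated output of
#   psql -At -c "select distinct version() from gp_dist_random('gp_id');"
# succeed only when every segment reports the same single version string.
single_version() {
  [ "$(printf '%s\n' "$1" | sort -u | grep -c .)" -eq 1 ]
}
```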
Restoring automation and connectivity
Once the core database services are back online, re-establish the secondary layers to return the cluster to a full operational state.
Restore crontabs: Reload the saved files for gpadmin on coordinator and standby hosts. For example:
crontab -u gpadmin /home/gpadmin/crontab_backup_YYYY-MM-DD
Start PXF and re-run the smoke test query:
pxf cluster start
psql -x demo -c 'SELECT * FROM public.jsonb_events LIMIT 2;'
Start PgBouncer: Restart the connection pooler and verify client connections.
pgbouncer -d /data/gpdata/pgbouncer/pgbouncer.ini
psql -p 6543 -U pgbouncer -d pgbouncer -c "SHOW CLIENTS;"
Handing back the system
With the upgrade verified and the infrastructure services restored, the final phase involves transitioning the environment back to the end-users and resuming normal business operations.
- Applications: Re-enable application traffic and BI tools.
- Schedules: Re-enable external schedulers.
- Monitoring: Resume all monitoring agents.
- Handover: Formally notify the Infrastructure or Platform teams that the cluster is stable on version 7.target.