1 Scenario

Lately I have been spending a lot of time on the postgresql command line, so I went looking for something like mycli and found pgcli, which is the equivalent tool for postgresql. I eagerly installed it:

brew install pgcli

What I did not expect was that it automatically upgraded postgresql for me: I had been running 9.5.7, and it bumped me up to 9.6.3.

Checking the upgrade log pg_upgrade_server.log:

pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....FATAL:  database files are incompatible with server
DETAIL:  The data directory was initialized by PostgreSQL version 9.5, which is not compatible with this version 9.6.3.
 stopped waiting
pg_ctl: could not start server
Examine the log output.

So a data migration is in order. Before migrating, there are a few things worth understanding.

For PostgreSQL, a minor-version upgrade, say 9.6.1 to 9.6.3 (the latest release at the time of writing), only requires replacing the 9.6.1 binaries with the 9.6.3 ones; no extra steps are needed, because releases within the same major version are mutually compatible and use the same internal storage format. A major-version upgrade such as 9.5.7 to 9.6.3 is different: simply swapping the binaries will not work, because the internal storage format changes across major versions.
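
A quick way to see the mismatch for yourself is to compare the version of the installed binaries with the major version recorded in the data directory (paths assume the default Homebrew layout):

psql --version                          # version of the installed binaries, e.g. psql (PostgreSQL) 9.6.3
cat /usr/local/var/postgres/PG_VERSION  # major version that initialized the data directory, e.g. 9.5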

The official documentation offers three upgrade methods; for this migration I used pg_upgrade. See the official documentation for the details.
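
For reference, one of the other official routes is a plain dump-and-restore with pg_dumpall; a rough sketch (not the approach taken here, and the file name is only an illustration) would be:

pg_dumpall > /tmp/pg95_backup.sql          # dump all databases while the old server is still running
# stop the old server, initdb a new cluster with the new version, start it, then:
psql -d postgres -f /tmp/pg95_backup.sql   # restore everything into the new cluster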

Before moving on to the next step, shut down the postgresql service with the following command:

brew services stop postgresql
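
If you want to double-check that nothing is still running against the old data directory, pg_ctl can tell you (the data directory path assumes the default Homebrew location):

pg_ctl -D /usr/local/var/postgres status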

2 How to migrate the data

2.1 Back up the data

mv /usr/local/var/postgres /usr/local/var/postgres.old

2.2 Initialize a new database cluster with the new version

initdb /usr/local/var/postgres -E utf8 --locale=zh_CN.UTF-8

Output:

The files belonging to this database system will be owned by user "allen".
This user must also own the server process.

The database cluster will be initialized with locale "zh_CN.UTF-8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".

Data page checksums are disabled.

creating directory /usr/local/var/postgres ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /usr/local/var/postgres -l logfile start

2.3 Migrate the old data

The basic pg_upgrade syntax is:

pg_upgrade -b <old version bin dir> -B <new version bin dir> -d <old version data dir> -D <new version data dir> [other options...]
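
pg_upgrade also accepts --check, which only verifies that the two clusters are compatible without changing anything; a dry run with the same paths used below would look like this:

pg_upgrade --check -b /usr/local/Cellar/postgresql@9.5/9.5.7/bin -B /usr/local/bin -d /usr/local/var/postgres.old -D /usr/local/var/postgres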

Then run the migration command:

pg_upgrade -b /usr/local/Cellar/postgresql@9.5/9.5.7/bin -B /usr/local/bin  -d /usr/local/var/postgres.old -D /usr/local/var/postgres  

Output:

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for roles starting with 'pg_'                      ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows on the new cluster                        ok
Deleting files from new pg_clog                             ok
Copying old pg_clog to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

At the end it tells you that two scripts were generated: analyze_new_cluster.sh, which should be run against the new cluster to collect optimizer statistics, and delete_old_cluster.sh, which deletes the old cluster's data files. To be safe, you can wait until the system has been running without problems for a few days before deleting the old data.

Run the analyze_new_cluster.sh script:

./analyze_new_cluster.sh

The output is as follows:

This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy.  When it is done, your system will
have the default level of optimizer statistics.

If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.

If you would like default statistics as quickly as possible, cancel
this script and run:
    "/usr/local/bin/vacuumdb" --all --analyze-only

vacuumdb: processing database "activity_tool": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "allen": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "cw": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "djexample": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "finance": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "iop": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "learn": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "servicemall": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "store": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "test_store": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "uio": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "vbdev": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "vbdevelop": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "xiuzan": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "xzdevelop": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "activity_tool": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "allen": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "cw": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "djexample": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "finance": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "iop": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "learn": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "servicemall": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "store": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "test_store": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "uio": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "vbdev": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "vbdevelop": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "xiuzan": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "xzdevelop": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "activity_tool": Generating default (full) optimizer statistics
vacuumdb: processing database "allen": Generating default (full) optimizer statistics
vacuumdb: processing database "cw": Generating default (full) optimizer statistics
vacuumdb: processing database "djexample": Generating default (full) optimizer statistics
vacuumdb: processing database "finance": Generating default (full) optimizer statistics
vacuumdb: processing database "iop": Generating default (full) optimizer statistics
vacuumdb: processing database "learn": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "servicemall": Generating default (full) optimizer statistics
vacuumdb: processing database "store": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics
vacuumdb: processing database "test_store": Generating default (full) optimizer statistics
vacuumdb: processing database "uio": Generating default (full) optimizer statistics
vacuumdb: processing database "vbdev": Generating default (full) optimizer statistics
vacuumdb: processing database "vbdevelop": Generating default (full) optimizer statistics
vacuumdb: processing database "xiuzan": Generating default (full) optimizer statistics
vacuumdb: processing database "xzdevelop": Generating default (full) optimizer statistics

Done

3 Restart and enjoy

brew services start postgresql

Check the service status:

brew services list

Output:

Name           Status  User  Plist
postgresql     started allen /Users/allen/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
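
To confirm that the running cluster is now on the new major version, you can also query it directly (assuming psql is on your PATH):

psql -d postgres -c "SELECT version();"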

Now you can happily use pgcli.
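
For example, connecting to a local database with pgcli looks like this (the user name and database name here are only illustrations):

pgcli -h localhost -U allen postgres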
