Upgrading PostgreSQL

1 Background

I have been using the PostgreSQL command line a lot lately, so I went looking for something like mycli and found pgcli, a PostgreSQL-specific equivalent. I eagerly installed it:

brew install pgcli

What I did not expect was that this would also upgrade PostgreSQL automatically: I had been running 9.5.7, and it was bumped to 9.6.3.

Checking the upgrade log pg_upgrade_server.log:

pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....FATAL:  database files are incompatible with server
DETAIL:  The data directory was initialized by PostgreSQL version 9.5, which is not compatible with this version 9.6.3.
 stopped waiting
pg_ctl: could not start server
Examine the log output.

So data migration it is. Before migrating, it helps to understand a few things.

For PostgreSQL, a minor-version upgrade, say 9.6.1 to 9.6.3 (the latest release at the time of writing), only requires replacing the 9.6.1 binaries with the 9.6.3 ones; no extra steps are needed, because all releases within one major version are mutually compatible, including the on-disk storage format. A major-version upgrade such as 9.5.7 to 9.6.3 is a different story: simply swapping the binaries does not work, because the internal storage format can change between major versions.
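The rule above can be sketched as a simple version check (a minimal illustration using the versions from this article; note that in the pre-10 numbering scheme the "major version" is the first two components, e.g. 9.5 vs 9.6, while from PostgreSQL 10 onward it is just the first number):

```shell
# Minor releases within one major version (9.6.x) share the same on-disk
# format; different major versions (9.5 vs 9.6) do not, so pg_upgrade
# (or dump/restore) is required. The versions below are from this article.
old=9.5.7
new=9.6.3
old_major=${old%.*}   # strips the patch component -> "9.5"
new_major=${new%.*}   # -> "9.6"
if [ "$old_major" = "$new_major" ]; then
    echo "minor upgrade: replacing the binaries is enough"
else
    echo "major upgrade: data migration (pg_upgrade) is needed"
fi
```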

The official documentation describes three upgrade methods (a pg_dumpall dump and restore, pg_upgrade, and upgrading via replication); I used pg_upgrade for this migration. See the official documentation for the details.

Before going any further, stop the PostgreSQL service:

brew services stop postgresql

2 Migrating the data

2.1 Back up the data

mv /usr/local/var/postgres /usr/local/var/postgres.old

2.2 Initialize a new cluster with the new version

initdb /usr/local/var/postgres -E utf8 --locale=zh_CN.UTF-8

Output:

The files belonging to this database system will be owned by user "allen".
This user must also own the server process.

The database cluster will be initialized with locale "zh_CN.UTF-8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".

Data page checksums are disabled.

creating directory /usr/local/var/postgres ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /usr/local/var/postgres -l logfile start

2.3 Migrate the old data

The basic pg_upgrade syntax is:

pg_upgrade -b <old-version-bin-dir> -B <new-version-bin-dir> -d <old-data-dir> -D <new-data-dir> [options...]

Then run the migration command:

pg_upgrade -b /usr/local/Cellar/postgresql@9.5/9.5.7/bin -B /usr/local/bin  -d /usr/local/var/postgres.old -D /usr/local/var/postgres  

Output:

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for roles starting with 'pg_'                      ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows on the new cluster                        ok
Deleting files from new pg_clog                             ok
Copying old pg_clog to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

The final message points to two generated scripts: analyze_new_cluster.sh, which should be run against the new cluster to collect optimizer statistics, and delete_old_cluster.sh, which deletes the old cluster's data files. To be safe, you can wait until the system has run without problems for a few days before deleting.
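A cautious way to run that deletion later is to gate it on the new server actually answering queries (a sketch, not part of pg_upgrade itself; the script name comes from the run above, and the guard commands are assumptions about your environment):

```shell
# delete_old_cluster.sh is essentially an "rm -rf" of the old data
# directory, so there is no undo. Only run it once the new server has
# proven itself.
OLD_DATADIR=/usr/local/var/postgres.old
if command -v psql >/dev/null 2>&1 && psql -d postgres -c 'SELECT 1;' >/dev/null 2>&1; then
    if [ -x ./delete_old_cluster.sh ]; then
        ./delete_old_cluster.sh
    else
        echo "delete_old_cluster.sh not found in the current directory"
    fi
else
    echo "new server not verified yet; keeping $OLD_DATADIR"
fi
```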

Run the analyze_new_cluster.sh script:

./analyze_new_cluster.sh

Output:

This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy.  When it is done, your system will
have the default level of optimizer statistics.

If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.

If you would like default statistics as quickly as possible, cancel
this script and run:
    "/usr/local/bin/vacuumdb" --all --analyze-only

vacuumdb: processing database "activity_tool": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "allen": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "cw": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "djexample": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "finance": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "iop": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "learn": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "servicemall": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "store": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "test_store": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "uio": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "vbdev": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "vbdevelop": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "xiuzan": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "xzdevelop": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "activity_tool": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "allen": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "cw": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "djexample": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "finance": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "iop": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "learn": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "servicemall": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "store": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "test_store": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "uio": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "vbdev": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "vbdevelop": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "xiuzan": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "xzdevelop": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "activity_tool": Generating default (full) optimizer statistics
vacuumdb: processing database "allen": Generating default (full) optimizer statistics
vacuumdb: processing database "cw": Generating default (full) optimizer statistics
vacuumdb: processing database "djexample": Generating default (full) optimizer statistics
vacuumdb: processing database "finance": Generating default (full) optimizer statistics
vacuumdb: processing database "iop": Generating default (full) optimizer statistics
vacuumdb: processing database "learn": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "servicemall": Generating default (full) optimizer statistics
vacuumdb: processing database "store": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics
vacuumdb: processing database "test_store": Generating default (full) optimizer statistics
vacuumdb: processing database "uio": Generating default (full) optimizer statistics
vacuumdb: processing database "vbdev": Generating default (full) optimizer statistics
vacuumdb: processing database "vbdevelop": Generating default (full) optimizer statistics
vacuumdb: processing database "xiuzan": Generating default (full) optimizer statistics
vacuumdb: processing database "xzdevelop": Generating default (full) optimizer statistics

Done

3 Restart and enjoy

brew services start postgresql

Check the service status:

brew services list

Output:

Name           Status  User  Plist
postgresql     started allen /Users/allen/Library/LaunchAgents/homebrew.mxcl.postgresql.plist
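As a final sanity check, you can confirm the server now reports the new version (a sketch guarded so it degrades gracefully when psql is not on PATH; `postgres` is the default maintenance database):

```shell
# Ask the running server for its version string; after this upgrade it
# should report PostgreSQL 9.6.3.
if command -v psql >/dev/null 2>&1; then
    psql -d postgres -c 'SELECT version();' || echo "could not connect to the server"
else
    echo "psql not found on PATH"
fi
```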

And with that, pgcli is ready to enjoy.

Author: 黑月亮