Upgrading PostgreSQL

1 Background

I had been using the PostgreSQL command line a lot recently, so I went looking for something like mycli and found pgcli, its PostgreSQL counterpart. I eagerly installed it:

brew install pgcli

To my surprise, this quietly upgraded PostgreSQL as well: I had been running 9.5.7, and it was bumped to 9.6.3.

Checking the upgrade log pg_upgrade_server.log:

pg_ctl: another server might be running; trying to start server anyway
waiting for server to start....FATAL:  database files are incompatible with server
DETAIL:  The data directory was initialized by PostgreSQL version 9.5, which is not compatible with this version 9.6.3.
 stopped waiting
pg_ctl: could not start server
Examine the log output.

So a data migration it is. Before migrating, a few things are worth understanding.

In PostgreSQL, a minor-version upgrade, say 9.6.1 to 9.6.3 (the latest release at the time), only requires replacing the 9.6.1 binaries with the 9.6.3 ones. No extra steps are needed, because all releases within one major version are mutually compatible, including their on-disk storage format. A major-version upgrade such as 9.5.7 to 9.6.3, however, cannot be done by simply swapping binaries, because the internal storage format can change between major versions. (For pre-10 releases, the major version is the first two components of the version number: 9.5 and 9.6 are different major versions.)
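The minor/major distinction above can be sketched in shell. This is only an illustration (the pg_major helper is hypothetical): for pre-10 releases the major version is the first two components of the version string, and only a change there forces a migration.

```shell
# For pre-10 PostgreSQL releases the "major version" is the first two
# components of the version string (9.5, 9.6, ...).
pg_major() {
  echo "$1" | cut -d. -f1-2
}

if [ "$(pg_major 9.5.7)" = "$(pg_major 9.6.3)" ]; then
  echo "minor upgrade: just replace the binaries"
else
  echo "major upgrade: on-disk format may differ, migrate with pg_upgrade"
fi
# prints: major upgrade: on-disk format may differ, migrate with pg_upgrade
```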

The official documentation describes three upgrade methods; I used pg_upgrade for this migration. See the documentation for the details.

Before going any further, stop the PostgreSQL service:

brew services stop postgresql
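Before touching the data directory, a defensive check that nothing still has it open can't hurt. A sketch, assuming the default Homebrew data directory location: a running server keeps a postmaster.pid file in its data directory and removes it on clean shutdown.

```shell
# postmaster.pid exists while a server has the data directory open
# and is removed again on a clean shutdown.
pg_running() {
  [ -f "$1/postmaster.pid" ]
}

PGDATA=/usr/local/var/postgres   # default Homebrew location (assumption)
if pg_running "$PGDATA"; then
  echo "server still running; stop it before migrating"
else
  echo "data directory is quiescent"
fi
```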

2 Migrating the data

2.1 Back up the data

mv /usr/local/var/postgres /usr/local/var/postgres.old
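The rename above is the whole safety net here; a slightly more cautious sketch (backup_cluster is a hypothetical helper) also keeps a compressed archive of the old cluster, so the data survives even if a later cleanup step misfires:

```shell
# Rename the cluster out of the way, then keep a tarball as a second copy.
backup_cluster() {
  src=$1
  mv "$src" "$src.old" &&
  tar -czf "$src.old.tar.gz" -C "$(dirname "$src")" "$(basename "$src").old"
}

# backup_cluster /usr/local/var/postgres
```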

2.2 Initialize a new cluster with the new version

initdb /usr/local/var/postgres -E utf8 --locale=zh_CN.UTF-8

Output:

The files belonging to this database system will be owned by user "allen".
This user must also own the server process.

The database cluster will be initialized with locale "zh_CN.UTF-8".
initdb: could not find suitable text search configuration for locale "zh_CN.UTF-8"
The default text search configuration will be set to "simple".

Data page checksums are disabled.

creating directory /usr/local/var/postgres ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    pg_ctl -D /usr/local/var/postgres -l logfile start

2.3 Migrate the old data

The basic pg_upgrade syntax is:

pg_upgrade -b <old bin dir> -B <new bin dir> -d <old data dir> -D <new data dir> [options...]

Then run the migration:

pg_upgrade -b /usr/local/Cellar/postgresql@9.5/9.5.7/bin -B /usr/local/bin  -d /usr/local/var/postgres.old -D /usr/local/var/postgres  
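pg_upgrade also accepts a --check flag that reports incompatibilities without modifying either cluster, which makes a useful dry run before the real migration. A sketch of a cautious wrapper (run_upgrade, OLD_BIN and NEW_BIN are illustrative names; the paths are the same as in the command above):

```shell
OLD_BIN=/usr/local/Cellar/postgresql@9.5/9.5.7/bin
NEW_BIN=/usr/local/bin

run_upgrade() {
  old_data=$1; new_data=$2
  [ -d "$old_data" ] || { echo "missing old cluster: $old_data"; return 1; }
  [ -d "$new_data" ] || { echo "missing new cluster: $new_data"; return 1; }
  # dry run first: --check reports problems without changing any data
  pg_upgrade --check -b "$OLD_BIN" -B "$NEW_BIN" -d "$old_data" -D "$new_data" &&
  pg_upgrade -b "$OLD_BIN" -B "$NEW_BIN" -d "$old_data" -D "$new_data"
}

# run_upgrade /usr/local/var/postgres.old /usr/local/var/postgres
```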

Output:

Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for reg* system OID user data types                ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for roles starting with 'pg_'                      ok
Creating dump of global objects                             ok
Creating dump of database schemas
                                                            ok
Checking for presence of required libraries                 ok
Checking database user is the install user                  ok
Checking for prepared transactions                          ok

If pg_upgrade fails after this point, you must re-initdb the
new cluster before continuing.

Performing Upgrade
------------------
Analyzing all rows in the new cluster                       ok
Freezing all rows on the new cluster                        ok
Deleting files from new pg_clog                             ok
Copying old pg_clog to new server                           ok
Setting next transaction ID and epoch for new cluster       ok
Deleting files from new pg_multixact/offsets                ok
Copying old pg_multixact/offsets to new server              ok
Deleting files from new pg_multixact/members                ok
Copying old pg_multixact/members to new server              ok
Setting next multixact ID and offset for new cluster        ok
Resetting WAL archives                                      ok
Setting frozenxid and minmxid counters in new cluster       ok
Restoring global objects in the new cluster                 ok
Restoring database schemas in the new cluster
                                                            ok
Copying user relation files
                                                            ok
Setting next OID for new cluster                            ok
Sync data directory to disk                                 ok
Creating script to analyze new cluster                      ok
Creating script to delete old cluster                       ok

Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
    ./analyze_new_cluster.sh

Running this script will delete the old cluster's data files:
    ./delete_old_cluster.sh

At the end, two scripts are generated: analyze_new_cluster.sh, to be run against the new cluster to collect optimizer statistics, and delete_old_cluster.sh, which deletes the old cluster's data files. To be safe, you can wait until the system has run without problems for a few days before deleting anything.

Now run the analyze_new_cluster.sh script:

./analyze_new_cluster.sh

The output:

This script will generate minimal optimizer statistics rapidly
so your system is usable, and then gather statistics twice more
with increasing accuracy.  When it is done, your system will
have the default level of optimizer statistics.

If you have used ALTER TABLE to modify the statistics target for
any tables, you might want to remove them and restore them after
running this script because they will delay fast statistics generation.

If you would like default statistics as quickly as possible, cancel
this script and run:
    "/usr/local/bin/vacuumdb" --all --analyze-only

vacuumdb: processing database "activity_tool": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "allen": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "cw": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "djexample": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "finance": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "iop": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "learn": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "postgres": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "servicemall": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "store": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "template1": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "test_store": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "uio": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "vbdev": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "vbdevelop": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "xiuzan": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "xzdevelop": Generating minimal optimizer statistics (1 target)
vacuumdb: processing database "activity_tool": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "allen": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "cw": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "djexample": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "finance": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "iop": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "learn": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "postgres": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "servicemall": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "store": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "template1": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "test_store": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "uio": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "vbdev": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "vbdevelop": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "xiuzan": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "xzdevelop": Generating medium optimizer statistics (10 targets)
vacuumdb: processing database "activity_tool": Generating default (full) optimizer statistics
vacuumdb: processing database "allen": Generating default (full) optimizer statistics
vacuumdb: processing database "cw": Generating default (full) optimizer statistics
vacuumdb: processing database "djexample": Generating default (full) optimizer statistics
vacuumdb: processing database "finance": Generating default (full) optimizer statistics
vacuumdb: processing database "iop": Generating default (full) optimizer statistics
vacuumdb: processing database "learn": Generating default (full) optimizer statistics
vacuumdb: processing database "postgres": Generating default (full) optimizer statistics
vacuumdb: processing database "servicemall": Generating default (full) optimizer statistics
vacuumdb: processing database "store": Generating default (full) optimizer statistics
vacuumdb: processing database "template1": Generating default (full) optimizer statistics
vacuumdb: processing database "test_store": Generating default (full) optimizer statistics
vacuumdb: processing database "uio": Generating default (full) optimizer statistics
vacuumdb: processing database "vbdev": Generating default (full) optimizer statistics
vacuumdb: processing database "vbdevelop": Generating default (full) optimizer statistics
vacuumdb: processing database "xiuzan": Generating default (full) optimizer statistics
vacuumdb: processing database "xzdevelop": Generating default (full) optimizer statistics

Done

3 Restart and enjoy

brew services start postgresql

Check the service status:

brew services list

Output:

Name           Status  User  Plist
postgresql     started allen /Users/allen/Library/LaunchAgents/homebrew.mxcl.postgresql.plist

From here on, pgcli works happily.

青阳半雪