Category: Servers & Storage

2014-12-08 11:51:40

Description

pg_dump is a utility for backing up a PostgreSQL database. It makes consistent backups even if the database is being used concurrently. pg_dump does not block other users accessing the database (readers or writers).
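As a rough sketch of how such a backup might be scripted (not part of the original post), the Python snippet below drives pg_dump with subprocess; the database name "mydb", the output location, and the assumption that credentials come from the environment are all placeholders:

    import subprocess
    from datetime import datetime

    def backup_database(dbname: str, out_dir: str = ".") -> str:
        """Run pg_dump against a live database and return the dump file path."""
        stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        out_file = f"{out_dir}/{dbname}_{stamp}.dump"
        # -F c: custom archive format (compressed, restorable with pg_restore)
        subprocess.run(
            ["pg_dump", "-F", "c", "-f", out_file, dbname],
            check=True,  # raise if pg_dump exits with a non-zero status
        )
        return out_file

    if __name__ == "__main__":
        # "mydb" is a placeholder; credentials come from the environment (PGUSER, PGPASSWORD, ...)
        print(backup_database("mydb"))

Because pg_dump takes its own consistent snapshot, the script needs no extra locking; applications can keep reading and writing while it runs.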



The question being answered: I have a 3GB database that is constantly modified and I need to make backups without stopping the server (Postgres 8.3).
My pg_dump run takes 5 minutes. What if the data is modified during the process? Do I get consistent backups? I don't want to find out when disaster strikes.
The Postgres documentation doesn't say anything about this.


From the manual:
It makes consistent backups even if the database is being used concurrently.
So yes, you can trust the backup. Of course, it's PostgreSQL; you can trust your data in PostgreSQL.


pg_dump starts a transaction, similar to the way any other long-running query works. The consistency guarantee comes from PostgreSQL's MVCC implementation; the dump will always be self-consistent within those rules.
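As a rough illustration of that snapshot behaviour (this is not pg_dump's actual code), the sketch below opens a read-only REPEATABLE READ transaction with psycopg2; the connection string and the "orders" table are assumptions made up for the example:

    import psycopg2

    DSN = "dbname=mydb"  # placeholder connection string

    with psycopg2.connect(DSN) as conn:
        # A read-only REPEATABLE READ transaction sees one snapshot for its whole
        # lifetime, which is roughly the guarantee the dump relies on.
        conn.set_session(isolation_level="REPEATABLE READ", readonly=True)
        with conn.cursor() as cur:
            cur.execute("SELECT count(*) FROM orders")
            first = cur.fetchone()[0]
            # ... other sessions may INSERT/UPDATE/DELETE rows here ...
            cur.execute("SELECT count(*) FROM orders")
            second = cur.fetchone()[0]
            assert first == second  # same snapshot, so the two counts agree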
All the fuzzy parts of MVCC are around things like the order in which UPDATE transactions become visible to other clients and how locks are acquired. pg_dump is strict about the ordering and acquires a read lock on the whole database while dumping it. For most people, that is exactly what they expect, and the mechanism used never causes any trouble. The main concurrency risk is that clients trying to change the database structure will be blocked while the dump is running; that does not affect the quality of the dump, though.
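To see that blocking behaviour concretely, here is a small made-up two-session sketch (again with psycopg2 and the placeholder "mydb" database and "orders" table): the reader's SELECT holds an ACCESS SHARE lock until its transaction ends, so an ALTER TABLE from another session, which needs ACCESS EXCLUSIVE, simply queues behind it:

    import threading
    import time
    import psycopg2

    DSN = "dbname=mydb"  # placeholder connection string

    def try_ddl():
        # Second session: ALTER TABLE needs ACCESS EXCLUSIVE, so it waits
        # behind the reader's ACCESS SHARE lock.
        with psycopg2.connect(DSN) as ddl_conn:
            with ddl_conn.cursor() as cur:
                cur.execute("ALTER TABLE orders ADD COLUMN note text")

    with psycopg2.connect(DSN) as reader:
        reader.set_session(isolation_level="REPEATABLE READ", readonly=True)
        with reader.cursor() as cur:
            cur.execute("SELECT count(*) FROM orders")  # holds ACCESS SHARE until commit
            ddl_thread = threading.Thread(target=try_ddl)
            ddl_thread.start()
            time.sleep(2)
            print("DDL still waiting:", ddl_thread.is_alive())  # expected: True
    # leaving the with-block commits the reader's transaction and releases its lock
    ddl_thread.join()  # the ALTER TABLE can now complete

Ordinary INSERT/UPDATE/DELETE traffic is not affected; only schema changes queue up, and they proceed as soon as the dump finishes.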
