PostgreSQL internals: Things to know about update statements

Original link: https://patrick.engineering/posts/postgres-internals/

A recent article discusses how PostgreSQL executes update statements, covering concepts such as write amplification, lost updates, and deadlocks. The author explains how mechanisms such as TOAST and HOT updates influence write amplification and the amount of data written to disk. The article also stresses the importance of avoiding lost updates and notes that while deadlocks are less common, they remain a potential problem for database administrators. Overall, the piece emphasizes that developers need to consider how these behaviors play out in PostgreSQL's particular implementation rather than assuming their effects from theory alone. As always, careful attention to such nuances can greatly improve an application's efficiency and reliability. The author also offers alternative solutions for certain scenarios, such as SELECT FOR UPDATE or retrying transactions that occasionally fail. Finally, the author recommends the book PostgreSQL 14 Internals as an excellent resource for anyone who wants a deeper understanding of these topics.

This article highlights the "UPDATE … LIMIT" syntax as a simpler alternative for handling concurrent access problems in databases. However, the author notes that some RDBMSs, such as PostgreSQL, already cover this use case with the SELECT FOR UPDATE command, and that some ORM tools offer only limited support for comparable features. Ultimately, these approaches help eliminate problems such as "lost updates" and ensure data consistency even when transactions are executed simultaneously by multiple users. In general, the problem can be solved by applying pessimistic or optimistic concurrency control, depending on the constraints and requirements at hand. Optimistic concurrency can work well where occasional imprecision is tolerable, such as follower counts. Pessimistic locking, including the SELECT FOR UPDATE approach, suits critical data. Whichever approach is chosen, the trade-offs between performance, resilience, complexity, and data correctness must be recognized and acknowledged. Finally, it is crucial to prioritize scalability and fault tolerance throughout database architecture and design.

As for the claim that SELECT ... LOCK keywords are absent from languages popular with web developers, such as PHP, Ruby, Python, and Java: that is true. These keywords are typically available through database administration interfaces, including console clients such as psql or graphical front ends such as pgAdmin with a direct SQL interface. Unfortunately, the vast majority of database-backed applications built on popular web frameworks rely heavily on object-relational mappers (ORMs), which hide the details of the actual underlying database operations. While this approach helps insulate applications from schema- and operation-specific implementation differences, it can also lead to inefficiencies or performance bottlenecks. The author therefore suggests considering low-level, direct, native database client libraries where feasible, especially for distributed or real-time systems; the benefits outweigh the modest learning curve. Overall, the author encourages developers to become familiar with their specific relational database backend rather than relying primarily on ORM abstractions. By understanding the actual SQL statements and the concepts behind database primitives, developers can significantly improve the overall performance and correctness of their applications.

Original article

I recently finished reading PostgreSQL 14 Internals by Egor Rogov. It is one of the great books that teach applied computer science without watering down the difficult parts.

One particularly interesting topic is PostgreSQL’s execution of UPDATE statements. Updates in relational databases are inherently complex. They can be affected by conflicts, anomalies, and deadlocks. However, some behavior is specific to PostgreSQL. This can lead to surprises.

1. Write Amplification

PostgreSQL generally does not update rows in-place. This is a design decision in PostgreSQL’s implementation of multi-version concurrency control (MVCC). The design can lead to write amplification: a small update can lead to a larger amount of data being written to disk.

Consider a table that stores user profiles in a social networking service:

CREATE TABLE IF NOT EXISTS user_profile (
    id UUID PRIMARY KEY, -- the deadlock example below assumes an index on id
    last_active TIMESTAMPTZ,
    description TEXT
);

Each column has a different size:

  • id takes 16 bytes
  • last_active takes 8 bytes
  • description has a variable length, which could be hundreds of bytes.

If the description column takes 200 bytes on average, the whole row is about 224 bytes, while the logical change when updating last_active is only the 8-byte timestamp. The write amplification factor is therefore approximately 28: for each logical byte that needs to be updated, 28 bytes are written to disk. This is because PostgreSQL writes the whole updated row to a different location on disk. Considering row metadata, the real write amplification factor is even higher.
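To ground the estimate in real data, pg_column_size reports the on-disk size of a value, or of a whole row when applied to the row expression. A minimal sketch against a sample of rows (the alias up is arbitrary):

-- Inspect actual row and column sizes for a few rows.
SELECT pg_column_size(up.*)           AS row_bytes,
       pg_column_size(up.description) AS description_bytes
FROM user_profile AS up
LIMIT 10;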

Write amplification can lead to increased disk I/O and wasted storage space. There are more sources of write amplification:

  • Indexes may have to be updated. This can be the case even if the updated column is not indexed.
  • PostgreSQL writes all changes to a write-ahead log (WAL) to ensure durability.

If disk I/O is approaching limits, it may be worth investigating row sizes, update statements, and unused indexes.
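The cumulative statistics views are a good starting point for that investigation. A sketch; both views are standard PostgreSQL, and interpreting the numbers is left to the reader:

-- Indexes that have not been scanned since the last statistics reset
SELECT relname, indexrelname
FROM pg_stat_user_indexes
WHERE idx_scan = 0;

-- How many updates per table were HOT (heap-only tuple) updates,
-- which avoid index maintenance
SELECT relname, n_tup_upd, n_tup_hot_upd
FROM pg_stat_user_tables
WHERE n_tup_upd > 0;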

2. Lost Updates

The SQL standard mandates that lost updates be prevented at any isolation level. Nonetheless, it is trivial to simulate a lost update in PostgreSQL. Consider an additional followers_count column and two transactions (Read Committed) that perform the following steps:

  1. BEGIN
  2. Read the row (e.g., SELECT followers_count AS curr FROM user_profile WHERE id = '...').
  3. Compute a new value in application logic (e.g., curr + 1).
  4. Write the new value to the database (e.g., UPDATE user_profile SET followers_count = <curr + 1> WHERE id = '...').
  5. COMMIT

In step 2, two concurrent transactions may read the original followers_count value. This causes a lost update because the count is effectively only incremented once.
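A concrete interleaving at Read Committed, assuming the count starts at 300:

-- Session 1
BEGIN;
SELECT followers_count AS curr FROM user_profile WHERE id = '...';
-- => 300

-- Session 2
BEGIN;
SELECT followers_count AS curr FROM user_profile WHERE id = '...';
-- => 300 (the same original value)

-- Session 1
UPDATE user_profile SET followers_count = 301 WHERE id = '...';
COMMIT;

-- Session 2
UPDATE user_profile SET followers_count = 301 WHERE id = '...';
COMMIT;
-- The final value is 301, not 302: one increment is lost.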

This behavior may well be interpreted as a violation of the SQL standard. However, the application logic which calculates the new value is not considered part of the transaction. The database merely stores the new value as requested. Still, the behavior is clearly an anomaly: it cannot occur if the transactions run sequentially.

When using a higher isolation level, PostgreSQL fully prevents lost updates. Consider two concurrent transactions executing the following statements using the Repeatable Read isolation level:

BEGIN;
SELECT followers_count FROM user_profile WHERE id = '...';
-- Returns 300. Compute new value in application: 300 + 1 = 301
UPDATE user_profile SET followers_count = 301 WHERE id = '...';

Now, only one of the transactions can commit.

-- Session 1
COMMIT;
-- => COMMIT

-- Session 2
-- => ERROR:  could not serialize access due to concurrent update

The application can handle this error by retrying the transaction. A simpler solution using the weaker Read Committed level is to let the database compute the new value:

UPDATE user_profile
SET followers_count = followers_count + 1
WHERE id = '...';

This statement prevents lost updates at the Read Committed isolation level. The second transaction waits for the first transaction to abort or commit, and then proceeds with execution. Therefore, no error is raised, and the overhead of retrying transactions can be avoided.
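When the new value must be computed in the application, pessimistic locking with SELECT ... FOR UPDATE prevents the anomaly at Read Committed as well. A minimal sketch:

BEGIN;
SELECT followers_count FROM user_profile WHERE id = '...' FOR UPDATE;
-- The row lock blocks concurrent writers until this transaction ends.
-- Compute the new value in the application, e.g., 300 + 1 = 301.
UPDATE user_profile SET followers_count = 301 WHERE id = '...';
COMMIT;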

3. Deadlocks

Deadlocks are another source of confusion in PostgreSQL. In theory, deadlocks are often explained using the four required conditions: mutual exclusion, hold and wait, no preemption, and circular wait. This would be the case if

  • Transaction A updates Row 1, then wants to update Row 2.
  • Transaction B updates Row 2, then wants to update Row 1.

Until one of the transactions aborts, there is a circular dependency and hence a deadlock. The textbook scenario requires two update statements in each transaction, but in reverse order. Surprisingly, deadlocks can also occur with two transactions that each contain only a single update statement. The reason is that row locks are acquired in a non-deterministic order. Both cases are illustrated below.
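First, the textbook two-statement scenario as a session transcript. This is a minimal sketch; :id1 and :id2 are psql-style placeholders for two distinct rows.

-- Session 1
BEGIN;
UPDATE user_profile SET last_active = NOW() WHERE id = :id1;

-- Session 2
BEGIN;
UPDATE user_profile SET last_active = NOW() WHERE id = :id2;

-- Session 1 (blocks, waiting for the row lock held by Session 2)
UPDATE user_profile SET last_active = NOW() WHERE id = :id2;

-- Session 2 (closes the cycle; PostgreSQL aborts one of the transactions)
UPDATE user_profile SET last_active = NOW() WHERE id = :id1;
-- => ERROR:  deadlock detected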

Now, consider the following single-statement transactions:

-- Session 1
UPDATE user_profile
SET followers_count = 0
WHERE followers_count IS NULL;

-- Session 2
UPDATE user_profile
SET last_active = NOW()
WHERE id = ANY (/* list of ids */);

The first transaction runs a sequential scan and updates all rows that match the WHERE clause. Affected rows are locked in the order of the sequential scan. The second transaction can use the index on id to find the rows to update. An index scan might access affected rows in a different order. Hence, it is possible that a circular dependency between locks arises.
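EXPLAIN shows which plan, and therefore which likely locking order, each statement uses. A quick check (plan output abbreviated):

EXPLAIN (COSTS OFF)
UPDATE user_profile SET followers_count = 0 WHERE followers_count IS NULL;
-- => Update on user_profile
-- =>   ->  Seq Scan on user_profile
-- =>         Filter: (followers_count IS NULL)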

This particular scenario is most likely to occur when updating a large number of rows in a single transaction. Updating rows in smaller batches can help to avoid deadlocks. Batching can also fix performance issues caused by holding locks for an unnecessarily long time. If performance is not an issue, it may be enough to retry transactions that have been aborted because of a deadlock.
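A common batching pattern selects a limited set of ids in a deterministic order and repeats until no rows remain. A sketch; the batch size of 1000 is arbitrary:

UPDATE user_profile
SET followers_count = 0
WHERE id IN (
    SELECT id
    FROM user_profile
    WHERE followers_count IS NULL
    ORDER BY id  -- a deterministic order makes lock ordering more predictable
    LIMIT 1000
);
-- Repeat in a loop until the statement reports "UPDATE 0".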

Conclusion

Many engineers have a good theoretical understanding of isolation levels, anomalies, and deadlocks. However, PostgreSQL—like any other database—is a specific implementation with its own quirks.

PostgreSQL 14 Internals offers many more insights beyond the execution of update statements. The book is a valuable resource for anyone working with PostgreSQL. It covers data storage, query execution, caching, and indexing in great detail. I recommend it to anyone who uses PostgreSQL in production.


Discuss this post on Hacker News.

Liked this post? Get in touch!