Message Queues: A Simple Guide with Analogies (2024)

Original link: https://www.cloudamqp.com/blog/message-queues-exaplined-with-analogies.html

## Message Queues: Explained Simply

Message queues facilitate communication between systems, acting as intermediaries for data transfer. Think of them as **post offices**: data ("packages") is briefly stored, sorted, and delivered, unlike a **database** (a "warehouse"), which is designed for long-term data *storage*.

A message queue receives data (called "messages") from "producers" (systems that send information) and delivers it to "consumers" (systems that receive information). The process is *asynchronous*, meaning producers do not need to wait for a response, which improves efficiency. Queues communicate over protocols such as AMQP, MQTT, or STOMP.

This asynchronous nature is especially useful in **microservice architectures**, where an application is broken into smaller, independent services. Unlike direct requests (synchronous communication), which can overload a service, message queues buffer data, letting each service work through its load at its own pace. This improves reliability and scalability, isolates failures, and allows individual services to scale independently. In essence, message queues enable data flow, making them a powerful tool for building robust and responsive applications.

## Hacker News Discussion: Message Queues - A Simple Guide

A CloudAMQP article offering an introductory look at message queues sparked a lively discussion on Hacker News. Many found it a good starting point for beginners, but some commenters noted its simplicity and called for a deeper treatment of practical considerations.

One key question was *when* to move from direct HTTP requests to a queue, with suggestions to examine metrics such as traffic intensity (with an analogy to Erlang loss systems) and the trade-offs involved. The discussion also touched on choosing the right queue/exchange type (direct, fanout, etc.) and the difference between queues and streams.

Some users cautioned against overusing queues, stressing their added complexity and debugging challenges compared with plain HTTP requests. They emphasized that queues are not only for microservices but are also valuable for asynchronous processing and offloading tasks. Others advocated durable-execution frameworks as an alternative, especially for complex workflows, while acknowledging the potential latency overhead. Ultimately, the conversation underscored the value of revisiting fundamental concepts and of explaining common technical jargon along the way.

## Original Article

I find stories and analogies fascinating, so to explain message queues in a super approachable way, we will use some analogies: databases, warehouses, and post offices.

Stay with me …

Databases are primarily used for data persistence — think Postgres or MongoDB. Like databases, message queues also perform some storage function. But why use message queues for data storage when there are databases? Think of databases and message queues in terms of warehouses and post offices.

  • Databases are like warehouses: they are designed to hold a lot of different things, usually over a long period of time.
  • Message queues, on the other hand, are like post offices: letters and packages stop there briefly on their way to being delivered. The packages don't stay long; they're just sorted and sent off to where they need to go.

Essentially, databases are designed for scenarios where you need to store and manage some state over a long period of time. In contrast, you would use a message queue for data that you do not want to keep around for very long: a message queue holds information just long enough to send it to the next stop.

If you look at message queues from this post office perspective, you will begin to appreciate that a message queue is simply a medium through which data flows from a source system to a destination system.

Looking at message queues as a medium of communication is just one perspective, but it's sufficient to help you get started with message queues. Let's double down on that perspective.

A message queue is a technology that receives data (formally called messages in the message-queueing world) from one or more source systems (producers), lines up these messages in the order they arrive, and then sends each message to some final destination, usually another system called the consumer.
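The behavior described above can be sketched with a minimal in-memory first-in, first-out buffer. This is an illustrative toy, not a real broker: the `produce` and `consume` names are invented for this sketch, and a real message queue would add durability, acknowledgements, and network transport.

```python
from collections import deque

# The queue itself: messages are lined up in the order they arrive (FIFO).
queue = deque()

def produce(message):
    """Producer side: append a message to the back of the queue."""
    queue.append(message)

def consume():
    """Consumer side: take the oldest message from the front of the queue."""
    return queue.popleft()

produce("order-created")
produce("order-paid")

print(consume())  # "order-created" comes out first: first in, first out
print(consume())  # then "order-paid"
```

Note that the producer and consumer never talk to each other directly; the queue in the middle is their only shared point of contact.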

Note that both the producer and consumer could also just be modules in the same application.

Now that we understand the core essence of message queues, let’s explore how they work.

How a Message Queue Works

Typically, producers and consumers would connect and communicate with a message queue via some protocol that the message queue supports.

In other words, a message queue would implement a protocol or some set of protocols. To communicate with a message queue, a producer or consumer would leverage some client library that also implements the protocol or one of the protocols supported by the broker.

Most message brokers implement at least one of these protocols: AMQP, MQTT, and STOMP. You can learn more about these protocols in our AMQP vs MQTT guide or the AMQP, MQTT and STOMP guide.

When to Use a Message Queue

We’ve already seen how message queues allow messages to flow from a source system to a destination system. This inherent nature of message queues makes them perfect for communication between systems in a microservice architecture.

What is the microservice architecture? Again, let's start with something you are familiar with: monoliths.

A monolith is characterized by the entire codebase living inside one application. This is a great approach for smaller projects, and many new applications start out as a monolith, because at a smaller scale monoliths are faster to develop, easier to test, and easier to deploy.

However, as an application grows, this architecture causes more and more problems. Even with a structured approach, the code often starts to feel messy and the development experience becomes inconvenient. Changes become more difficult to implement, and the risk of introducing bugs is higher.

Often, the solution to these problems is to break your monolithic application into microservices: smaller, more modular services that each focus on a single area of responsibility.

The microservice approach has some benefits:

  • With microservices, there is fault isolation: if one service is buggy, that bug is contained to that service alone. This makes your application more reliable than a monolith, where a single component error could take down the entire application.
  • There is also the opportunity to diversify the technology stack from service to service, which lets you optimize each service for its purpose. For example, a performance-critical service can make certain performance trade-offs without constraining the rest of the services.
  • Naturally, scaling becomes much easier, because you can scale a single service instead of the entire application and save a lot of resources.

Now that we understand what microservices are, let's circle back to using message queues for communication between systems in a microservice architecture.

But before we get to that, note that message queueing isn't the only way to get services to communicate. There is one other common way:

Synchronous communication, where network requests are sent directly from one service to another, via REST API calls for example. Service A initiates a request and then waits for Service B to finish handling it and send a response back before continuing with whatever it was doing.

With message queueing, the communication is asynchronous. In this case, Service A sends messages to a message broker and, instead of waiting for Service B, receives a quick acknowledgement back from the broker. It can then carry on with what it was doing while Service B fetches the message from the queue and handles it.

This saves your services from being overloaded by a sudden spike in workload; instead, the messages are buffered by the queue, and your services handle them when they have the capacity.
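The asynchronous pattern above can be sketched with two threads and an in-process buffer. This is a minimal illustration under invented names ("Service A" is the main thread, `service_b` the consumer): the producer publishes a burst of jobs and moves on immediately, while the consumer drains the buffer at its own pace.

```python
import queue
import threading

# The buffer standing in for the message broker's queue.
buffer = queue.Queue()
processed = []

def service_b():
    """Consumer: handle messages whenever capacity allows."""
    while True:
        message = buffer.get()
        if message is None:  # sentinel value: stop consuming
            break
        processed.append(f"handled {message}")
        buffer.task_done()

consumer = threading.Thread(target=service_b)
consumer.start()

# Producer ("Service A"): a sudden burst of work is absorbed by the
# queue; the producer does not wait for each message to be handled.
for i in range(5):
    buffer.put(f"job-{i}")

buffer.put(None)   # signal shutdown
consumer.join()
print(processed)   # all five jobs handled, in arrival order
```

Because the queue sits between the two sides, the producer's burst of five puts returns almost instantly, and the consumer works through the backlog in order on its own schedule.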

There you have it, a very gentle introduction to message queues. Now, let’s do a recap.

Conclusion

In summary, message queues are like post offices for your data, moving messages from one place to another. They work by receiving messages from producers, lining them up in the order they arrive, and sending them to consumers. This makes them perfect for situations where systems need to communicate without waiting on each other; think microservice architectures.

Understanding how message queues work and when to use them can help you build more reliable and scalable applications.
