Nature's many attempts to evolve a Nostr

Original link: https://newsletter.squishy.computer/p/natures-many-attempts-to-evolve-a

## The Problem with Modern Apps, and a Potential Solution

Today's application architecture is fundamentally centralized: a handful of large servers control user data, accounts, and, crucially, the cryptographic keys used to secure them, in effect a system of digital feudalism. This centralization leads to control, rent extraction, and eventually stagnation, mirroring the problems visible today in app stores and even email.

Attempts to fix this through **federation** (as in Mastodon), where servers talk to one another, end up reproducing oligopoly thanks to network effects and scaling challenges. Likewise, **peer-to-peer (P2P)** networks promise user ownership but inevitably drift toward centralization through the "superpeers" that handle the network's load. The core problem is that *all* networks tend to centralize as they scale.

A new architecture called the **relay** (exemplified by the Nostr protocol), however, offers a different approach. Relays are simple, untrusted servers that act as dumb pipes for users' signed messages. Users own their keys and their data, and choose multiple relays for redundancy and to avoid single points of failure. This sidesteps the scaling problems, leverages existing infrastructure, and accepts that servers are inevitable *without* granting them control. Relays effectively cut to the chase: they build the system that would emerge naturally anyway, but do so deliberately.

## Nostr: A New Approach to Social Networking (Summary)

The article discusses Nostr, a recently developed social networking protocol designed for simplicity and decentralization. Unlike traditional platforms, Nostr relies on "relays" (essentially data stores) that users connect to instead of a centralized server. Users can choose multiple relays, which strengthens their autonomy, though the practical number appears limited.

The discussion focuses on whether Nostr is genuinely decentralized, raising concerns about centralization forming around popular relays and about the lack of incentives for relay operators. The "outbox model" is proposed as one solution, letting messages be pushed to multiple relays to ensure redundancy and discoverability.

Critics point out that similar concepts have existed before and question Nostr's practicality for mainstream users, citing key management and potentially unfiltered content. Others argue that its simplicity and cryptographic verification offer unique advantages. Ultimately, the debate comes down to whether Nostr can overcome the challenges inherent in decentralized systems and offer a viable alternative to the incumbent social media giants.

## Original text

Here is the architecture of a typical app: a big centralized server in the cloud supporting many clients. The web works this way. So do apps.

This architecture grants the server total control over users. The server owns your data, owns your account, and owns the cryptographic keys used to secure it.

That last bit is obscure, but important. Cryptographic keys are how we enforce security, privacy, ownership, and control in software. Not your keys, not your data.

The architecture of apps is fundamentally feudal. Apps own the keys and use them to erect a cryptographic wall around the hoard of data us peasants produce. You “sign in” to cross the drawbridge, and the castle can pull up the drawbridge at any time, shutting you out.

"Centralization" is the state of affairs where a single entity or a small group of them can observe, capture, control, or extract rent from the operation or use of an Internet function exclusively.
(RFC 9518: Centralization, Decentralization, and Internet Standards)

Powerful network effects build up inside those castle walls. These network effects can be leveraged to generate further centralization, extract rents, and shut down competition.

We are seeing the consequences of this centralized architecture play out today, as platforms like the App Store enter their late-stage phase. When growth slows, the kings of big castles become bad emperors.

The Internet has succeeded in no small part because of its purposeful avoidance of any single controlling entity.
(RFC 9518: Centralization, Decentralization, and Internet Standards)

So, apps are centralized. How might we fix this? Well, the first thing we could do is bridge the gap between apps.

This is called federation. Users talk to the server, and servers talk to each other, trading messages so you can talk to users on other servers. Now you have the benefit of choice: which castle do you want to live in?

Email works this way. So do Mastodon and Matrix. My email is @gmail.com, yours @protonmail.com. We live on different domains, use different apps run by different companies, yet we can freely email each other.

The great thing about federation is that it’s easy to implement. It’s just an ordinary client-server architecture with a protocol bolted onto the back. We don’t have to build exotic technology, just exapt existing infrastructure. That’s why Mastodon, for example, is just an ordinary Ruby on Rails app.

But there’s a wrinkle…

Why does this happen? Well, networks centralize over time, converging toward an exponential distribution of size, power, wealth. This centralization is inevitable. You see it on the web, in social networks, airline routes, power grids, trains, banks, Bitcoin mining, protein interactions, ecological food webs, neural networks, and oligarchies. Network theory tells us why:

  • Preferential attachment: more connections means more network effect means more connections, leading to the emergence of densely-connected hub nodes.

  • N^2 scaling: if every fed has to talk to every other fed to exchange messages, the number of connections scales quadratically with the number of nodes (n * (n - 1)). This leads to the emergence of large hubs that aggregate and relay world state.

  • Fitness pressure: Small nodes get taken down by large spikes in traffic, while large nodes stick around. Small nodes have fewer resources, large nodes have lots. Unreliable nodes attract fewer connections, while reliable nodes attract connections just by virtue of staying alive.

  • Efficiency: exponentially-distributed networks are ultra-small worlds. You can get from anywhere to anywhere in just a few hops through hubs.

  • Resilience: exponential networks survive random failures, because a randomly failing node is overwhelmingly likely to come from the long tail of small nodes.
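The preferential-attachment dynamic behind these bullets is easy to simulate. The sketch below (a minimal Barabási–Albert-style simulation, my own illustration rather than anything from the article) grows a network one node at a time, linking each newcomer to an existing node with probability proportional to its degree, and shows a hub emerging whose degree dwarfs the average:

```python
import random

def grow_network(n_nodes, rng):
    """Grow a graph by preferential attachment: each new node links to an
    existing node chosen with probability proportional to its degree."""
    degrees = [1, 1]      # start with two nodes joined by one edge
    # Each edge contributes both endpoints to this list, so sampling it
    # uniformly is exactly a degree-proportional pick.
    endpoints = [0, 1]
    for new_node in range(2, n_nodes):
        target = rng.choice(endpoints)
        degrees.append(1)
        degrees[target] += 1
        endpoints.extend([new_node, target])
    return degrees

rng = random.Random(42)
degrees = grow_network(10_000, rng)
average = sum(degrees) / len(degrees)
print(f"average degree: {average:.2f}")
print(f"largest hub degree: {max(degrees)}")
```

The average degree stays around 2 (each new node adds one edge), while the biggest hub ends up orders of magnitude more connected: the rich get richer.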

This is called the scale-free property, and it emerges in all evolving networks. Federated networks are no exception. Take email for example:

Email is not distributed anymore. You just cannot create another first-class node of this network.

Email is now an oligopoly, a service gatekept by a few big companies which does not follow the principles of net neutrality.

I have been self-hosting my email since I got my first broadband connection at home in 1999. I absolutely loved having a personal web+email server at home, paid extra for a static IP and a real router so people could connect from the outside. I felt like a first-class citizen of the Internet and I learned so much.

Over time I realized that residential IP blocks were banned on most servers. I moved my email server to a VPS. No luck. I quickly understood that self-hosting email was a lost cause. Nevertheless, I have been fighting back out of pure spite, obstinacy, and activism. In other words, because it was the right thing to do.

But my emails are just not delivered anymore. I might as well not have an email server.

(After self-hosting my email for twenty-three years I have thrown in the towel, Carlos Fenollosa, 2022)

We can see the outlines of a similar consolidation beginning to emerge in the Fediverse. In 2023, Facebook Threads implemented ActivityPub and it instantly became the largest node in the Fediverse. This made some people angry and led to demands for defederation. But Threads is already over 10x larger than the rest of the Fediverse. Defederation is hardly an effective blockade. The network has consolidated. Network science strikes again.

At scale, federated systems experience many of the same problems as centralized apps. That’s because feds are still feudal. They own your data, they own your account, they own your keys.

Large feds occupy a strategically central location in the network topology, and they have powerful influence over the rest of the network. They can leverage their network effect to pull up the drawbridge, by inventing new features that don’t federate, or cutting off contact with other feds.

So, federated networks become oligopolies. We can choose our server, as long as it’s blessed by the oligopoly. Still, an oligopoly is better than a dictatorship, email better than Facebook. But can we do even better?

Ok, forget servers. What if we could connect to each other directly? This is called peer-to-peer networking.

In a P2P network, each participant runs a peer that can find other peers and send them messages. Users own their keys, and use them to sign, verify, and encrypt messages. This is great! We have all the ingredients for credible exit and minimal user agency.

However, P2P presents some tricky engineering challenges. There is no central source of truth, so various peers will have different points of view of the network state. That means we need to design for eventual consistency and the ability to merge potentially conflicting states. Other things, like timestamps, are also hard. Decentralized protocols are hard! All of this is headwind compared to ordinary app engineering.

We also run into some practical networking challenges. We no longer have centralized servers, so many requests take several hops, from peer-to-peer-to-peer, to get to their destination.

Also, peers are unreliable. They are bandwidth-constrained and blink in and out of existence. Close your laptop, your peer disappears. This adds a cost to peer discovery. You dial a previously available peer, but it’s gone now, so you dial another, and another. Unreliable peers plus multiple hops equals long delays, and occasionally, the inability to reach portions of the network.
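The cost of dialing flaky peers can be made concrete with a toy simulation (the `uptime` values are illustrative assumptions, not measurements). Each dial succeeds with probability equal to a peer's availability, so low-availability peers force many redials:

```python
import random

def mean_dial_attempts(uptime, trials, rng):
    """Average number of dials before some peer answers, where each dial
    succeeds independently with probability `uptime`."""
    total = 0
    for _ in range(trials):
        attempts = 1
        while rng.random() >= uptime:   # peer offline, dial the next one
            attempts += 1
        total += attempts
    return total / trials

rng = random.Random(7)
laptop_peers = mean_dial_attempts(uptime=0.2, trials=1_000, rng=rng)
superpeers = mean_dial_attempts(uptime=0.99, trials=1_000, rng=rng)
print(f"flaky peers: {laptop_peers:.1f} dials on average")
print(f"superpeers:  {superpeers:.2f} dials on average")
```

Laptops that are online 20% of the time cost about five dials per lookup; a 99%-available superpeer answers on essentially the first dial. That gap is the fitness pressure doing its work.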

The same evolutionary pressures that apply to other networks apply to P2P networks, and some of them, like fitness pressure on reliability, are exaggerated by peer availability. This leads to the evolution of superpeers: high-bandwidth, high-availability peers whose job is to serve other peers on the network.

Peer-to-Peer (P2P) networks have grown to such a massive scale that performing an efficient search in the network is non-trivial. Systems such as Gnutella were initially plagued with scalability problems as the number of users grew into the tens of thousands. As the number of users has now climbed into the millions, system designers have resorted to the use of supernodes to address scalability issues and to perform more efficient searches.
(Hadaller, Regan, Russell, 2012. The Necessity of Supernodes)

Instead of connecting directly, we connect to one of the high-bandwidth, high-availability superpeers. Peer discovery is no longer a problem, and everything is just one or two hops away… an ultra-small world.

Wait… That just sounds like centralization with extra steps!

Like feds, superpeers occupy a strategically central location in the network topology, and have powerful influence over the rest of the network. Our P2P network has converged toward an exponential distribution. Network science strikes again.

Well, but on a P2P network we do own our keys, and this is a big improvement. Trustless protocols are better than trustful ones, and by owning our keys we have the foundations for minimal user agency.

Still, we’ve done a lot of hard engineering to support a flat P2P network that will never exist in the end. Is there a simpler way?

Let’s start at the end and work backwards.

  • All networks require large servers at scale

  • Not your keys, not your data

Can we design a distributed architecture that admits these two facts? What might such an architecture look like?

Take some ordinary, off-the-shelf servers. Treat them as dumb, untrusted pipes. Their job is just to relay information. They don’t own the keys—you own your keys. You sign messages with your key, then post them to one or more relays. Other users follow one or more relays. When they get a message, they use your key to verify you sent it. That’s it!
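As a rough sketch of the verification story, here is how a Nostr client computes an event id per NIP-01: the sha256 hash of a compact JSON serialization of the event fields. The pubkey and content below are placeholders; the actual signature, a BIP-340 Schnorr signature over secp256k1 of this id, needs a library such as `secp256k1` and is omitted here:

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    """NIP-01 event id: sha256 of the compact JSON array
    [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),   # no whitespace, per the spec
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

event_id = nostr_event_id(
    pubkey="e" * 64,            # placeholder 32-byte hex public key
    created_at=1_700_000_000,
    kind=1,                     # kind 1 is a short text note
    tags=[],
    content="hello from a relay-based network",
)
print(event_id)                 # 64-character hex digest
```

Because the id is derived deterministically from the event's contents and the signature covers the id, any reader holding your public key can verify a note fetched from *any* relay, trusted or not.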

This is the Nostr protocol. I want to claim that Nostr has discovered a new fundamental architecture for distributed protocols. Not federated, not P2P… Relay.

Relays cut to the chase:

  • Relays are simple. They use boring technology, like plain old servers. You benefit from all of the tailwinds of traditional app development.

  • Relays take advantage of economies of scale. Big dumb servers in the cloud have high availability and high uptime, and they’re commodity infrastructure.

  • Relays sidestep the N^2 scaling problem: Relays don’t talk to each other, and users only need to join a small number of relays to gain autonomy—at least two, and certainly fewer than a dozen. We never really hit the scale where the N^2 scaling problem matters.

  • Relays support user-ownership. You own your data, your account, and most importantly, your keys. Relays are large, but they aren’t in charge. If a relay goes down or shuts you down, no problem! Your account doesn’t change, and your data is already mirrored to other relays. Credible exit!
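The credible-exit point in the last bullet can be illustrated with a toy model (the class and function names here are hypothetical, not part of the Nostr protocol): treat each relay as a dumb store, mirror every event to all of them, and note that any single surviving relay is enough to recover your data:

```python
class Relay:
    """A dumb, untrusted store: it holds signed events and nothing more."""
    def __init__(self):
        self.events = {}
        self.online = True

    def publish(self, event):
        if self.online:
            self.events[event["id"]] = event

    def fetch(self, event_id):
        return self.events.get(event_id) if self.online else None

def publish_to_all(relays, event):
    """Mirror every event to every relay the user has chosen."""
    for relay in relays:
        relay.publish(event)

def fetch_from_any(relays, event_id):
    """Any single surviving relay is enough to recover the event."""
    for relay in relays:
        event = relay.fetch(event_id)
        if event is not None:
            return event
    return None

relays = [Relay(), Relay(), Relay()]
note = {"id": "abc123", "content": "signed note", "sig": "..."}
publish_to_all(relays, note)

relays[0].online = False           # one relay shuts down, or shuts you out
recovered = fetch_from_any(relays, "abc123")
print(recovered is not None)       # prints True: account and data survive
```

Losing a relay changes nothing about your identity: your key, and therefore your account, lives with you, not with any server.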

…Most importantly, relays are what you would get in the end anyway. It’s fewer steps for the same result.
