It's OK to block ads (2015)

Original link: https://blog.practicalethics.ox.ac.uk/2015/10/why-its-ok-to-block-ads/

## Attention and the Ethics of Ad Blocking

The recent debate over ad blocking points to a larger ethical issue: the "attention economy." This system prioritizes capturing user attention to maximize profit, often through manipulative design that exploits psychological biases. While arguments against ad blocking focus on its potential to harm an ad-supported internet, proponents argue that it pushes advertising practices to improve and protects users' data and time.

The core problem, however, is not merely annoying ads. Even *with* an ad blocker, users remain subject to design optimized for engagement rather than wellbeing: platforms prioritize metrics such as "time on site" over users' own goals. The current digital environment rewards designs that compete for our limited attention, potentially undermining reflection and autonomous decision-making.

At present there is little incentive to put user *intent* above attention capture, and few options exist for supporting content creators directly. Ad blocking is therefore not just a way to avoid ads; it is a form of resistance against a system that commodifies our attention, and a demand for tools genuinely designed to serve *our* needs. The question is not whether ad blocking is ethical, but whether it is a moral imperative.

A Hacker News discussion centered on the justifiability of blocking ads, citing this 2015 article on the subject. Users broadly supported ad blocking, citing privacy concerns and annoyance. One employee of an ad-tech company described how their firm responds to being blocked: by submitting their domains to blocklists rather than trying to circumvent them.

Many commenters expressed disbelief that anyone would *want* targeted ads, finding the idea unsettling; relevance, they argued, should come from the site's context, not from personal tracking. One key takeaway was that there is no need to overthink the question: blocking ads is acceptable simply because they are intrusive.

Some users clarified that they do not block ads as such, but rather "unaccountable third-party ad networks" and the security risks they introduce by allowing arbitrary code execution. The overall sentiment strongly favored privacy and opposed tracking.

Original article

Over the past couple of months, the practice of ad blocking has received heightened ethical scrutiny. (1,2,3,4)

If you’re unfamiliar with the term, “ad blocking” refers to software—usually web browser plug-ins, but increasingly mobile apps—that stops most ads from appearing when you use websites or apps that would otherwise show them.
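As a rough illustration of what such software does under the hood, here is a minimal sketch of domain-based request blocking against a blocklist. This is a simplified assumption of the general technique, not how any particular blocker works; real tools such as uBlock Origin use much richer filter syntax (e.g. EasyList rules), and the domains below are hypothetical:

```python
# Minimal sketch of domain-based ad blocking (illustrative only;
# real ad blockers use rich filter lists such as EasyList).
from urllib.parse import urlparse

# Hypothetical blocklist of known ad-serving domains.
BLOCKLIST = {"ads.example.com", "tracker.example.net"}

def is_blocked(request_url: str) -> bool:
    """Return True if the request's host, or any parent domain of it,
    appears on the blocklist."""
    host = urlparse(request_url).hostname or ""
    parts = host.split(".")
    # Check "sub.ads.example.com", "ads.example.com", "example.com", "com".
    return any(".".join(parts[i:]) in BLOCKLIST for i in range(len(parts)))

# A browser extension would simply cancel outgoing requests for which
# is_blocked(...) is True, so the ad resources never load.
```

The parent-domain check matters because ad networks commonly serve from many subdomains of one blocklisted domain.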

Arguments against ad blocking tend to focus on the potential economic harms. Because advertising is the dominant business model on the internet, if everyone used ad-blocking software then wouldn’t it all collapse? If you don’t see (or, in some cases, click on) ads, aren’t you getting the services you currently think of as “free”—actually for free? By using ad-blocking, aren’t you violating an agreement you have with online service providers to let them show you ads in exchange for their services? Isn’t ad blocking, as the industry magazine AdAge has called it, “robbery, plain and simple”?

In response, defenders of ad blocking tend to counter with arguments that ads are often “annoying,” and that blocking them is a way to force advertising to get better. Besides, they say, users who block ads wouldn’t have bought the advertisers’ products anyway. Many users also object to having data about their browsing and other behavioral habits tracked by advertising companies. Some also choose to block ads in hopes of speeding up page load times or reducing their overall data usage.

What I find remarkable is the way both sides of this debate seem to simply assume the large-scale capture and exploitation of human attention to be ethical and/or inevitable in the first place. This demonstrates how utterly we have all failed to understand the role of attention in the digital age—as well as the implications of spending most of our lives in an environment designed to compete for it.

In the 1970s, Herbert Simon pointed out that when information becomes abundant, attention becomes the scarce resource. In the digital age, we’re living through the pendulum swing of that reversal—yet we consistently overlook its implications.

Think about it: the attention you’re deploying in order to read this sentence right now (an attention for which, by the way, I am grateful)—an attention that includes, among other things, the saccades of your eyeballs, the information flows of your executive control function, your daily stockpile of willpower, and the goals you hope reading this blog post will help you achieve—these and other processes you use to navigate your life are literally the object of competition among most of the technologies you use every day. There are literally billions of dollars being spent to figure out how to get you to look at one thing over another; to buy one thing over another; to care about one thing over another. This is the way we are now monetizing most of the information in the world.

The large-scale effort that has emerged to capture and exploit your attention as efficiently as possible is often referred to as the “attention economy.” In the attention economy, winning means getting as many people as possible to spend as much time and attention as possible with your product or service. (Although, as it’s often said, in the attention economy “the user is the product.”) Because there’s so much competition for people’s attention, this inevitably means you have to appeal to the impulsive parts of people’s brains and exploit the catalog of irrational biases that psychologists and behavioral economists have been diligently compiling over the last few decades. (In fact, there’s a burgeoning industry of authors and consultants helping designers draw on the latest research in behavioral science to exploit these vulnerabilities as effectively and as reliably as possible.)

We experience the externalities of the attention economy in little drips, so we tend to describe them with words of mild bemusement like “annoying” or “distracting.” But this is a grave misreading of their nature. In the short term, distractions can keep us from doing the things we want to do. In the longer term, however, they can accumulate and keep us from living the lives we want to live, or, even worse, undermine our capacities for reflection and self-regulation, making it harder, in the words of Harry Frankfurt, to “want what we want to want.” Thus there are deep ethical implications lurking here for freedom, wellbeing, and even the integrity of the self.

Design ethics in the digital age has almost totally focused on how technologies manage our information—think privacy, surveillance, censorship, etc.—largely because our conceptual tool sets emerged in environments where information was the scarce and valuable thing. But far less analysis has focused on the way our technologies manage our attention, and it’s long past time to forge new ethical tools for this brave new world.

It’s important to note that the essential question here is not whether we as users are being manipulated by design. That is precisely what design is. The question is whether or not the design is on our side.

Think about the websites, apps, or communications platforms you use most. What behavioral metric do you think they’re trying to maximize in their design of your attentional environment? I mean, what do you think is actually on the dashboards in their weekly product design meetings?

Whatever metric you think they’re nudging you toward—how do you know? Wouldn’t you like to know? Why shouldn’t you know? Isn’t there an entire realm of transparency and corporate responsibility going undemanded here?

I’ll give you a hint, though: it’s probably not any of the goals you have for yourself. Your goals are things like “spend more time with the kids,” “learn to play the zither,” “lose twenty pounds by summer,” “finish my degree,” etc. Your time is scarce, and you know it.

Your technologies, on the other hand, are trying to maximize goals like “Time on Site,” “Number of Video Views,” “Number of Pageviews,” and so on. Hence clickbait, hence auto-playing videos, hence avalanches of notifications. Your time is scarce, and your technologies know it.

But these design goals are petty and perverse. They don’t recognize our humanity because they don’t bother to ask about it in the first place. In fact, these goals often clash with the mission statements and marketing claims that technology companies craft for themselves.

These petty and perverse goals exist largely because they serve the goals of advertising. Most advertising incentivizes design that optimizes for our attention rather than our intentions. (Where advertising does respect & support user intent, it’s arguable whether “advertising” is even the right thing to call it.) And because digital interfaces are far more malleable (by virtue of their basis in software) than “traditional” media such as TV and radio ever were, digital environments can be bent more fully to the design logic of advertising. Before software, advertising was always the exception to the rule—but now, in the digital world, advertising has become the rule.

I often hear people say, “I use AdBlock, so the ads don’t affect me at all.” How head-smackingly wrong they are. (I know, because I used to say this myself.) If you use products and services whose fundamental design logic is rooted in maximizing advertising performance—that is to say, in getting you to spend as much of your precious time and attention using the product as possible—then even if you don’t see the ads, you still see the ad for the ad (i.e. the product itself). You still get design that exploits your non-rational psychological biases in ways that work against you. You still get the flypaper even if you don’t get the swatter. A product or service does not magically redesign itself around your goals just because you block it from reaching its own.

So if you wanted to cast a vote against the attention economy, how would you do it?

There is no paid version of Facebook. Most websites don’t give you the option to pay them directly. Meaningful governmental regulation is unlikely. And the “attention economy” can’t fix itself: players in the ecosystem don’t even measure the things they’d need to measure in order to monetize our intentions rather than our attention. Ultimately, the ethical challenge of the attention economy is not one of individual actors but rather the system as a whole (a perspective Luciano Floridi has termed “infraethics”).

In reality, ad blockers are one of the few tools that we as users have if we want to push back against the perverse design logic that has cannibalized the soul of the Web.

If enough of us used ad blockers, it could help force a systemic shift away from the attention economy altogether—and the ultimate benefit to our lives would not just be “better ads.” It would be better products: better informational environments that are fundamentally designed to be on our side, to respect our increasingly scarce attention, and to help us navigate under the stars of our own goals and values. Isn’t that what technology is for?

Given all this, the question should not be whether ad blocking is ethical, but whether it is a moral obligation. The burden of proof falls squarely on advertising to justify its intrusions into users’ attentional spaces—not on users to justify exercising their freedom of attention.
