Every company building your AI assistant is now an ad company

Original link: https://juno-labs.com/blogs/every-company-building-your-ai-assistant-is-an-ad-company

## The looming ad-driven AI future, and the edge-computing way out

Large AI companies such as OpenAI (which now shows ads in ChatGPT) increasingly depend on advertising revenue, while also building always-on devices -- cameras, microphones, wearables -- designed to monitor our lives continuously. This sets up a dangerous collision: ubiquitous surveillance, driven by data collection for targeted advertising.

Today's voice assistants require a "wake word," which limits their usefulness as truly *proactive* assistants. The future demands AI that understands context through continuous observation, but that data is extremely sensitive. Relying on cloud processing means trusting many parties with this information, despite promises of encryption and anonymization -- policies that can easily change.

The solution? **Local, on-device AI inference.** Recent advances mean powerful AI pipelines can now run locally, without sending data to the cloud. This architectural shift guarantees privacy, because no data ever leaves the user's network. While local models are *not yet* as capable as cloud models, the focus should be on building strong memory systems and on business models based on hardware and software sales, *not* data exploitation.

Choosing "local" and "edge" computing is essential for building an intimate, useful AI that respects user privacy -- one that knows everything, yet shares nothing.

A recent article (juno-labs.com) argues that companies building AI assistants are gradually becoming advertising businesses. This sparked discussion on Hacker News, where users noted that Anthropic is a key player often overlooked. Concerns were raised about whether "always-on" AI is compatible with data-privacy regulations such as the GDPR. Many commenters advocated aggressive, strict regulation of advertising in AI, drawing parallels to past problems with platforms like Facebook and with cryptocurrency, and worrying that bad practices will become entrenched. While some argued that ads are preferable to restrictive laws, there was widespread distrust of AI companies' promises around data privacy, with many skeptical of data-security assurances. The discussion highlights growing anxiety about the commercialization of AI and its potential impact on user privacy.

Original article

On January 16, OpenAI quietly announced that ChatGPT would begin showing advertisements. By February 9th, ads were live. Eight months earlier, OpenAI spent $6.5 billion to acquire Jony Ive's hardware startup io. They're building a pocket-sized, screenless device with built-in cameras and microphones -- "contextually aware," designed to replace your phone.

But this isn't a post about OpenAI. They're just the latest. The problem is structural.

Every single company building AI assistants (we can quibble about Apple) is now funded by advertising. And every one of them is building hardware designed to see and hear everything around you, all day, every day. These two facts are on a collision course, and local on-device inference is the only way off the track.

The always-on future is inevitable

Before we talk about who's building it, let's be clear about what's being built.

Every mainstream voice assistant today works behind a gate. You say a magic word -- "Hey Siri," "OK Google," "Alexa" -- and only then does the system listen. Everything before the wake word is theoretically discarded.
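The gating the paragraph describes can be sketched in a few lines. This is a hypothetical illustration, not any vendor's actual implementation: audio before the trigger is dropped, and only the window after a wake word reaches the assistant.

```python
# Illustrative sketch of a wake-word gate: everything before the trigger
# phrase is discarded; only the next utterance is forwarded. Names and
# logic here are hypothetical, for illustration only.

WAKE_WORDS = {"hey siri", "ok google", "alexa"}

def gate_transcript(utterances):
    """Yield only the utterances spoken right after a wake word."""
    listening = False
    for text in utterances:
        lowered = text.lower()
        if any(w in lowered for w in WAKE_WORDS):
            listening = True      # gate opens on the trigger phrase
            continue
        if listening:
            yield text            # forwarded to the assistant
            listening = False     # gate closes after one command
        # everything else is (theoretically) discarded

speech = ["are we out of eggs?", "alexa", "add eggs to the list", "anyway..."]
print(list(gate_transcript(speech)))  # → ['add eggs to the list']
```

Note what the gate throws away: the first utterance, which carried the actual context, never reaches the system.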

This was a reasonable design in 2014. It is a dead end for where AI assistance needs to go.

Here's what happens in a real kitchen at 6:30am (anonymized from one of our test homes; the real version was messier and included a toddler screaming about Cheerios):

"Are we out of eggs again? I'm thinking frittata tonight but we also need to -- oh wait, did the school email about Thursday? I think there's an early release. Anyway, if we don't have eggs, I'll get them from Target and also that dish soap, the blue one."

Nobody is going to preface that with a wake word. The information is woven into natural speech between two flustered parents getting the family ready to leave the house. The moment you require a trigger, you lose the most valuable interactions -- the ones that happen while people are living their lives, not thinking of how to give context to an AI assistant.

You cannot build proactive assistance behind a wake word. The AI has to be present in the room, continuously, accumulating context over days and weeks and months, to build the understanding that makes proactive help possible.
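As a rough illustration of "accumulating context over days and weeks," consider a store that logs every observation locally and surfaces topics that recur often enough to act on. The scoring below is a deliberately crude stand-in, not the memory system the post's authors built:

```python
# A minimal sketch of context accumulation without a wake word: every
# observation is stored, and recurring topics build up enough signal
# for proactive help. Purely illustrative.

from collections import Counter
from dataclasses import dataclass, field

@dataclass
class AmbientMemory:
    topic_counts: Counter = field(default_factory=Counter)

    def observe(self, utterance: str) -> None:
        """Log a passively heard utterance; nothing leaves this process."""
        for word in utterance.lower().split():
            self.topic_counts[word.strip(",.?!'")] += 1

    def recurring_topics(self, min_mentions: int = 2) -> list[str]:
        """Topics mentioned often enough to warrant proactive help."""
        return [t for t, n in self.topic_counts.most_common()
                if n >= min_mentions]

memory = AmbientMemory()
memory.observe("Are we out of eggs again?")
memory.observe("If we don't have eggs, I'll get them from Target")
print(memory.recurring_topics())  # "eggs" has recurred; "target" has not
```

The point of the sketch is the accumulation itself: no single utterance is a command, but the pattern across utterances is what makes proactive help possible.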

This is where every major AI company is heading. Not just audio -- vision, presence detection, wearables, multi-room awareness. The next generation of AI assistants will hear and see everything. Some will be on your face or in your ears all day. They will be always on, always sensing, always building a model of your life.

The question is not whether always-on AI will happen. It's who controls the data it collects. And right now, the answer to that question is: advertising companies.

Policy is a promise. Architecture is a guarantee.

Here's where the industry's response gets predictable. "We encrypt the data in transit." "We delete it after processing." "We anonymize everything." "Ads don't influence the AI's answers." "Read our privacy policy."

With cloud processing, every user is trusting:
• The company's current privacy policy
• Every employee with production access
• Every third-party vendor in the processing pipeline
• Every government that can issue a subpoena or national security letter
• Every advertiser partnership that hasn't been announced yet
• The company's future privacy policy

OpenAI's own ad announcement includes this language: "OpenAI keeps conversations with ChatGPT private from advertisers, and never sells data to advertisers." It sounds reassuring. But Google scanned every Gmail for ad targeting for thirteen years before quietly stopping in 2017. Policies change. Architectures don't.

Policy is a promise. Architecture is a guarantee.

When a device processes data locally, the data physically cannot leave the network. There is no API endpoint to call. There is no telemetry pipeline. There is no "anonymized usage data" that somehow still contains enough signal to be useful for ad targeting. The inference hardware sits inside the device or in the user's home, on their network.

Your email is sensitive. A continuous audio and visual feed of your home is something else entirely. It captures arguments, breakdowns, medical conversations, financial discussions, intimate moments, parenting at its worst, the completely unguarded version of people that exists only when they believe nobody is watching. (We wrote a deep dive on our memory system in Building Memory for an Always-On AI That Listens to Your Kitchen.)

Amazon already showed us what happens. They eliminated local voice processing. They planned to feed Alexa conversations to advertisers. They partnered Ring with a surveillance network that had federal law enforcement access. What happens when those same economic incentives are applied to devices that capture everything?

The edge inference stack is ready

The counterargument is always the same: "Local models aren't good enough." Three years ago, that was true. It is no longer true.

You can run a complete ambient AI pipeline today -- real-time speech-to-text, semantic memory, conversational reasoning, and text-to-speech -- on a device that fits next to a cable box (remember those?). No fan noise. A one-time hardware purchase with no per-query fee and no data leaving the building. New model architectures, better compression, and open-source inference engines have converged to make this possible, and the silicon roadmap points in one direction: more capability per watt, every year. (We've been running always-on prototypes in five homes. The complaints we get are about the AI misunderstanding context, not about raw model capability. That's a memory architecture problem, not a model size problem.)
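The pipeline shape described above can be sketched structurally. The stubs below stand in for local models (an on-device STT engine, a local LLM runtime, and so on -- all hypothetical names); the point is that every stage is an in-process function call, with no network hop between them:

```python
# Structural sketch of a fully local ambient-AI pipeline: speech-to-text,
# memory lookup, reasoning, and text-to-speech chained on-device. Each
# stage is a stub standing in for a local model; none touches the network.

from typing import Callable

def run_pipeline(audio_frame: bytes,
                 stt: Callable[[bytes], str],
                 recall: Callable[[str], str],
                 reason: Callable[[str, str], str],
                 tts: Callable[[str], bytes]) -> bytes:
    """All four stages run in-process; no data leaves the machine."""
    text = stt(audio_frame)        # local speech-to-text
    context = recall(text)         # local semantic-memory lookup
    reply = reason(text, context)  # local conversational model
    return tts(reply)              # local text-to-speech

# Stubs in place of real local models:
audio_out = run_pipeline(
    b"raw-audio",
    stt=lambda a: "are we out of eggs",
    recall=lambda t: "shopping list: dish soap",
    reason=lambda t, c: f"Adding eggs. Noted earlier: {c}",
    tts=lambda r: r.encode(),
)
print(audio_out)
```

Because the stages are composed as plain function calls rather than API requests, the "no endpoint to call" guarantee from the previous section is a property of the architecture, not of a policy.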

Are local models as capable as the best cloud models? No. But we're usually not asking our smart speaker to re-derive the Planck constant.

What's needed is hardware that runs inference on-device; models that process audio and video locally and never transmit them; a business model based on selling the hardware and software, not the data the hardware collects; and an architecture where the company that makes the device literally cannot access the data it processes, because there is no connection to access it through.

The most helpful AI will also be the most intimate technology ever built. It will hear everything. See everything. Know everything about the family. The only architecture that keeps that technology safe is one where it is structurally incapable of betraying that knowledge. Not policy. Not promises. Not a privacy setting that can be quietly removed in a March software update.

Choose local. Choose edge. Build the AI that knows everything but phones home nothing.

