(comments)

Original link: https://news.ycombinator.com/item?id=39881962

Cloudflare is a content delivery network (CDN) that offers a variety of services, including SSL termination, traffic optimization, and analytics. Its practice of holding keys for and decrypting encrypted traffic during SSL termination has drawn some controversy. Users consent to routing their traffic through Cloudflare's network in order to obtain these services. The ethical concerns include potential surveillance, data misuse, or cooperation with governments. The debate centers on the degree of consent and whether users fully understand what they are agreeing to when they configure a site to use Cloudflare. It is crucial to distinguish deliberate decryption by website owners from unintended consequences, such as SSL being broken covertly.


Original text


I heard about this a few years ago. The trial participants were informed, consented, and paid. If you consent to a root cert being installed and analytics being proxied, well, that's that.


Two issues: 1) Did Snapchat consent to this? And 2) did the users know what they were consenting to?

Saying we’re going to do “traffic monitoring” doesn’t carry the weight of “we are going to listen to your private conversations”.



Why would Snapchat need to consent? It's my traffic.

I'd wager that most participants don't know the full details of the program, but "company pays you for your usage information" is a very old thing. You could (maybe you still can) get paid to install a box on your TV that recorded all of your viewing statistics to be used for market research.

To me, the biggest concern is that this is only really viable because Facebook had nontrivial market penetration of a more-or-less unrelated product to their main offering. This isn't something that Snapchat could have easily done to get market research on Facebook usage, for example. This feels (to me) more like an anticompetition concern rather than a privacy concern.



Here’s how I see it. This is akin to opening your USPS mail and reading your correspondence with a friend, when instead they could’ve just checked who the mail was addressed to.

If Facebook wanted to learn the protocol Snapchat uses, they only needed a single test device. If they only needed to learn usage patterns, they could’ve checked where the traffic is sent to or app usage time etc.

Installing a root certificate is very intrusive, and their behavior shows that if they are ever given the opportunity to become a root certificate authority, they are likely to issue malicious certificates. As far as I know, no website can pin its certificates, so this takes us back to pre-HTTPS days, when ISPs and network operators had a lot of fun reading user traffic.



That box on your TV would have been a Nielsen box which sat on your TV and was connected to your landline. It didn’t collect anything automatically: every time you turned the TV on you were contractually obligated to press a button every 20 minutes to have the box call Nielsen and log a datapoint.

Those boxes have been phased out in favour of “Portable People Meters”[0], which are basically a pager with a SIM card that you wear, with a microphone listening 24/7 for TV broadcasts. You must keep it on you, listening at all times.

Nielsen will pay you $250/year (less than a dollar a day) for the data you provide.

[0] https://en.wikipedia.org/wiki/Portable_People_Meter



Had them here in the UK, used to get a free TV license for the inconvenience. My mate always pressed the same button despite what channel we were watching though, so there is that...


> My mate always pressed the same button despite what channel we were watching though

“They like Itchy, they like Scratchy, one kid seems to love the Speedo man… what more do they want?"



They would because the communications involve 2 parties. Your consent to someone snooping on my calls with you should not be enough, because for example, you still need my consent to record calls I have with you.

Now, Meta decides to MITM the communications that I intentionally encrypted so that it can gain a competitive advantage… well, remember when Meta kicked out researchers who had obtained consent from users to perform research on its platform? That was not even illegal. This is.



At least in the US, most states are single party consent.

The whole thing's a mess, but it's funny to me that people would get indignant over a user letting another party intercept analytics data. "Hey, that's my data from spyware! Get your own!" As if their "consent" to collect the data in the first place were any less flimsy than Facebook's.



Afaik only in some instances; in others they were not paid, and informed consent is in all cases quite questionable

edit: I think this is something I wouldn't call informed consent: "Of particular concern was that users as young as 13 were allowed to participate in the program. Connecticut Senator Richard Blumenthal criticized Facebook Research, stating "wiretapping teens is not research, and it should never be permissible. This is yet another astonishing example of Facebook’s complete disregard for data privacy and eagerness to engage in anti-competitive behavior.""[1]

1: https://en.m.wikipedia.org/wiki/Onavo



Lol the irony of publicly announcing the addition of end-to-end encryption in one app (Whatsapp) while secretly breaking TLS in another, all in the same year #Tethics


Whatever may be the end goal, MITM is called an 'attack', not 'research'.

I'd not last a single day at such a company that would ask me to do such things. I worked in IT for a national political party and left the job once I found out about its corrupt practices and scams.

If we, as engineers collectively upheld ethics as part of work culture, Meta wouldn't have attempted it.



As an ethical engineer, there is a further duty to also sabotage the organization once we uncover dirt on it. Never for profit. Sometimes for ego. And always because if every engineer took a stand against BS, then the world would be a much better place.


> as engineers collectively upheld ethics as part of work culture

Just saying, it's really hard when your job or even your future green card is on the line. When the grunt engineers are one mistake away from being sent away from the US and losing all their potential futures there, they are much more likely to bury their heads and carry out what they are told by their managers.

We need to go for the higher ups more.



So, the FANGs can conduct mass psyops warfare against the populace basically with impunity -- a pesky little suit now and then is inconsequential.

But what will happen when they get caught stealing each other's surveillance booty?



Bear in mind that they didn't apply this to everyone, which would be practically impossible.

They hired Snapchat users (via a testing services provider) to let meta observe their usage of Snapchat.

Something akin to paying someone to let a meta researcher sit by your side and observe while you use the app.

This happens all the time (hiring the testing services to recruit users to use your own app and analyze the patterns with screen recordings and such).

The news here is paying for someone to “test” a competitors’ app.

I hope the testers knew their Snapchat usage was being analyzed, and weren't told they were only testing Onavo.



> They hired Snapchat users (via a testing services provider) to let meta observe their usage of Snapchat.

> Something akin to paying someone to let a meta researcher sit by your side and observe while you use the app.

Onavo Extend and Onavo Protect positioned themselves as providing consumer-oriented benefits (bandwidth reduction and security, respectively).

> The news here is paying for someone to “test” a competitors’ app.

Facebook acquired Onavo in 2013, so this was 100% a first-party effort to turn their first-party products into spyware.



yeah, you should read the doc in the link; they explain why they couldn't use Onavo to simply man-in-the-middle Snapchat users, hence the project to use the testing service provider to hire test subjects who would install a MITM solution to decrypt Snapchat (and later YouTube and Amazon).

Normal Onavo users were not subject to the decryption (although they were providing Meta information about Snapchat's overall market share).



Given Snowden, I have to assume Cloudflare is under the thumb of at least the NSA.

For example, all the usual arguments against backdoors are going to be used by intelligence agencies to justify "providing assistance", which isn't even merely a euphemistic excuse given how incredibly valuable it would be for ordinary organised crime to spy on some of the encrypted data… but it is also at least a bit of a euphemism, as I have to assume the controversies about terrorist groups using Cloudflare are only permitted to happen because someone in US intelligence knows how to squeeze secrets from those groups.

In theory, messing with SSL is one of Cloudflare's features, not a secret; in practice I suspect most end users treat all this as magic — I've directly witnessed magical thinking about the padlock icon in browsers.



That's different. I have a lot of problems with CF, but when you sign up for a service which requires seeing the traffic, and you configure it explicitly to see your traffic... what's the complaint here?


Did they? I mean, did they understand the privacy violation possible in this case? Or was the technical point they wouldn't understand buried somewhere in the middle of an agreement nobody reads anyway?

The difference in awareness is massive between those two use cases.



Why would your idea stop at CF? Did they consent to Hetzner / DigitalOcean / AWS / whatever hosting company seeing the traffic? The idea that the content producer decides how the content is served on the internet is the default.


They don't see the traffic unless they analyze the memory of your running server, because the SSL termination happens inside the server. Encrypted traffic passes through their network, which they don't have the keys for. Cloudflare, on the other paw, literally offers to do the SSL termination for you, as in they hold the private keys and perform the decryption on their servers that they control. Then they pass the decrypted traffic through their network in order to do things like "optimize" your images, or inject JavaScript into your pages. Website owners consent to this, but I guess the question here is whether users should need to consent to this website's traffic being handled in decrypted form by Cloudflare before that is actually done.


> They can see the traffic if you're using one of their load balancers.

Only if you let them manage the SSL connection. Load balancers can easily relay individual TCP connections that are encrypted - load balancing doesn't require decryption.

> And even if not, snooping on VMs is pretty trivial.

They'd have to go out of their way to do this, and this would probably be the end of them if it were ever found out. So it's safe to assume any provider who wants to continue existing will not be doing this.
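The TCP-level relaying described above can be sketched with a hypothetical nginx `stream` configuration (addresses made up): the balancer forwards the encrypted bytes as-is and never holds a private key, so TLS is terminated only on the backends.

```nginx
# Hypothetical nginx "stream" config: TCP-level load balancing of
# TLS traffic. The balancer relays ciphertext and cannot decrypt it;
# certificates and private keys live only on the backend servers.
stream {
    upstream tls_backends {
        server 10.0.0.11:443;
        server 10.0.0.12:443;
    }
    server {
        listen 443;
        proxy_pass tls_backends;
    }
}
```

Contrast this with an `http`-block proxy carrying `ssl_certificate`/`ssl_certificate_key` directives, which is the terminating setup Cloudflare-style services use.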



It's not that hard for VMI and harder than you think for network.

I did work for a public cloud and we did think of VMI for diagnostics and malware checks. Once deployed and automated, it would be trivial to reuse for other purposes. I don't expect public cloud to use that daily, but I'd be surprised if they didn't have the process ready.

On the other hand, you want to process the LB traffic as fast as you can and any monitoring/reporting delay would have bad effects. Reconfiguring the filters / sinks at runtime takes effort too.

With experience in both areas, I can tell you they're comparable overall. You have to go out of your way to do it, but it's not too far.



> Users cannot consent to Cloudflare seeing their traffic

Users consent to the website seeing their traffic and the website consents to Cloudflare doing the SSL termination. This isn't too much different from the website consenting to analytics scripts monitoring webpage activity (i.e. Hotjar). If they did something shady, then users & the website would both be rightfully mad at them. But Cloudflare hasn't, so far at least.

Meanwhile, Facebook is known to do literally everything shady that is possible to do with a user's data, as well as plenty of things that weren't even a thing before they invented entirely new methods of tracking and selling data, so it's rightfully insane to trust them with anything, especially website traffic that they have no rights to.



They explained pretty clearly why they think that's the case. You're both right, though. It's likely not the case that Cloudflare is the only company doing these types of things and cooperating with government agencies. In my opinion it would be very silly to assume that.


I'm sorry, but that means your nerd card will expire at the end of the month. I see you've had it for quite a while, but being unable to name any CDN companies besides Cloudflare means your nerd card will lapse. If you'd like to apply for a newly issued one, an LLM agent will be along shortly to help you.


Now obviously my comment is not about "just a CDN provider" right?

The SSL stuff that Cloudflare offers to protect your websites/APIs etc so you don't have to, their DNS products. The fact that iCloud Private Relay uses Cloudflare under the hood (and so all browsing there happens through their gateways etc).



I mean, if it's just the case that you've drunk that much of the Cloudflare Kool-Aid that Akamai, AWS, and GCP don't have competing options in your mind, then that's a different problem entirely. Good for Cloudflare's wallet, and kudos to their marketing team though.


That article does not back up the claim that Meta is a state-actor.

I hate FB, but all big platforms these days will cooperate with federal agencies in cases like the one described. Doesn't make them "state actors".



Yes it's old news(1) but it has come up again in numerous HN and reddit posts for a few reasons (if you flick through HN you'll see various versions of this story holding lower ranks.)

Also noteworthy is that Google were also doing something similar at the time; both were side-stepping Apple's privacy protections in iOS by using enterprise certificates that allowed the side-loading of apps without Apple's oversight. In response, Apple more thoroughly restricted how these certificates can be used.

Interestingly, I've noticed people in the DMA threads suggesting that the idea of a company exploiting side-loading to dodge Apple's privacy protections was nothing more than fear-mongering. As if this is a red line developers won't cross.

To me, it's wild to think that people on HN don't know about this relatively recent history and are so naive as to think that these protections were just pulled out of the air to frustrate developers, and not a reaction to an ongoing arms race against consumers' right to privacy.

(1) https://www.extremetech.com/internet/284770-apple-kills-face...



> To me, it's wild to think that people on HN don't know about this relatively recent history and are so naive to think that these protections were just pulled out of the air to frustrate developers,

IMO we have modern journalism to thank for this sort of thing. People are so misinformed with rage bait articles that they push against policies in their own interest.

But if anyone dare suggest enforcing some minimum level of journalistic ethics they'll get attacked because somehow journalists have painted themselves as some sort of unassailable paragon of righteousness.



Bingo. It's easy to pay for influence, especially if one can spin a story for clicks.

I see a lot of cheerleading and parroted talking points against the interests of developers, particularly small and independent developers. A lot of the changes lobbied for by large developers give them an insurmountable pricing and competitive advantage over small developers and startups, yet I don't see much consideration here for that, nor the wishes of bona fide consumers.

Epic is particularly barefaced here, since they claim they are fighting for developers, when their proposals are not altruistic. Each clearly puts them at an advantage over smaller developers and consumers. Do we have such a short memory that we forget that this is the same Epic that settled with the FTC for using dark patterns and violating children's privacy for the purpose of tricking kids into accidental Fortnite purchases?(1) That was only 15 months ago.

While I'd expect reddit to be less informed, I'm not so charitable with HN: it's a forum where the bulk of participants claim to be developers.

(1) https://www.ftc.gov/news-events/news/press-releases/2022/12/...



Actually most CS majors require ethics courses. I've met very few developers that don't care about ethics, especially when they work on something product facing. We've seen entire teams at Google quit or refuse to implement something, etc.

Meanwhile in journalism, ethics is a strong part of the course structure but you see countless journalists writing poorly researched ragebait articles for clicks.

The "programmers don't know ethics" meme is just that, a meme. The fact that there even is a required ethics course in most universities is far more than you can say for most other majors. Nearly every single programmer knows about Therac-25, I'd wager most graduates today are also learning about MCAS, etc.



> I'd wager most graduates today are also learning about MCAS

Emphasis mine. You'd likely win that wager, I don't disagree, and that's great for today's graduating classes. But because "engineer" is not a protected term (especially not "software engineer", and definitely not "prompt engineer"), there's no requirement for a CS graduate to go back and do continuing education like there is in other fields, so graduates who didn't get it in school and don't seek out, e.g., the OCW CS ethics class aren't going to find themselves in one. Curricula have evolved over the years to include ethics as a requirement, but that meme isn't just a meme, because the requirement didn't exist in a vast number of cases, as evidenced by the multiple failures in, e.g., this case here.



How do you propose enforcing journalistic ethics, without making "Journalism" subject to capture by regulation and government oversight? We had a system - Trust was placed into journalistic institutions, whose management was committed to editorial independence. It didn't work - They got bought out and chased profits.


Here is a quote from Facebook/Meta's legal counsel to the judge. In this document, "Advertisers" refers to Snapchat, YouTube and Amazon.

"... the Wiretap Act provides that an interception is not unlawful if a party to the communication “has given prior consent to such interception.” 18 U.S.C. § 2511(2)(d). Advertisers conspicuously fail to mention—and apparently do not contest—that Meta obtained participants’ prior consent to participate in the Facebook Research App, and with good reason: Participants affirmatively consented to “Facebook … collecting data about [their] Internet browsing activity and app usage” to enable Facebook to “understand how [they] browse the Internet, how [they] use the features in the apps [they’ve] installed, and how people interact with the content [they] send and receive."

So users consented?



Lawyer here.

No.

They have ...'d out an important part of 2511(2)(d).

(and they probably meant (c))

First, it starts out with: "It shall not be unlawful under this chapter for a person not acting under color of law "

This basically means a state/federal official or someone acting in their capacity as one (the color of law part basically means it applies even when they act beyond their legal authority by accident)

Which they aren't. So this doesn't apply at all. (d) has an additional requirement they ...'d out at the end, but (c) does not.

So it's both a wrong cite and a dumb one.

Second, you'll note "competitive research" or anything similar is not one of the allowed uses of data collection that Facebook got consent for.

Third, the return argument will also be "the how matters", and users did not consent to this how, and would not have.

If I give consent to participate in collection of my internet data, it doesn't give you authorization to like, have someone live in my house and follow me around 24/7 so they can see what i do on the internet.



> If I give consent to participate in collection of my internet data, it doesn't give you authorization to like, have someone live in my house and follow me around 24/7 so they can see what i do on the internet.

TV ratings used to be collected from panelists using a wearable device that literally had an always-on microphone recording you 24/7: https://en.wikipedia.org/wiki/Portable_People_Meter

How is the situation of Onavo/Meta panelists worse?



No. It's written as a set of negatives: it shall not be unlawful for someone not X to do Y.

Here it is saying it’s illegal unless you are an official acting under color of law and there is one party consent



I mean, sure, you could also do “market research” by breaking into people’s homes, reading their mail, and listening in on all their phone calls. I hope some actual criminal prosecution results from this disclosure, as it’s very clearly “hacking” and “wiretapping” and “unauthorized access”.


So was the plan to just yolo this out into the wild?

Because the document says here that it was going to be given to trial participants as part of a YouGov (and others) survey. Which implies that they would have been informed/paid.

If it's the former, then obviously that's unauthorised wiretapping. If it's the latter, so long as informed consent is given, that's a shittonne better than the advertising tech we have now.



Yes and No.

For TLS traffic you also need to install Onavo.

But the app does scan your contact list every couple of minutes and send diffs to their servers, even if you have never opened the app. And on previous Android versions, your recently opened apps list too.

But again, if you install WhatsApp you must give it the contact list permission anyway, otherwise the app is intentionally broken and annoying.



I really think you are a fool if you install WhatsApp. I do think you are of higher than normal intelligence if you install Signal. When I hear friends talk about WhatsApp I cringe. The few who have Signal I regard highly.


Real life is full of compromises. If your grandma is on WhatsApp, and you want to talk to her, it might be a good idea to install WhatsApp.

(However, if you have time on your hands and principles, you can use WhatsApp on a burner phone, I guess?)



Or educate grandma on why she should use signal and that fools use WhatsApp since meta is balls deep inside the app and watching what you do.


outside of the imperium center, you'd be lucky to have one provider of some product or service, and usually they will only be reachable via WhatsApp, because metabook used whatsbook as a backdoor for their failed oneinternet(?) project.

remember the backlash fb got when they offered free internet in india and africa but only for the Facebook app?

well, everywhere in the world you get free whatsapp traffic, so everyone now is on whatsapp.

good luck convincing a business that gets hundreds of sales calls via WhatsApp to use Signal.

or convincing people who can barely afford their water bill that they now need a data plan to use signal instead of free whatsapp.

metabook won on this.



> metabook won on this.

They won on distributing E2EE and the Signal Protocol to 99% of the world, which previously transmitted everything in plain? Sounds like a pretty good win to me.



Security SWE here. I have worked with WhatsApp's security engineers, I donate hundreds to the Signal Foundation every year, and I like to think I have a good amount of experience by now in the security industry.

> you are a fool if you install WhatsApp [...] meta is balls deep inside the app and watching what you do.

WhatsApp uses the same protocols as Signal under the hood. The Signal team even helped WhatsApp implement it. Furthermore, the app has been extensively RE'd by third parties to validate it's doing what it says on the tin.

https://signal.org/blog/whatsapp-complete/

> When I hear friends talk about WhatsApp I cringe. The few who have signal I regard highly.

Your clear lack of knowledge on the subject matter combined with your judgement of others says far more about you than your friends. It seems that you have fallen victim to the Dunning-Kruger curve, so consider not judging people until that is rectified.



> Furthermore, the app has been extensively RE'd by third parties to validate it's doing what it says on the tin.

that's a freaking lie and you should feel bad for repeating it.

it was barely reviewed years ago, before all the shady features that even caused the original founder to leave the company (and a few billion dollars' worth of golden handcuffs) with an open letter about how FB destroyed privacy in WhatsApp.

the EU and all sane state actors forbid its use (some recommend Signal)

all the recent political leaks were from FB (e.g. Brazil, Italy)



> thats a freaking lie and you should feel bad for repeating it.

It's trivial to RE the app. Plenty of 3Ps continually RE the app.

Support your claims of WhatsApp being backdoored with facts, not random assertions you pull out of - where, exactly?

> EU and all sane state actors forbid its use

Because it is E2EE. You don't want government employees to use an E2EE service because it kills transparency.

> some recommends signal

WhatsApp and Signal share the same exact protocol.

> all recent political leaks was from fb (e.g. brazil, italy)

Irrelevant to WhatsApp. They're run by a completely separate team within Meta, have completely different leadership and reporting chains, and have a completely separate codebase and architecture.

Again, support your claims with actual facts instead of incoherent angry rambling. Is it backdoored? Can Meta access your messages? Provide proof.



This seems to be a valid reason to implement certificate pinning in the application's network layer. At least 3rd party VPN providers don't get to intercept without replacing the pin.
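As a rough illustration of what app-level pinning involves, here is a minimal Python sketch (function names are mine, and real apps usually pin the SPKI hash with a backup pin rather than the whole leaf certificate): a MITM proxy that re-signs traffic with an injected root CA passes normal chain validation, because the OS trusts the injected root, but the forged leaf has a different fingerprint and fails the pin.

```python
import hashlib
import hmac
import socket
import ssl


def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint (hex) of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()


def pin_matches(der_bytes: bytes, pinned_hex: str) -> bool:
    """Constant-time comparison against the fingerprint shipped with the app."""
    return hmac.compare_digest(cert_fingerprint(der_bytes), pinned_hex)


def fetch_leaf_der(host: str, port: int = 443) -> bytes:
    """Fetch the server's leaf certificate in DER form over a TLS handshake."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert(binary_form=True)
```

An app would call `pin_matches(fetch_leaf_der("api.example.com"), PINNED_HEX)` before sending any sensitive data and abort the connection on a mismatch.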


I used to work for a startup that did a very similar kind of thing. We paid people to install our app and our root cert. We had our own VPN server through which all traffic of the panelists (people who participate in a panel) went, and we were able to decrypt all traffic that used the PKI the operating system provided. Some apps used some other kind of encryption (banking apps e.g.), so that could not be decrypted. We also collected additional data; for example, we took screenshots of whatever was currently on the screen and tried to map those to the applications in use, so we would know what app the user was running at what time.

I didn't work with the data collection, so my info is a bit limited. Facebook was our customer even though they had already bought Onavo.

I can answer some questions if you have any.

The company did go bankrupt and the technology was sold.



First, assume all VPNs spy on you, and don't believe claims to the contrary; they can be forced by law to do it. Second, definitely don't use a VPN that clearly states that it's analyzing your traffic data.


Don’t install additional root certificates.

That’s what Facebook enticed users to do here. Without that root cert they wouldn’t have been able to see as much as they did.



Certificate pinning and validation in apps for one. Onavo's VPN was really clear it collected market research data. It was as informed consent as a click-through could be.


"May I direct your honor that my client is a wealthy tech billionaire who would otherwise be at risk of being slightly annoyed if they were sent to jail for intercepting private communications of competitors..."


There's a lot of confusion around these stories these days, which reminds me of the "Gmail is looking at your emails" stories[1].

First, this is not wiretapping, come on. There's targeted man-in-the-middle (MITM) attacks, and then there's this. This is plainly "we are using advanced powers to analyze your traffic".

This is not even Superfish[2] type of stuff, where Lenovo had preinstalled root certs onto laptops to display ads. This is "if you opt in we will analyze your data".

Every program you install on your laptop can basically do WHATEVER it wants. This is how viruses work. When you install a program, you agree to give it ALL power. This is true on computers generally, and this is true on phones when you side-load programs. The key is that when we install something we understand the type of program we're installing, and we trust that the program doesn't do more than what it _claims to be doing_.

So the question here is not "how does Onavo manage to analyze traffic that's encrypted", it's "does Onavo abuse the trust and the contract it has with its users?"

[1]: https://variety.com/2017/digital/news/google-gmail-ads-email...

[2]: https://www.virusbulletin.com/blog/2015/02/lenovo-laptops-pr...



That might have been true in the past, but nowadays at least macOS/Android/iOS can enforce several restrictions on the apps you install, like prevent them from changing OS settings/files, limit access to only specified/opt-in directories, limit the amount of background activity, etc.

I don't know about Windows or Linux though.



Windows applications can easily install TLS root certificates, which essentially all "anti-virus" tools (i.e. snake oil) do. On Linux, it's obvious: if you're installing something as root, you can add certificates. In that context, Apple is doing something right and makes it rather tedious to install root certs
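One way to sanity-check your own machine is to enumerate the roots your TLS stack trusts by default and look for unexpected entries (e.g. a corporate or "antivirus" MITM CA). A small Python sketch; exactly which store gets loaded is platform-dependent, so treat the output as a starting point:

```python
import ssl

# Build the platform's default verification context, which loads the
# system trust store (plus OpenSSL's default paths where applicable).
ctx = ssl.create_default_context()

# get_ca_certs() returns the loaded CA certificates as dicts with
# fields like "subject", "issuer", and "notAfter".
roots = ctx.get_ca_certs()

for cert in roots[:5]:  # print a small sample of subjects
    print(cert.get("subject"))
print(f"{len(roots)} trusted root certificates loaded")
```

Anything issued by an organisation you don't recognise, especially with a recent validity start date, is worth investigating.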


If someone consents to your clear request to read their data in the plain, then it's not evil. Still not my cup of tea, but if you clearly explain and obtain consent, it's shady but fine.


So how is that relevant in the context here? FB did not clearly request to be able to read all traffic (encrypted and nonencrypted) so how could they get consent. Unless you're arguing that "we will monitor your Internet usage" clearly means we will man-in-the-middle all your connections, which would be a weird take.


> FB did not clearly request to be able to read all traffic (encrypted and nonencrypted) so how could they get consent.

I can't find the consent page/legalese shown to users, do you have a link?



Documents and testimony show that this “man-in-the-middle” approach—which relied on technology known as a server-side SSL bump performed on Facebook’s Onavo servers—was in fact implemented, at scale, between June 2016 and early 2019.

Facebook’s SSL bump technology was deployed against Snapchat starting in 2016, then against YouTube in 2017-2018, and eventually against Amazon in 2018.

The goal of Facebook’s SSL bump technology was the company’s acquisition, decryption, transfer, and use in competitive decision making of private, encrypted in-app analytics from the Snapchat, YouTube, and Amazon apps, which were supposed to be transmitted over a secure connection between those respective apps and secure servers (sc-analytics.appspot.com for Snapchat, s.youtube.com and youtubei.googleapis.com for YouTube, and *.amazon.com for Amazon).

This code, which included a client-side “kit” that installed a “root” certificate on Snapchat users’ (and later, YouTube and Amazon users’) mobile devices, see PX 414 at 6, PX 26 (PALM-011683732)(“we install a root CA on the device and MITM all SSL traffic”), also included custom server-side code based on “squid” (an open-source web proxy) through which Facebook’s servers created fake digital certificates to impersonate trusted Snapchat, YouTube, and Amazon analytics servers to redirect and decrypt secure traffic from those apps for Facebook’s strategic analysis, see PX 26 at 3-4 (Sep. 12, 2018: “Today we are using the Onavo vpn-proxy stack to deploy squid with ssl bump the stack runs in edge on our own hosts (onavopp and onavolb) with a really old version of squid (3.1).”); see generally http://wiki.squid-cache.org/Features/SslBump

Malwarebytes article: https://www.malwarebytes.com/blog/news/2024/03/facebook-spie...
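For context, the "squid with ssl bump" setup the filing describes looks roughly like the hypothetical squid.conf fragment below (paths, port, and cache sizes are illustrative, and directive names vary across Squid versions). It only works because the matching root certificate is installed on the client device, which is exactly what the client-side "kit" did.

```squid
# Hypothetical squid.conf fragment for TLS interception ("SSL bump").
# Requires Squid built with OpenSSL support and a locally generated CA
# (mitm-ca.pem) whose root certificate is installed on the client device.
http_port 3128 ssl-bump \
    generate-host-certificates=on \
    dynamic_cert_mem_cache_size=4MB \
    cert=/etc/squid/mitm-ca.pem

# Forge a certificate for every requested host and decrypt the traffic.
ssl_bump bump all

# Helper that mints the per-host forged certificates on the fly.
sslcrtd_program /usr/lib/squid/security_file_certgen \
    -s /var/lib/squid/ssl_db -M 4MB
```

With this in place, the proxy presents a freshly forged certificate for e.g. sc-analytics.appspot.com, which the client accepts because it chains to the installed root.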



That is insane and I would be inclined to not believe it if someone had told me this. This is such an immense breach of trust that even for me, who has a very low opinion of Meta, it is unexpected. I hope this will blow up as much as it should


I'm somewhat surprised it's taken this long to come out. It was something of an open secret, at least within the infra/release org back in the 2016 era, that Onavo was somehow spying on Snapchat traffic.


So this one time, I had a bug report at a client site. The business was largely a member of _______ religion. Our images wouldn't load in the app, but did on the website. How odd I thought, that doesn't make sense! Luckily I was able to be physically present, so I hopped down with laptop in tow, ssh'd into the server and started tailing logs....

Sure enough all the API requests for data were coming through, but whenever a request for image happened - nothing would hit the servers.

What the heck I thought to myself?

I said to the client: "That can't be, that's almost impossible... the only way that's possible is if the SSL traffic is decrypted, inspected, and images blocked from being requested, which is a MITM attack."

He redirected me to his IT provider. I phoned them up, and explained the situation.

"Ahh so they're _____"

Me: "So what does that have to do with the price of fish?"

Them : "Content filtering..., you need to talk to ____"

Sure as the day is long, the content filter was a VPN all members of ____ had to have on their mobile devices (I don't know how widespread this is, whether it was just this business, or the entire ____ )

I applied to have our system approved, it was, and just like magic the next day photos started coming through.

I'm guessing it basically detected any .jpg/.mp4 etc. URLs in HTTPS requests, flagged them, and blocked them from being requested. You can be sure that on those devices the VPN would have been somehow locked in with device management, and there's no way on God's green earth they were getting at Facebook/Insta etc.

So, it's not just meta. That really hammered home how seamless it can be to end users that they really can't trust what's actually happening on their devices.



Not that I'm a fan of it, but in corps it's pretty standard practice to have a custom root cert installed on all devices and to enforce VPN connections on devices outside the network, in order to MITM all requests and do stuff like content filtering (e.g. NSFW, swearwords, and obviously malware). It's the company's device and they give it to you for a work-specific purpose; you shouldn't use it for personal stuff. I don't think it compares to an app that shadily installs its own root cert on an end user's device to spy on them.


It's not corporate level it was/is religious group level (of which this particular org I'm guessing largely employed staff from that religion). They are well known within our country to be quite insular.

It certainly seemed for all intents and purposes if you were a member of _____ group (wider than the company) you had the vpn on your device, and it was filtering content. I've found other reports in other countries of that happening with the same group.

So it's not corporate content filtering, it's personal content filtering and our app got caught up in it (and approved).

It certainly made my skin crawl for anyone in that religion. That means the central filtering service could be reading messages. Not sure if they're that sophisticated but certainly they didn't want people to see random images/videos.



Is it required by their religious leadership to install this? That is incredible, and only now do I understand your comment to its full extent. That is brutal.


This is one reason I think ECH is probably on net a bad idea. Content filtering is a legitimate use-case for lots of users/networks, and if traffic is completely opaque to all networks, you end up needing things like root level processes or full MITM or laws requiring ID for websites instead of more privacy-preserving inspection of basic metadata (like SNI) at the network level.

You could imagine a standard for a network to signal to a client that it does not allow certain privacy features like ECH, and then clients can accept that or not. Instead I expect browsers will eventually mandate ECH, so people will have to MITM instead.



From what the commenter implies, I think they were referring to an app on a mobile device and not the device itself.

It also sounds like their issue was at the ISP level as well, which takes the business out of the loop of being the data controller/owner (of the collected data) at that point.

Note: I'm not saying that your comment doesn't have merit, I just don't think that the points that you made apply - specifically - in this case?



After re-reading the comment I think you're right. It seems I had a hard time grokking it. But since the issue was apparently a VPN app installed on the phone, I don't know whether this was the ISP or maybe their IT service provider doing content filtering on behalf of the company (like an outsourced IT department?)


The VPN (much like Meta's) is doing some root cert trickery to filter content that is deemed inappropriate or potentially inappropriate. This appeared to be controlled by a Company A in another country that undoubtedly contracted to Y religion to be their central point of content filtering globally.

So, member of the church? you get this VPN on your phone, (not sure whether phone was supplied by the church, but certainly this VPN was on it) VPN is effectively content filtering and blocking content.

I had our app whitelisted by that central company (literally raised a ticket with them, next day magically fixed).



Holy shit, they can brainwash their peers even better. Those are evil geniuses...

Sorry, I meant they optimize the content for their peers and shield them from harmful content for the better of humanity // irony



I also hope that any ethically minded engineers inside Meta take a stand against this BS. The only way stuff like this happens is because engineers working on these projects decide that they can set aside whatever morals they may have had for the price of a big fat FAANG pay cheque. It's about time our profession adopted a code of ethics, like that of the ACM[1]. To the engineers who _have_ walked away despite the obvious pressures, I salute you.

1. https://www.acm.org/code-of-ethics



This was news … 5 years ago, I think, I don’t know why it blew up again. But context matters:

Onavo provided a compression + VPN service for people traveling; they let users use little or no data while roaming, and still get internet access. I do not know what their original business plan was, but Facebook bought them for the ability to spy on users.

Their MITM was, in fact, the raison d'être of Onavo. And then they were bought by Facebook. And then there was just some more analytics added. At no point, as I understand it, was it built explicitly for evil - and I suspect very few employees were in on the real reasons.

Plausible deniability works for many things.



Nah, I've met enough amoral people over the course of my career to know that's not the case. However, the overwhelming majority of people I've worked with are people who do have morals and do care about the outcomes they're creating, and that gives me great hope.


I was directly involved in this.

I am happy to answer any questions you have about questioning or ethics at the time. Assuming that people's reaction to this was wrong, while not knowing what that reaction was, or having less than 5% of the context, isn’t going to help much.

Short answer: No, there were strong arguments for it. I reached out for institutional support to answer some questions, groups that I expected to be a lot more supportive than the ACM, but I found the reaction seriously lacking. Your intuition that groups like the ACM should offer assistance is sensible but completely overlooks many problems: geopolitics, different types of security, and individual capacities, among others. Each institution has its priorities; those are not always compatible, and it’s unclear who should have precedence. The ACM won’t help you if the argument is the kind of compromise with the devil that spy agencies often make or if problematic tools are used in efforts to dismantle large criminal groups.



I understand that things are often more nuanced than they may appear, and in questions of moral judgement there will always be room for fuzziness. Personally I think the idea of compromising security for everyone in order to make life a little easier for the TLA's is not something I'd feel comfortable doing. I consider an individuals right to privacy paramount, something without which we risk unbounded tyrannical rule. Others will probably feel differently when presented with 'think of the children' style arguments. I'm glad to hear though that you were at least conflicted enough to be asking questions.


You simply legislate that if a company is building anything that will be used regularly by more than, e.g., a few thousand people, then the work must be designed and/or signed off by a licensed engineer, who will a) be subject to a code of ethics and b) be professionally liable for any failures causing loss or damage to the public.

We seem to be able to manage this with bridges, planes, electrical & hydro installations etc. No reason it shouldn't be the same for critical software infrastructure.



I mean, with a thing like a plane you can say "that's not allowed in our state/country"; with software that gets a whole lot more problematic. Soon you'll see politicians - who seemingly get piles of cash from groups like Microsoft and Meta - pushing laws that say things like "because people are running dangerous software from outside the country, we demand that only signed software can run on our phones/computers, and devices here must enforce it."


It's perhaps not 'critical' in the sense that losing it would matter much, but it is worth caring about because of the number of people who are affected if/when things go wrong.


I did not really need a reminder that this website is filled with morons, just like Reddit. But I get it anyway every time I post something negative.

Remind me when anything more than a slap on the wrist happens. And my definition of a slap on the wrist is adjusted to how big Meta actually is - they make more than some countries!

You just hate facts, just like the idiots on Reddit, I am supposed to praise big tech criminals and just make positive stuff up, then I get all the upvotes.



> This is such an immense breach of trust

Why do you trust it? Do you think that others (Google, Microsoft, Apple) are not doing/would not do such a thing? SSL is as secure as its certificates.



Imho, the correct way to evaluate potential corporate trust is via self-interest.

In Microsoft, Google, and Apple's cases, they all have substantial enterprise business that would shit a brick if they were caught doing this.

Ergo, it's not in their best interest to do it.

Safer to rely on a company's desire to make money than any sense of "good".



That's appalling, to say the least. But Snapchat has implemented certificate pinning since 2015. Does that mean either the analytics endpoint was not covered, or that the certificate pinning was somehow circumvented in this case?
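For anyone unfamiliar: pinning works by checking the presented certificate (or its public key) against a hash shipped inside the app, rather than trusting whatever the OS trust store accepts. A minimal sketch of the check - illustrative only, not Snapchat's implementation; the certificate bytes are stand-ins:

```python
import hashlib

def matches_pin(cert_der: bytes, pinned_sha256: set[str]) -> bool:
    """Accept the connection only if the server certificate's SHA-256
    fingerprint is one the app shipped with."""
    return hashlib.sha256(cert_der).hexdigest() in pinned_sha256

# A MITM proxy minting certificates from its own root CA produces a
# different DER blob, so its fingerprint won't be in the pin set:
real_cert = b"...real server certificate (DER)..."
forged_cert = b"...proxy-forged certificate (DER)..."
pins = {hashlib.sha256(real_cert).hexdigest()}

assert matches_pin(real_cert, pins)        # genuine cert passes
assert not matches_pin(forged_cert, pins)  # forged cert is rejected
```

An interceptor can only defeat this by modifying the app itself or by hitting an endpoint the app doesn't pin - which is exactly why the question of whether the analytics endpoint was covered matters.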


By using MITM - basically "pretending" you're the site the victim wants to connect to and transparently connecting to the actual upstream site, decrypting the traffic locally for inspection before sending it back out. https://en.wikipedia.org/wiki/Man-in-the-middle_attack. You don't need a root CA, you just need to poison the DNS to point to the MITM server and present any old valid cert for the domain so it doesn't trigger a self-signed warning or whatever.


How can you take any old valid cert, though? I presume they have some sort of private key you don't have access to, and wouldn't it still trigger a certificate warning?


Did they bury the lede? Sure, this is a blow against "competitors", but that is ultimately a competition for the collection of data - user data. In doing this, FB has expanded its ability to hoover up more data at the individual user level, correct?

Yeah, crap move but my concern isn't those other scoundrels, it's me / us.



why people pay for 3rd party VPNs? It's far more secure to create your own wireguard/openvpn/whatever with a cheap VPS
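For reference, the DIY setup described here amounts to roughly this much WireGuard configuration on the VPS. This is an illustrative sketch: the keys, addresses, and interface name are placeholders.

```
# /etc/wireguard/wg0.conf on the VPS -- keys/IPs/interface are placeholders
[Interface]
Address    = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>
# NAT tunnel traffic out via the VPS's public interface (assumed eth0)
PostUp   = iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE

[Peer]
# Your phone/laptop
PublicKey  = <client-public-key>
AllowedIPs = 10.8.0.2/32
```

The client mirrors this with the VPS as its peer and `AllowedIPs = 0.0.0.0/0` to route all traffic through the tunnel. Note the trade-off raised in the replies: you swap the VPN provider's (claimed) no-logs policy for a VPS whose billing records are tied directly to you.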


> why people pay for 3rd party VPNs? It's far more secure to create your own wireguard/openvpn/whatever with a cheap VPS

Your comment seems to imply that you're unable to empathize with people who might think/understand differently than you. It also overlooks that you avail of other services/non-self-controlled processes without worrying about the threat models there.

Just hand-waving with a "Why don't people just do 'x'?" is ironic - in the sense of "Why don't you do your own medical care?" or "Why don't you grow your own food and slaughter your own animals?" or "Why don't you manufacture your own phone, its operating system - oh, and the cellular tower closest to you?"

Threat models exist, _everywhere_, and it's impossible for someone to build all of the pieces, themselves, to prevent all threat models at every possible avenue/point.

In other words, past a certain point doing _everything_ yourself is untenable, and that's precisely why services exist in society today (that, and ease of access, use, required foreknowledge, and - most notably - cost).



Not everyone is savvy enough to do it, even though the process has been simplified with many hosting providers providing preconfigured VPN servers.

And it doesn't anonymize you that well. When you post a message that draws the attention of law enforcement, the IP will lead them to a VPN provider that hopefully doesn't keep any logs.

But if it leads them to a specific server, the hosting provider will disclose your account and payment data, since it is linked to your private server. Unless they accept fully pseudonymous accounts and let you pay for your VPS in cash, Monero or tumbled Bitcoins, finding you is much easier now.



I find it so insane that people think the major VPN providers aren't all completely compromised one way or another. As if you're really going to be able to just pass your traffic through such a business and they're going to actually keep no logs, not have secret deals with intelligence agencies, and not be unknowingly infiltrated/compromised by intelligence agencies. As if you can just push your traffic through a major VPN and intelligence agencies would go "well shucks, oh man, they sure got us, we'll never know who it was, foiled again".


> I find it so insane that people think the major VPN providers aren't all completely compromised one way or the other.

For 99.9% of people a VPN is just something they use to access something in another country or because some YouTube ad scared them into believing you need a VPN as soon as you step into a coffee shop.

The threat model of most people does not include state actors or intelligence actors and they just don’t care.



That could be either Mullvad or ProtonVPN.

Both are zero-log. Mullvad has a flat 5 euro/month charge that has held since they started and will (they say) stay that way forever - you can send them cash in an envelope for the next twenty years with a generated account number and you're away.

ProtonVPN has plans - the two year streaming sign up is 4.99 euro/month.



The case you shared in fact shows that Proton's encryption ensures privacy by default and that it cannot be bypassed even when we're presented with a court request that we cannot legally contest. Namely, we weren't able to share any of the user's email content due to zero-access encryption, which makes it inaccessible to us: https://proton.me/blog/zero-access-encryption. All we could provide was the limited metadata we need access to anyway in order for the email service to work properly. Additionally, the user's identity was already known to law enforcement. Like any legally operating company, we need to comply with local legislation.

There is also no comparison between Crypto AG and us. Our encryption occurs client-side, our cryptographic code is open source ( https://proton.me/community/open-source ), and our tech can and has been independently verified. More about this here: https://proton.me/blog/is-protonmail-trustworthy

Finally, regarding payment in cryptocurrency, you can also pay for Proton's services in Bitcoin: https://proton.me/support/payment-options#bitcoin.



> Proton for instance revealed the location of a climate activist leading to his arrest[2], with the inspiring message from the CEO that "privacy protections can be suspended", silently on a per-user basis at any time.

That person isn't just a climate activist, they (and others who used that email account) broke French laws. Swiss authorities compelled the disclosure.



> broke French laws. Swiss authorities compelled the disclosure.

That's a terrible reason. Torrenting breaks French law. Having the wrong bread or cheese with your wine probably breaks French law.

And if your company can be compelled via gag order to give up your users' privacy whenever the authorities feel like it, well, your product isn't very effective anyways, and you should stop pretending you offer any meaningful level of protection.



You have to trust someone somewhere. You're simply placing your trust in the VPS provider instead of a third party VPN provider.

Not to mention the other stuff the VPN providers give you as standard which you'd have to implement and maintain yourself.



It really depends on what you are trying to do. It is not easy (or just impossible) to get the same number of IPs as a $5/mo or $10/mo VPN service by renting your own VPS at the same price.


Oftentimes, it's not about security but about circumventing censorship. A cheap VPS comes with a fixed IP located in one fixed part of the world. Many VPN providers allow switching.


Because most people are not techies.

Compared to the rest of the world, the number of people who even know what a VPS is is microscopically small.

And even those that do, the number of them with the time, desire, or skill, to do as you suggest, is even smaller.

I myself was into this sort of thing just 10 years ago. Now, as I start looking at hitting the big 6-0 in just a few years' time, I'm already working on divesting myself of all this complexity.
