(Comments)

Original link: https://news.ycombinator.com/item?id=38783112

Overall, while I appreciate these discussions and insights, I would suggest stepping back and focusing on what can be substantiated, rather than relying on assumptions and speculation. While some arguments may be valid and persuasive, spreading unverified claims can damage credibility. It is also essential to analyze sources accurately and provide the necessary context to prevent misunderstanding or misinterpretation. That said, I applaud your curiosity, critical thinking, and enthusiasm for learning more about technology, society, and politics. Keep exploring, challenging, and questioning - that is essential in a rapidly evolving digital age where privacy concerns play a crucial role.

Related articles

Original article
Operation Triangulation: What you get when attack iPhones of researchers (securelist.com)
479 points by ruik 1 day ago | 373 comments










The video of the talk is online now too: https://www.youtube.com/watch?v=7VWNUUldBEE



That's pretty astonishing. The MMIO abuse implies either the attackers have truly phenomenal research capabilities, and/or that they hacked Apple and obtained internal hardware documentation (more likely).

I was willing to believe that maybe it was just a massive NSA-scale research team up until the part with a custom hash function sbox. Apple appears to have known that the feature in question was dangerous and deliberately both hidden it, whatever it is, and then gone further and protected it with a sort of (fairly weak) digital signing feature.

As the blog post points out, there's no obvious way you could find the right magic knock to operate this feature short of doing a full silicon teardown and reverse engineering (impractical at these nodes). That leaves hacking the developers to steal their internal documentation.

The way it uses a long chain of high effort zero days only to launch an invisible Safari that then starts from scratch, loading a web page that uses a completely different chain of exploits to re-hack the device, also is indicative of a massive organization with truly abysmal levels of internal siloing.

Given that the researchers in question are Russians at Kaspersky, this pretty much has to be the work of the NSA or maybe GCHQ.

Edit: misc other interesting bits from the talk: the malware can enable ad tracking, and also can detect cloud iPhone service hosting that's often used by security researchers. The iOS/macOS malware platform seems to have been in development for over a decade and actually does ML on the device to do object recognition and OCR on photos on-device, to avoid uploading image bytes: they only upload ML generated labels. They truly went to a lot of effort, but all that was no match for a bunch of smart Russian students.

I'm not sure I agree with the speaker that security through obscurity doesn't work, however. This platform has been in the wild for ten years and nobody knows how long they've been exploiting this hidden hardware "feature". If the hardware feature was openly documented it'd have been found much, much sooner.



> If the hardware feature was openly documented it'd have been found much, much sooner.

Well, the point of Kerckhoffs's principle is that it should have been openly documented, and then anyone looking at the docs even pre-publication would have said "we can't ship it like that, that feature needs to go."



Also note the IoC script — this script allows scanning iTunes backups for indicators of compromise by Operation Triangulation. https://github.com/KasperskyLab/triangle_check


or Apple just implemented this "API" for them because they asked nicely


Or they have assets working at Apple... or they hired an ex-Apple employee... etc.

That's the problem with this sort of security through obscurity; it's only secure as long as the people who know about it can keep it secret.



I don't think hiring an ex-Apple dev would let you get the needed sbox unless they stole technical documentation as they left.

So it either has to be stolen technical docs, or a feature that was put there specifically for their usage. The fact that the ranges didn't appear in the DeviceTree is indeed a bit suspicious, and the fact that the description after being added is just 'DENY' is also suspicious. Why is it OK to describe every range except that one?

But the really suspicious thing is the hash. What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function? Is there any legitimate usage for such a thing? I've never heard of such an interface before.

If it's a genuine backdoor and not a weird debugging feature then it should be rather difficult to add one that looks like this without other people in Apple realizing it's there. Chips are written in source code using version control, just like software. You'd have to have a way to modify the source without anyone noticing or sounding the alarm, or modifying it before synthesis is performed. That'd imply either a very deep penetration of Apple's internal network sufficient to inject backdoors into hardware, or they have one or more agents.

This really shows how dangerous it is to intel agencies when they decide to attack security professionals. Attacking Kaspersky has led directly to them burning numerous zero days including several that might have taken fairly extreme efforts to set up. It makes you wonder what is on these guys' iPhones that's considered so valuable. Presumably, they were after emails describing more zero days in other programs.



> What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function?

I agree. This appears to likely be an intentional backdoor injected at the hardware level during design. At such a low-level I think it could have been accomplished with only a handful of employees in on it. There would have been no need to subvert Apple from the top down with large numbers of people at many levels being privy.

In early silicon there can be a bunch of registers and functions implemented for testing which are later pulled out. Except maybe one set of registers doesn't get pulled but instead a door knock is added with a weak hash function, making the registers invisible to testers and fuzzing.

It seems a little too convenient that the door knock hash was weak. After all, strong hash functions aren't unknown or hard. The reason it had to be a weak hash function was to create "plausible deniability". If it was a strong hash then once any exploitation was discovered there would be no denying the vuln was intentionally placed. If it really was just a test DMA function that someone supposedly 'forgot' to remove before production silicon, I can't think of a reason to have it behind any kind of door knock in the first place.

I read that it was patched by adding these addresses to the "access denied" list. While I don't know anything about Apple security, I'm stunned that any such low-level access list isn't 'opt-in' instead of 'opt-out'. If it was 'opt-in' it seems like any such 'undocumented' register addresses would be denied by default. And if they were on the 'opt-in' list, yet remained undocumented, then it would be obvious to anyone looking at the security docs that something was amiss.



It reminds me of the Linux backdoor attempt that was also made to look like a mistake (== replaced with =) [1].

[1] https://freedom-to-tinker.com/2013/10/09/the-linux-backdoor-...



It should be very easy to add one without somebody noticing. This is the same Apple which shipped a version of macOS for months that added the ability to log in as root with any password only a few years ago.

Their review processes are so incompetent even one of the most security critical components, root login, let a totally basic “fail your security 101 class” bug through. It is absolutely inexcusable to have a process that bad and is indicative of their overall approach. As they say, “one cockroach means an infestation”.



Mistakes happen but Apple's reputation for strong security is well deserved. They invest heavily and the complexity of this exploit chain is evidence of that. Linux has had its fair share of trivial root login exploits that somehow got through code review.


No, that is a level of error similar to delivering cars with no airbag in them for months. In any other industry that would indicate an unimaginable level of process failure. Only in commercial software are egregious, basic mistakes swept under the rug as "mistakes happen".

Just to list a few process failures off the top of my head.

No proofs of specification conformance. No specification conformance tests. No specification. No regression testing. No regression testing of common failure modes. No testing of common failure modes. No enhanced review for critical components. No design conforming to criticality requirements. No criticality requirements. No intention to establish criticality requirements.

In actual safety and security critical software development you do all of those except maybe the first. Doing none of them is rank incompetence and clear evidence you do not know the first thing about actual security that can protect against real professionals. And fancy that, Apple cannot, and never has, protected against attackers with minimal resources like small teams with only a few million dollars.

We can talk about a reputation for “strong” security when they can protect against the standard, commonplace 10M dollar attacks we see every day.



Uh, very few folks outside of Cupertino know better than Mike how much of a total shitshow Apple were a decade or more ago. His team had to deal with their insanity on a regular basis. : - )

Since then, though, they have cleaned up their act (I've never been a fan). As a silly example, I'm pretty sure they finally check into source control the configuration of their networking equipment... Plus do a lot more to counteract the sophistication of today's nation-state attacks. They've come a long way since the hacks of 2014, when they had to scramble to enable 2FA for iCloud — previously used only for account changes or purchases. As for this vulnerability, it might be a plain bug or some NSA-style backdoor with plausible deniability, but we probably won't know which for years.



Idk who Mike Hearn is, and I'm always hesitant to call people out on HN for exactly this reason... but I think OP's airbag analogy was perfectly fair, since it was referring to the Apple bug where you could become root by simply clicking "okay" instead of entering a password. [0] It was comical in its simplicity.

But that said, overall I agree with Mike, in that Apple is clearly committed to security of their users. It's one of the main reasons I buy Apple hardware for my mobile, personal and work devices. Nobody is forcing them to put so much effort into that security, either - they do it largely by choice, and it distinguishes them from other vendors whose business models are inherently in conflict with their customers (I don't want to buy a phone from a company that derives most of its revenue from facilitating psychological warfare between advertisers and me.)

But I've always found this security to come with interesting tradeoffs. On the one hand, I'm pro-privacy and pro-user. But certain security objectives can't be achieved without reducing privacy or increasing restrictions on the user. Over the years I've come to appreciate this tradeoff, and I think Apple does a good job of balancing it. While I certainly don't feel like the device is "fully mine," I do feel like I'm getting some extra security in exchange for giving up some freedom to tinker.

[0] https://arstechnica.com/information-technology/2017/11/macos...



I’m not trying to defend Apple but I think that line of thinking is pretty cynical and could be used to condemn basically any company or open source project that attracts enough interest for attackers.


> I don't think hiring an ex-Apple dev would let you get the needed sbox

That'd probably depend on which team the dev worked in. If they were in the right team, then it might.



What I mean is that (assuming the sbox values are actually random) you couldn't memorize it short of intensive study and practice of memory techniques. If the "sbox" is in reality some easily memorizable function then maybe, but even then, how many people can remember long hex values from their old jobs?


Two points:

a) If a person is using those values daily for years (or even a couple of months), then it's very likely they'd have memorized them

b) Sometimes just knowing the concept exists for sure is good enough, as you can then go and brute force things until you've worked out the values



But having a predictably generated sequence of numbers is what cryptographers prefer

https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number



APTs probably routinely identify and target such developers. With multi-million dollar payouts for single bugs and high state level actor attention, employee profiling is clearly a known attack vector and internal security teams probably now brief on relevant opsec. FWIW the only Apple kernel developer I knew has somewhat recently totally removed themselves from LinkedIn.


People who work on the kernel are not hard to find.


> But the really suspicious thing is the hash. What kind of hardware interface does arbitrary DMA protected by a secret but weak hash function? Is there any legitimate usage for such a thing? I've never heard of such an interface before.

Never attribute to malice that which can be attributed to incompetence. There are plenty of examples in the wild of going halfway with strong security, but halfway still leaves the barn door open.



> Never attribute to malice that which can be attributed to incompetence. There are plenty of examples in the wild of going halfway with strong security, but halfway still leaves the barn door open.

That rule should only be applied in the normal world. In the world of security, where you know bad actors are out there trying to do stuff, it doesn't apply. And there are examples of spy types injecting plans to go halfway with security for their purposes - not that this proves the origin of a given plan, incompetence is still one possibility; it just returns to the original point, that this stuff is mysterious.



As a defender, you should treat malice and incompetence as functionally equivalent. Save the attribution for the post-mortem (or better yet, don't let it come to that).


Go onto LinkedIn, search for Apple Global Security staff and you’ll get an answer. The head of and much of the staff are ex-USIC people. Now perform those searches over time and do a little OSINT and observe a revolving door where they are not so ex-.


I wouldn’t be surprised if one or two very senior people in large tech companies are agency agents, willingly or not.

I don’t really have any proof but considering the massive gain it shouldn’t surprise anyone. The agencies might not even need to pay large sum of $$$ if the said assets have vulnerabilities.



I think the way it’s done is that the code is presented to them to use, Apple probably don’t even code those parts themselves.


Maybe more likely they just have people inside Apple?


This is likely at the scale of Apple and the determination of State Actors.


So much misinformation in this thread. It’s a Hamming ECC, as described here[1].

[1] https://social.treehouse.systems/@marcan/111655847458820583



More evidence for an ECC, obtained by looking at how the 10 output bits of the function depend on its 256 input bits:

Each of the 10 parity bits output by the function is the xor of exactly 104 of the 256 input bits.

Each of the 256 input bits contributes to (= is xor-ed into) either 3 or 5 of the 10 parity bits.

This is in line with the SEC-DED (single error correction, double error detection) ECC construction from the following paper:

https://people.eecs.berkeley.edu/~culler/cs252-s02/papers/hs...

Translating the above observations about the function into properties of the H matrix in the paper:

Each row of the matrix contains an identical number of ones (104).

Each column of the matrix contains an odd number of ones (3 or 5).
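
If you have extracted that 256-entry table yourself (each entry being the 10-bit mask of parity bits a given input bit feeds), checking both properties takes a few lines. A minimal sketch, assuming a hypothetical extracted table; the real values aren't reproduced here:

    def check_hsiao_properties(columns, n_parity=10, row_weight=104):
        """columns[i] = 10-bit mask of the parity bits that input bit i is XOR-ed into."""
        assert len(columns) == 256
        # Each input bit must feed an odd number (3 or 5) of parity bits.
        cols_ok = all(bin(c).count("1") in (3, 5) for c in columns)
        # Each parity bit must be the XOR of exactly 104 input bits.
        rows = [sum((c >> r) & 1 for c in columns) for r in range(n_parity)]
        rows_ok = all(w == row_weight for w in rows)
        return cols_ok, rows_ok, rows

    # columns = [...]  # the 256 extracted masks would go here
    # print(check_hsiao_properties(columns))  # expect (True, True, [104, ..., 104])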



Very interesting, thanks. Summarizing that thread:

- The "hash" is probably an error correcting code fed into GPU cache debug registers which will be stored in the cacheline itself, you're expected to compute the ECC because it's so low level. That is, the goal isn't to protect the DMA interface. (but this isn't 100% certain, it's just an educated guess)

- The "sbox" is similar to but not the same as a regular ECC as commonly used in hardware.

- Martin argues that the existence of such registers and the code table could have been guessed or brute forced, even though a compromise or info leak from Apple seems more likely. Or possibly even from the old PowerVR days. But if it's the NSA then who knows, maybe they are literally fuzzing hidden MMIO ranges to discover these interfaces.

- This is possible because the GPU has full DMA access without an IOMMU for performance reasons, so it's fertile ground for such exploits. Probably more will be discovered.

So that's all reassuring.



Why do you need error-correction code for a debugging feature though? I would not protect debug registers with a hash.


Because you are DMA-ing the raw bits into the cache with the GPU, but the CPU is going to check those ECC codes on read, as the caches on Apple SoCs are ECC-native. It's an integrity 'protection', not a security 'protection'


>also is indicative of a massive organization with truly abysmal levels of internal siloing.

Or a joint project between several organizations.



Or, like, they have a root kit and it works so why reinvent the wheel? They have an attack payload so why reinvent the wheel? Just plug and play all the packages you need until you can compromise your target device.


But there is a very good reason to reinvent the wheel here: to not burn more zero-days than you have to.


Exactly! This is the part of the story that mystifies me completely and I would love to see some explanation.


The attack payload should not be so tied to an exact installation path that you can't just install it via a different exploit chain.


This is a fairly incredible attack, and agree with your analysis. The hidden Safari tab portion where they “re-hack” the device could be bad organizational siloing as you mentioned or indicative of a “build your virus” approach that script kiddies used in the 90s. Could be a modular design for rapid adaptation, ie. perhaps less targeted.


Seems likely a compromise at the GPU or ARM side as equally possible routes.


What do you mean? Both the GPU and CPU design are proprietary to Apple. They used to use regular ARM designed cores but the last one of those before switching to their own core design was something like the A5 days (from memory). It uses the ARM instruction set but isn’t actually designed by ARM at all.

Similar for the GPU too. They may have started with HDL licensed from others (like I think their GPU might actually have been directly based on the PowerVR ones they used to use, but I believe the ARM one is basically from-scratch) but this vulnerability seems unlikely to have existed since then…



> truly phenomenal research capabilities

Maybe a nation state, e.g., APT?



Being able to put together tooling with these capabilities makes the attacker an APT by definition. These are generally assumed to be national intelligence services, though that is an assumption. (Among other things, there are multiple countries where the lines between intelligence agencies and their contractors are... fuzzy.)

And while Kaspersky is refusing to speculate at all about attribution, the Russian government has claimed (without giving specific evidence) that it's NSA.



Steve Weis on Twitter described it best:

“This iMessage exploit is crazy. TrueType vulnerability that has existed since the 90s, 2 kernel exploits, a browser exploit, and an undocumented hardware feature that was not used in shipped software”

https://x.com/sweis/status/1740092722487361809?s=46&t=E3U2EI...



For those interested in the talk by the Kaspersky researchers, the cleaned video isn't uploaded yet but you can find a stream replay here:

https://streaming.media.ccc.de/37c3/relive/a91c6e01-49cf-422...

(talk starts at minute 26:20)





As it's about a 37c3 presentation, here is a comment from Fefe¹ in German: https://blog.fefe.de/?ts=9b729398

According to him, the exploit chain was likely worth in the region of an 8-digit dollar figure.

¹ https://en.wikipedia.org/wiki/Felix_von_Leitner

I guess somebody is going to get fired.



Why? Having exploits “burned” is part of the business.


Exploit yes

Decade old Backdoors no



> Decade old Backdoors no

I really doubt it's a backdoor after reading the blog post and this thread chain from a prolific M1 MacBook hacker (marcan). I think it was just an unused or very rarely used feature that was left enabled by accident.

https://social.treehouse.systems/@marcan/111655847458820583

Some choice quotes.

First, yeah, the dbgwrap stuff makes perfect sense. I knew about it for the main CPUs, makes perfect sense it'd exist for the ASCs too. Someone had a lightbulb moment. We might even be able to use some of those tricks for debugging stuff ourselves :)

Second, that "hash" is almost certainly not a hash. It's an ECC code*. I bet this is a cache RAM debug register, and it's writing directly to the raw cache memory array, including the ECC bits, so it has to manually calculate them (yes, caches in Apple SoCs have ECC, I know at least AMCC does and there's no reason to think GPU/ASC caches wouldn't too). The "sbox" is just the order of the input bits to the ECC generator, and the algorithm is a textbook ECC code. I don't know why it's somewhat interestingly shuffled like that, but I bet there's a hardware reason (I think for some of these things they'll even let the hardware synthesis shuffle the bits to whatever happens to be physically optimal, and that's why you won't find the same table anywhere else).



>I really doubt it's a backdoor after reading the blog post and this thread chain from a prolific M1 MacBook hacker (marcan). I think it was just an unused or very rarely used feature that was left enabled by accident.

Why? Apple isn't exactly a small family business and this is quite the drastic "feature" to be left enabled by accident.

How would one look from your perspective?



> Why?

Because 1. it helps with debugging at development time; 2. it may take unreasonable effort to disable, possibly from a hardware team's point of view with no direct security background; 3. it may be worth keeping around for future patching flexibility.

Source: [0]

> Apple isn't exactly a small family business and this is quite the drastic "feature" to be left enabled by accident.

No matter how large and hierarchical a company is, there will always be teams making hundreds of small, apparently localized decisions in their own line of work, without consulting anyone outside their team, and without seriously considering ramifications. It's humans all the way down.

> How would one look from your perspective?

A feature where you poke a seemingly random 64-bit value into an apparently arbitrary memory address, which grants you access to something you wouldn't normally have. That'd be a backdoor to me.

In the case at hand, the feature neither has a hidden MMIO address (it's discoverable in a small-ish search space), nor does it require a secret knock (but instead apparently just a somewhat convoluted checksum.)

[0]: https://social.treehouse.systems/@marcan/111656703871982875



> Why? Apple isn't exactly a small family business and this is quite the drastic "feature" to be left enabled by accident.

Mistakes happen, maybe it's used in the manufacturing or fab phase to test something. Maybe it's used by the GPU firmware itself to perform some operations.

Maybe it was used by some older SoCs to do something and was just never removed.

Apple may not be a small family business but silly mistakes still happen all over the place in nearly every large system.

To quote another post by marcan, because I feel like it's super relevant when trying to analyse how this kind of stuff comes about.

@nicolas17 @nil Correct. The question is, how many more of these fun bypasses are left undiscovered? The GPU is a huge amount of hardware and firmware.

Computers are so incredibly complicated nowadays that these kinds of hardware bugs are likely sitting around elsewhere as well; they don't come around that often because the amount of effort required to find them is likely a lot, which is likely only viable for a nation state actor.



CoreSight is not some backdoor - it's a debug feature of all ARM CPUs. This looks like a necessary extension to CoreSight to work with Apple's memory protection stuff.

Even though no public documentation exists, I'm sure thousands of Apple engineers have access to a modded gdb or other tooling to make use of it.



One persons debug tool is another’s back door.


That does not explain the weird hashing.


As explained by marcan: it's not "hashing", it's an error-correcting code. Much more understandable in that light.

https://social.treehouse.systems/@marcan/111655847458820583



That the secret registers are in fact cache test registers, as explained at that link, is a very plausible explanation for their existence.

Nevertheless, this does not explain at all the astonishing fact that they were mapped by default in the accessible memory space, unless listed and explicitly denied in the system configuration files.

No amount of incompetence seems enough to explain such a default policy, so the supposition of an intentional backdoor still seems more likely.



> No amount of incompetence seems enough to explain such a default policy, so the supposition of an intentional backdoor still seems more likely.

I think you're overestimating how granular the memory mappings are (or how isolated the debug registers are). They're usually used to control access on a peripheral level, not to individual registers within a peripheral.



Apple's mitigation was in fact to alter boot-configured memory mappings to deny access. (And as to the mappings... if they were in the middle of a range of documented registers, or close to one, sloppiness and poor internal communication are at least plausible...)


I would hope that all memory outside the known ranges is denied by default... Apple should know all the memory mapped hardware in the SoC, so why would they allow IO to something that doesn't exist?

It's just a few lines of code to parse the memory map and deny all undefined regions. As well as being good for security, it also helps find out-of-bounds write bugs, and ensures you can't accidentally ship an out of date memory map.
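
A minimal sketch of that allow-list idea, assuming the parsed memory map is just a list of (base, length) ranges; the addresses below are made up:

    # Only ranges explicitly listed in the (hypothetical) parsed device tree are
    # writable; everything else is denied by default.
    KNOWN_MMIO_RANGES = [
        (0x2000_0000, 0x1000),   # hypothetical peripheral A
        (0x2010_0000, 0x4000),   # hypothetical peripheral B
    ]

    def write_allowed(addr, size):
        """Allow a write only if it falls entirely inside a known range."""
        return any(base <= addr and addr + size <= base + length
                   for base, length in KNOWN_MMIO_RANGES)

    print(write_allowed(0x2000_0040, 8))  # True: inside peripheral A
    print(write_allowed(0x2060_4000, 8))  # False: undocumented, denied by default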



iMessage can be disabled by local MDM for supervised devices, via free Apple Configurator in macOS app store, https://support.apple.com/guide/deployment/restrictions-for-...

  For Wi-Fi–only devices, the Messages app is hidden. 
  For devices with Wi-Fi and cellular, the Messages app is still available, but only the SMS/MMS service can be used.
SMS/MMS messages and non-emergency cellular radio traffic can be disabled by a SIM PIN, e.g. when using device for an extended period via WiFi.


We purchased an iPad with cellular, with the plan to put my home country's SIM card in it so I can still receive SMS (as most of the banks there still require SMS verification when you log in), and it turns out that an iPad with cellular does not really show you SMS messages that aren't from the carrier of the SIM card.


> iPad with cellular does not really show you SMS messages that aren't from the carrier of the SIM card.

Does iPad support SMS? The cellular line is usually only for data, https://www.howtogeek.com/710767/how-to-send-sms-text-messag...

  iPads can't send SMS text messages through Apple's Messages app. Even if you have an iPad with a cellular data plan for mobile internet on the go, you still can't send SMS text messages.


Apple's own user guide (https://web.archive.org/web/20201223140550/https://support.a...) suggests otherwise:

>In the Messages app, you can send text messages as SMS/MMS messages through your cellular service, or ...

Also my own experience is that it at least can receive SMS text messages, just it won't show you if it's not from your carrier (if it's from your carrier, it shows you via a popup window or something, can't really remember as that was several years ago).



No direct experience to share, but that sentence may be referencing Continuity via iCloud, which is optional:

  With Continuity, you can send and receive SMS/MMS messages on iPad using the cellular connection on your iPhone.
> if it's from your carrier, it shows you via a popup window

If it's not shown in Apple's Messages app, maybe it was a carrier-specific app?



iPad can neither send nor receive sms. The only way it can is through a nearby iPhone, or iMessage


I've never understood why iPads can't be used as phones with an ordinary cellphone SIM. Is it simply because Apple doesn't want to pay a Qualcomm licensing fee or some equivalent? Who is it in the chain/ecosystem that does not want tablets being used as full phones, the carriers? Apple?


I'm guessing it doesn't fit well with the carriers' price structure. Adding a tablet / smart watch / etc. is cheaper than adding another phone to your account. I wouldn't have a cellular iPad if it was a lot extra per month, but I think I pay $10 for both the tablet and the watch, which is fine with me.


What are the chances this MMIO register could have been discovered by brute force probing every register address?

Mere differences in timing could have indicated the address was a valid address, and then the hash could perhaps have been brute forced too, since it is effectively a 20-bit hash.
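
As a rough sketch of that arithmetic: a 20-bit check leaves only 2^20 (about a million) candidates for a fixed data/destination pair, which is nothing for an attacker with code execution. The register-poking and success-detection functions here are hypothetical stand-ins, and this ignores any lockout or panic-on-failure behaviour:

    def brute_force_check_value(data, dest, try_dma_write, write_landed):
        """Try every candidate for an (assumed) 20-bit check value."""
        for candidate in range(1 << 20):   # 1,048,576 attempts in the worst case
            try_dma_write(data, dest, candidate)
            if write_landed(dest, data):   # e.g. read the destination back and compare
                return candidate
        return None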



It looks like the registers could have been identified fairly easily via brute force. They're physically close to documented GPU registers, and accessing them triggers a GPU panic, which is how the researchers attributed them to the GPU component. The attackers could have used that same test to identify the existence of the registers.

The part that's less easily explained is how they were able to reconstruct a custom sbox table to execute the debug code. That's where the "insider threat" insinuations are strongest, but personally I'm not convinced that it precludes any number of other plausible explanations. For example, the attackers could have extracted the sbox from: older firmwares, OTA update patches, pre-release development devices (probably purchasable on ebay at some points), iOS beta releases, or a bunch of other leaky vectors.

The researcher basically says "I couldn't find this sbox table in any other binary where I looked for it." Well, that's not necessarily surprising since it appears to be Apple specific and thus there are a limited number of binaries where it might have appeared. And as the researcher notes, this includes now unpublished binaries that might have been mistakenly released. It's totally plausible that the attackers got lucky at some point while they were systematically sniffing for this sort of leak, and that the researcher is unlikely to have the same luck any time soon.



Looking at that sbox implementation, I can't believe it was implemented as a lookup table in the hardware of the chip - there must be some condensed Boolean expression that gives the same result.

The fact the attackers didn't know that Boolean expression suggests they reverse engineered it rather than had documentation.



Notice that the hash value for a data write of all zeros is zero...

And for a single bit, the hash value is a single value from the sbox table. That means this hash algorithm could reasonably have been reverse engineered without internal documentation.
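
A toy simulation of that observation: if the "hash" is linear over XOR, the all-zero write hashes to zero, single-bit writes leak the table entries one by one, and once the table is recovered the value for any write can be predicted. The table below is random, not the real one:

    import random

    random.seed(0)
    TABLE = [random.getrandbits(10) for _ in range(256)]  # stand-in for the real sbox

    def hw_hash(data):
        """data: 256-bit integer; hash: XOR of the table entries for its set bits."""
        h = 0
        for i in range(256):
            if (data >> i) & 1:
                h ^= TABLE[i]
        return h

    assert hw_hash(0) == 0                             # all-zero write hashes to zero
    recovered = [hw_hash(1 << i) for i in range(256)]  # probe one bit at a time
    assert recovered == TABLE                          # full table recovered, no docs needed

    x = random.getrandbits(256)
    predicted = 0
    for i in range(256):
        if (x >> i) & 1:
            predicted ^= recovered[i]
    assert predicted == hw_hash(x)                     # arbitrary writes now predictable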



This 'smells' like a typical way to prevent memory writes to random addresses accidentally triggering this hardware. Doesn't look like it was intended as a security feature.

In fact, this is how I'd implement it if someone said to me it was important that bugs couldn't lead to random writes. This implementation also effectively prevents someone using this feature whilst giving a buffer address they don't know the contents of.

10 bits of security is probably enough for that as long as you reboot the system whenever the hash value is wrong. The coresight debug functionality can totally reboot the system if it wants to.



Like a CRC? I'm reminded of the Broadcom compression algorithm that required tedious reverse engineering, or a look at the Wikipedia page with sample code.


On the Amiga, you had to write to a blitter control register (BLTSIZE?) twice with the same value or it wouldn't do anything. This might be the same, only a lot more paranoid.

But it might also be a backdoor, intended or not.





The (first?) version of the real recording is now up: https://media.ccc.de/v/37c3-11859-operation_triangulation_wh...


Begins @ 27:21

In addition to the contents of the presentation, in terms of timeline...

2018 (September): First CPU with the undocumented MMIO present launched, the Apple A12 Bionic SoC.

2021 (December): Early exploit chain infrastructure backuprabbit.com created 2021-12-15T18:33:19Z, cloudsponcer.com created 2021-12-17T16:33:50Z.

2022 (April): Later exploit chain infrastructure snoweeanalytics.com created 2022-04-20T15:09:17Z suggesting exploit weaponized by this date.

2023 (December): Approximate date of capture (working back from "half year" quoted analysis period + mid-2023 Apple reports).

The presenters also state that signs within the code reportedly suggested the origin APT group has used the same attack codebase for "10 years" (i.e. since ~2013) and also uses it to attack macOS laptops (with antivirus circumvention). The presenters note that the very "backdoor-like" signed debug functionality may have been included in the chips without Apple's knowledge, e.g. by the GPU developer.

So... in less than 3.5 years since the first vulnerable chip hit the market, a series of undocumented debug MMIOs in the Apple CoreSight GPU requiring knowledge of a lengthy secret were successfully weaponized and exploited by an established APT group with a 10+ year history. Kaspersky are "not speculating" but IMHO this is unlikely to be anything but a major state actor.

Theory: I guess since Apple was handed ample evidence of ~40 self-doxxed APT-related AppleIDs, we can judge the identity using any follow-up national security type announcements from the US. If all is quiet it's probably the NSA.



It's really a pity they explain all the mistakes that helped the malware be detected.


It's not, it really isn't. Honestly, just apply this mentality to one other scenario to test the waters. We should stop publishing YARA rules because it tips our hand to the malware makers? It's nonsense to even say.


The extra hardware registers might have been discovered by examining the chip itself. One could find where the registers were on it, and notice some extra registers, then do some experimenting to see what they did.


Maybe, but chips already have vast, vast quantities of physical registers in a big blob.

Assuming it wasn't a lucky guess, timing attacks are often used to find this stuff.



Isn't it easier just to pay one of the hundreds of employees with access to the chip design? Or even get it without paying, by appealing to patriotism?


How many ex-Apple employees work(ed) at NSA? It may just have been the right person doing their regular 9-5 job, with no subterfuge. The list of employers for Hardware security folks is likely a couple of dozen companies, and Apple and NSA are among the most prestigious of them. I expect some employees to move in both directions.


I know of two, one from my team. Don't know how long they stayed there, though.


Or just covertly tell Apple to hand over its documentation / to knowingly leave gaps in the defenses for NSA to exploit.


> The extra hardware registers might have been discovered by examining the chip itself.

Perhaps. But it's easier to phone the technical librarian and say "Hi! I'm Bob from the password inspection department. Can you verify your current password for me?"



Do you know how this is possible? Would decapping the SoC or taking an xray of it provide a physical map of the registers?


You can find the register file relatively easily because it's a block of memory that's the same on each core but isn't cache, but it isn't a 1:1 map from architectural registers that we would recognize: the chip is designed to find an optimal allocation of slots in the register file to runtime values.


These particular registers aren't part of the CPU proper anyway, so not in the register file in that sense -- they're mmio mapped, and https://securelist.com/operation-triangulation-the-last-hard... concludes that they are "a block of CoreSight MMIO debug registers for the GPU coprocessor".


Indeed, my bad for only skimming.


That’s where the GPRs would live. There’s no reason you have to put weird MMIO there too.


Maybe, or somebody talked.


>Hardware security very often relies on “security through obscurity”, and it is much more difficult to reverse-engineer than software, but this is a flawed approach, because sooner or later, all secrets are revealed.

The latter works when you are not as big as Apple. When you are as big as Apple, you are a very hot target for attackers. There is always an effort vs. reward trade-off when it comes to exploiting vulnerabilities. The amount of effort that goes into all this is worth thousands of dollars even if someone is doing it just for research. If I was doing this for some random AliExpress board it would be worth nothing, and security by obscurity would probably mean no one really cares, so the latter part works there. But I wonder what Apple is thinking when they use obscurity, because people must start working on exploiting new hardware from day 1. You can literally get one on every corner in a city these days. Hardware security by obscurity would be fine, for example, for cards sold by someone like Nvidia to only some cloud customers, which are then assumed obsolete in a few years, so even if someone gets those on eBay the reward is very low. iPhones, on the other hand, are very much a consumer device, and people hang on to their devices for a very long time.



Philip Zimmermann a while back was working on a secure phone product called the Black Phone. I tried to convince him that a secure phone should not contain any microphones of any kind. That sounds a bit weird for a phone, but it's ok, if you want to make a voice call, just plug a headset into it for the duration of the call. He wasn't convinced, but this iphone exploit makes me believe it more than ever.


Perhaps a physical switch that connects or disconnects the electrical signal from the microphone to the handset could be a more convenient approach.

There is a photo of Mark Zuckerberg with a cut off 3.5mm jack plugged into his laptop - likely to achieve a similar outcome.



My phone has a hardware kill switch for mic/camera.


>Although infections didn’t survive a reboot

Reminder to reboot your iPhone at least weekly if you are concerned about this kind of attack.



In a week, a lot of data can be exfiltrated. Then after you have rebooted, the threat actor reinfects your device.

Frequently rebooting the device can’t hurt but it likely isn’t going to prevent a threat actor from achieving their objectives.

The best mitigation we have is to enable lockdown mode.



Why not both? Lockdown + frequent reboots.


How frequent?


Depends on usage and threat models? If one expects their device to be attacked immediately after a forced restart, SIM PIN can stop non-emergency usage of the cellular line, while Apple Configurator can restrict WiFi to a predefined SSID that captures device traffic for forensic analysis.


> reboot your iPhone at least weekly

with the Hard Reset key sequence, https://www.wikihow.com/Hard-Reset-an-iPhone



Sorry for the lay question but what’s the benefit of the hard reset over a general restart?


Layperson here so just guessing. If not using the hard reset method, the exploit might fake the reboot sequence to maintain its own persistence. AFAIK, a hard reset is built in hardware and thus impossible to fake.


I believe they’re assuming that malware can do a pretend reboot whereas the hardware keystroke can’t be faked.


No, they could monitor when devices rebooted and re-infect them immediately, as the article states.


See also the article from Ars Technica in June 2023: https://arstechnica.com/information-technology/2023/06/click...


I see that one of the steps in the exploit was to use GPU registers to bypass kernel memory protection. Does it mean that the vulnerability cannot be fixed by an update and existing devices will stay vulnerable?


The mitigation is that the MMIO range in question has been marked as unwritable in the device trees on recent versions of iOS.


https://x.com/alfiecg_dev/status/1740025569600020708

  It’s a hardware exploit, using undocumented registers. It can only be mitigated against, but not fully patched.


I don't think there is any JIT on the GPU, and all the code has to go through a host-side kernel call, so it should be able to protect the register, I guess?


The kernel cannot protect against this, in fact the attackers have full read/write control and code execution capabilities to mount this attack. The fix is blocking this range from being mapped using features that are more powerful than the kernel.


Wow, that's amazing. I wonder if attackers like this feel unappreciated since they can't take credit for their work.


Public key cryptography was developed in the 1970s at GCHQ, but that was classified.


I didn't hear anyone mention fuzzing once. I guess there was probably very specific insider knowledge being made use of and they wanted to point a finger, which is fair enough I guess. I'm just a bit surprised that it has not been mentioned so far in the discussion. Anyhow, it seems that an allow-list approach by Apple would have been better than a deny-list approach! Literally not checking out of expected bounds!


This is a really good question.

Fuzzing is about searching the state space of an entity - a function, a method, or I suppose even a hardware block - for unexpected, undefined, or maybe even undocumented behavior.

Certainly this could have been used by the exploiters of these bugs to find undocumented but desirable effects in iOS hardware blocks or devices.
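
A purely conceptual sketch of what such fuzzing might look like; the register access functions are hypothetical stand-ins, on real hardware you would need a kernel-level harness, and the observable signal for these particular registers was apparently a GPU panic:

    def fuzz_mmio(base, span, stride, read_reg, write_reg, probe=0xDEADBEEF):
        """Sweep candidate register offsets near a documented block and log anything that reacts."""
        hits = []
        for offset in range(0, span, stride):
            addr = base + offset
            try:
                before = read_reg(addr)
                write_reg(addr, probe)
                after = read_reg(addr)
                if after != before:            # the register latched our value
                    hits.append((hex(addr), hex(before), hex(after)))
            except Exception as fault:         # a fault/panic is also a signal
                hits.append((hex(addr), "fault", repr(fault)))
        return hits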



It's one of the major arguments against backdooring systems, even if you think this to be acceptable. In the end you create a backdoor for everyone, even if you don't do it as moronically as here. You are the hostile actor.


If they were using a deny list, that sounds like an intentional backdoor.


It might just be that they couldn't think of another way to code it though.


> If we try to describe this feature and how the attackers took advantage of it, it all comes down to this: they are able to write data to a certain physical address while bypassing the hardware-based memory protection by writing the data, destination address, and data hash to unknown hardware registers of the chip unused by the firmware.

Did the systems software developers know about these registers?



I'm curious to know from experts if there's anything Apple can do to create a step-change in terms of security of iPhones? Like if the going rate for a zero day is $1 million, is there anything Apple can do that can drive that up to $2 or $3 million? Or is it just going to be a perpetual cat and mouse game with no real "progress"?


I am by no means a security expert whatsoever. Period. But reading the article carefully, there is a step in the chain of exploits (CVE-2023-32435) which depends on exploiting Safari. Apple implemented a "Lockdown mode" (https://support.apple.com/en-us/105120) which might have handled this (?).

Answering more broadly to your question, the "step-change" that you're asking for is precisely the "Lockdown mode" in iOS devices. It disables most of the features in order to reduce the attack surface of the device.



If you read a better article with technical details [1], you'll see that Apple SoCs contain a "feature" (that resembles a debugging tool) that allows bypassing memory protection by writing into undocumented and unused GPU registers. Apple locks down kernel memory to stop exploits, but these registers allow bypassing the lock.

This is the key vulnerability, without which the whole exploit chain would be useless.

[1] https://securelist.com/operation-triangulation-the-last-hard...



[flagged]



The original article doesn't have as many technical details as the article I linked to. That is why I added a link to another article which is better in my opinion and it is difficult to understand the vulnerability from original article. Original article also doesn't say anything about how Apple tried to fix it.


The Safari vulnerability wasn't necessary (the device was completely owned before that), and was really just a "nice to have" - it allowed verification of the targeted user and, presumably, customizable malware delivery. From the article, if you look at the bullet points under the Kaspersky diagram of the exploit chain:

> After exploiting all the vulnerabilities, the JavaScript exploit can do whatever it wants to the device and run spyware, but attackers chose to: a) launch the imagent process and inject a payload that cleans the exploitation artifacts from the device; b) run the Safari process in invisible mode and forward it to the web page with the next stage.

In other words, if looking at the diagram, https://cdn.arstechnica.net/wp-content/uploads/2023/12/trian... , it's completely "game over" once you get to the skull icon in the lower left corner, and the Safari exploit is after that.



Yeah, lockdown mode might have handled it. If I'm reading the article right, the first step of the exploit was a PDF file sent with iMessage.

When I tried out lockdown mode out of curiosity, I found that it was aggressive about blocking PDF viewing. I quickly bailed on it because I often read research papers on the web, and it switched them from view to download.



It could author its format parsers in https://github.com/google/wuffs, and make them BSD-like open source to maximize adoption.

An even bigger change: It could allow users to choose their iMessage client freely. Why not open up the protocol? I’m sure a security focused client would be popular and in the grand scheme of things easy to author.

Perhaps they could open up more of the OS and apps. Perhaps their claims about the security of users and the App Store are kind of BS.



I struggle to believe that a third party iMessage iOS app would be a security improvement, beyond Lockdown Mode https://support.apple.com/en-us/105120.

Either a third party app would still use the same vulnerable frameworks as iMessage, or they would re-implement them potentially with more vulnerabilities, or just not implement the features, which is what Lockdown Mode gives you.



One could argue the same about alternatives to Safari, and yet Chrome has proven to be more secure than Safari (based on Pwn2Own results).


I would not argue that about web browsers, because there are plenty of others out there. I don't think Google would make an iMessage client for iOS.


But others probably would. Maybe the user experience on the security-focused clients will be degraded (missing features or disabled features for the sake of privacy and security, lower performance due to some other security features), but you'd still have this kind of niche client.

Browsers are not the only thing concerned here; there are many areas impacted by this philosophy: Qubes OS for Linux distributions, replacing containers (and especially Docker) with daemon-less runtimes or even MicroVMs (Firecracker, Podman, ...). I'm sure there are also heavily sandboxed clients for Matrix, SMS or email.



You do realize that this is an extremely complicated exploit which is not being used on the average user, right?

And being open source hasn’t prevented Android from being much more vulnerable to these kinds of exploits.



It's already 2-3 million+. Apple has amazing security, especially for the iPhone, and continuously monitors it and dishes out silent patches. For a REALLY high level example, it restricts system calls per process and requires all calls to be signed with an Apple key, AND it restricts who you can make the system call to; these are continuously monitored and updated. Not only this, but persistence on iPhone is effectively dead, meaning you have to reinfect the device after every reboot. One of the big things you notice in the article is the use of ROP; Apple requires every executable page to be signed by them, hence why you have to have these assfisting ROP chains.


2-3 million dollars is not "amazing". That is less than the cost to open a McDonald's. You can get a small business loan in the US for more than that. There are literally tens of millions of people in the world who can afford that. That is 1/5 the cost of a tank.

2-3 million dollars is pocket lint to people conducting serious business, let alone governments. It is at best okay if you are conducting minor personal business. This ignores the fact that attacks in the 2-3 million dollar range are trivially wormable. If you had actual cause to hack every phone you are only incurring marginal cents per attack. Even relatively minor attacks, like targeting 10,000 people, work out to less than the cost of the phone per target.



> 2-3 million dollars is not “amazing”.

I don't know. $2-3m for reading code in Ghidra and throwing stuff at a wall until something sticks? Maybe some fuzzing, etc.

I get that you theoretically could find an exploit that for example, you send to 100 known wealthy people, and with it you steal saved cookies + device IDs from financial apps and then try to transfer their funds/assets to an account you control but...

Could you really pull that off 100 times before Apple catches on?

I guess you could... easily... now that I think about it.



this has the (un)fortunate consequence of being illegal. Writing exploits and selling them to a friendly government, on the other hand, is totally legal. Plus, then you can sell support contracts for that sweet recurring revenue!


This also makes you a target for enemy services (for enabling "friendly government") and friendly services for being a potential whistleblower.

Quite the cost in my eyes...



That is good info, but why does Apple make it non-obvious how to reboot an iOS device, and AFAICT there is no way to schedule a regular reboot?


Need to restart your non-responsive iPhone, hope you have some dexterity...

----

Force restart iPhone

If iPhone isn’t responding, and you can’t turn it off then on, try forcing it to restart.

    1. Press and quickly release the volume up button.
    2. Press and quickly release the volume down button.
    3. Press and hold the side button.
    4. When the Apple logo appears, release the side button.


We'd need to scrap decades of work in hardware and software for that.

Modern software sits on a foundation that was designed for a different era. They didn't have in mind the current challenges in terms of security and scale.



What do you mean "no real progress"? The price used to be $100.


I mean progress from today.


I don't understand what you mean. They've always been making progress, driving the price up. They can just keep doing what they're doing, and there will be progress from today.


Is that actually true? Has the price of these exploits been going up year after year, or has it topped out at some level?


It’s been going up consistently. The number of groups that can field a full chain these days is dwindling.


Perhaps Apple should provide research devices directly to the US's and China's intelligence agencies and pit them against each other to help close more vulnerabilities. The agencies can decide on their own where to strike the balance between offense and defense, but I suspect it would lead to more closed vulnerabilities over time.


Intelligence agencies unfortunately seem to have a bad track record when it comes to disclosing vulnerabilities, they’re very offensive-focused.


I think that would be the case if they were given to the US and other Western intelligence agencies. My scenario requires competition between geopolitical opponents to create an incentive to worry about defense as well.


Yes it has been going up.


Sure. Rewrite sensitive parts of their stack in memory safe languages. They have Swift after all. A lot of the iOS security improvements over time have really been more like mitigations that try to contain the damage when the giant pile of decades-old C gets exploited.


That is exactly their plan. Swift could always link into C applications, and they have recently come out with C++ interoperability [1], so things like WebKit etc. can start having parts rewritten, or new parts written from the start, in Swift. That way they can gradually replace C and C++ codebases instead of trying to rewrite everything (which sucks because even for things much, much less complex than WebKit, you can have a team working for three years on a replacement and it'll have fewer features than the original had when you started).

They’re even working on an embedded subset for embedded devices so things like microcontrollers like battery management, the Secure Enclave etc. can run it.

1. https://m.youtube.com/watch?v=lgivCGdmFrw



They’re working on it, but a memory-safe language doesn’t help you with some of the attack surface that the attackers exploited here.


I think memory safety + integer overflow checking by default would have blocked many of these. Not the hardware mitigation disable but getting to the point where that matters required some safety problems that can be excluded by better languages.


I mean, this is already an extremely complex chaining of exploits that requires extremely sophisticated research. I can assure you that this is not being used on the average person.


This made me laugh: "Upon execution, it decrypts (using a custom algorithm derived from GTA IV hashing) its configuration [...]"

From https://securelist.com/triangulation-validators-modules/1108...



At least the first version of the recording is now up: https://media.ccc.de/v/37c3-11859-operation_triangulation_wh...


Knowing more about the exfiltration component, where it sends data to a remote server, would be helpful. According to the article it’s sending large microphone audio recordings. I assume a company like Kaspersky would explicitly deny all outgoing network connections and then approve them one by one.


There is a series of posts on this including one that details the malware payload...

https://securelist.com/trng-2023/



yeah, I’m wondering the same. Maybe they can’t point a finger at who did it, but there were no clues on the exfiltration?


> yeah, I’m wondering the same. Maybe they can’t point a finger at who did it, but there were no clues on the exfiltration?

From the articles at the above link...

C&C domains

Using the forensic artifacts, it was possible to identify the set of domain names used by the exploits and further malicious stages. They can be used to check the DNS logs for historical information, and to identify the devices currently running the malware: addatamarket[.]net, backuprabbit[.]com, businessvideonews[.]com, cloudsponcer[.]com, datamarketplace[.]net, mobilegamerstats[.]com, snoweeanalytics[.]com, tagclick-cdn[.]com, topographyupdates[.]com, unlimitedteacup[.]com, virtuallaughing[.]com, web-trackers[.]com, growthtransport[.]com, anstv[.]net, ans7tv[.]net
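
A quick sketch of that DNS-log check (brackets removed from the defanged domains; the log format is an assumption - adapt the parsing to whatever your resolver actually logs):

    IOC_DOMAINS = {
        "addatamarket.net", "backuprabbit.com", "businessvideonews.com",
        "cloudsponcer.com", "datamarketplace.net", "mobilegamerstats.com",
        "snoweeanalytics.com", "tagclick-cdn.com", "topographyupdates.com",
        "unlimitedteacup.com", "virtuallaughing.com", "web-trackers.com",
        "growthtransport.com", "anstv.net", "ans7tv.net",
    }

    def matches_ioc(hostname):
        hostname = hostname.rstrip(".").lower()
        return any(hostname == d or hostname.endswith("." + d) for d in IOC_DOMAINS)

    def scan_dns_log(path):
        # Print any log line whose tokens include an IoC domain or a subdomain of one.
        with open(path) as log:
            for line in log:
                if any(matches_ioc(token) for token in line.split()):
                    print(line.strip())

    # scan_dns_log("/var/log/dns/queries.log")  # hypothetical log path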



Some agencies will be very sad now...


Those will be the most delicious tears wept in all of 2023.


Are hashes of the data ever used in known chip debugging features?

Since they're supposed to be disabled in production, what would be their point?

I'm no electronic engineer, but isn't it best for them to be fast and simple, to reduce the chance that they cause interference themselves..?

And isn't it strongly unlikely that an attacker in the supply chain (TSMC??) would be able to reliably plant this in all Apple chips from the A12 to the A16 and the M1 ??



Related:

4-year campaign backdoored iPhones using advanced exploit - https://news.ycombinator.com/item?id=38784073

(We moved the comments hither, but the article might still be of interest)



More important than getting their newly found exploits, you get to know which of yours might be compromised. Prevents counterintelligence.


Maybe I'm too dumb to find it on this page but if you are looking for the actual recording instead of a calendar entry in the past, it's here (a stream dump for now, fast forward to 27 mins):

https://streaming.media.ccc.de/37c3/relive/11859



Why would the attackers target Kaspersky employees? Seems like a great way to get your exploit chain exposed


Perhaps Kasperspy is doing offensive work for someone?


Does Lockdown Mode protect against this?


I think Lockdown drops most iMessage features, so I would suspect the answer is yes. But as far as I can tell, Lockdown prevents the use of MDM, so it might be a net negative for security… instead, using the MDM policy that disables iMessage might be preferable.
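For what it's worth, a hedged sketch of what such a restrictions profile could look like, generated with Python's plistlib; the allowChat key (reported to disable the Messages app on supervised devices) and the placeholder identifiers are assumptions for illustration, not a vetted deployment profile:

    # Sketch only: write a .mobileconfig with a restrictions payload.
    # The allowChat key is an assumption (reportedly supervised-only) for
    # disabling the Messages app; verify against Apple's MDM documentation.
    import plistlib
    import uuid

    restrictions = {
        "PayloadType": "com.apple.applicationaccess",
        "PayloadVersion": 1,
        "PayloadIdentifier": "example.restrictions",   # placeholder identifier
        "PayloadUUID": str(uuid.uuid4()).upper(),
        "allowChat": False,                             # assumption: disables Messages
    }

    profile = {
        "PayloadType": "Configuration",
        "PayloadVersion": 1,
        "PayloadDisplayName": "Disable Messages (example)",
        "PayloadIdentifier": "example.profile",         # placeholder identifier
        "PayloadUUID": str(uuid.uuid4()).upper(),
        "PayloadContent": [restrictions],
    }

    with open("disable-messages.mobileconfig", "wb") as fh:
        plistlib.dump(profile, fh)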


You can still supervise, which allows for that all the same, IIRC.


It likely does. Lockdown Mode stops most of iOS's automatic processing of message attachments, and this was delivered via a message attachment.


Isn't the most obvious answer that Apple, like other US tech firms such as Google, simply creates these wild backdoors for the NSA/GCHQ directly? Every time one's patched, three more pop up. We already know Apple and Google cooperate with the spy agencies very eagerly.


> We already know Apple and Google cooperate with the spy agencies very eagerly.

The evidence clearly indicates otherwise…



How so? Any competent intelligence service will not just depend on the goodwill of a corporation to secure access to assets and intelligence.

If they cooperate that's good and convenient, but that does not mean the intelligence service will not set in place contingencies for if the other side suddenly decides not to play ball anymore.



I said nothing about anything you stated; that’s all clearly possible. I specifically refuted the unsupported claim that Apple “eagerly cooperates with spy agencies”, where there’s ample evidence to support the opposite claim.


Ahem, Snowden, PRISM anyone?


Ahem, you mean you have a single example, from a decade ago, one where Apple was hardly a key player (hence why Apple didn’t sign onto PRISM until half a decade after Yahoo, Microsoft, Google, et al.), as conclusive evidence of “eagerness to partner with spy agencies”, despite numerous public cases where they’ve done the opposite… got it!


That makes sense, would you agree to the revised statement:

"We already know Apple cooperated with the spy agencies behind the publics back"?



No, I won’t agree to context-free blanket statements which are specifically worded to imply something that is simply not provably true, especially given evidence to the opposite. If you knew anything about PRISM at all, even the technical details publicly available with minimal effort on your part, you wouldn’t be asking.


I consider that plausible with Google due to Google's funding history [0], but Apple is afaik way less "influenced" and the way this pwn was pulled off could also have been done by compromising Apple's hardware supply chain and not Apple itself.

Particularly considering how in the past Apple has been very willing to take negative headlines for not giving US agencies decrypted access to the iCloud accounts of terrorist suspects. With Google, I don't remember it ever having been the target of such a controversy, which suggests it willingly complies with all incoming requests.

[0] https://qz.com/1145669/googles-true-origin-partly-lies-in-ci...



Reminder that Lockdown Mode helps reduce the attack surface of your iPhone. It also helps tremendously with detection. https://support.apple.com/en-us/105120


I've had Lockdown mode enabled for a few months. It's great, and not much of an annoyance at all. You do need to be fairly tech-savvy and remember that it's enabled, because sometimes something silently breaks and you need to opt-out of it (which you can do for a specific website, or WebViews within a specific app). And it won't auto-join "insecure" WiFi which can be annoying at a hotel, but frankly it's probably for the best. Also you won't receive texts with attachments in them, which is usually desirable but breaks workflows like activating a new SIM card while traveling (it's possible this was broken for me due to some other setting to exclude texts from unknown numbers).

The most noticeable difference is that SVG elements (?) are replaced with emojis. I'm not sure how that fallback works but it's funny to see buttons have seemingly random emojis embedded in them. (Does anyone know the details of how this replacement is done? Is it actually glyph fonts being replaced, not SVG?)



Crazy!


Years ago I argued about the danger of PDFs with another account and was told not to be a paranoid nutjob.

Told you so.

edit: The fact that this obvious statement gets upvoted above the Apple backdoor at 22:40 of the talk also says a lot.

edit1: https://imgur.com/a/82JV7I9



There's a talk at CCC today.


>This attachment exploits vulnerability CVE-2023-41990 in the undocumented, Apple-only TrueType font instruction ADJUST for a remote code execution. This instruction existed since the early 90’s and the patch removed it.

This is getting ridiculous. How many iMessage exploits have there now been via attachments? Why aren't Apple locking down the available codecs? Why isn't BlastDoor doing its job?

This is really disappointing to see time and time again. If a simple app to send and receive messages is this hard to get right, I have very little hope left for software.



If I were an embassy employee (covert or overt), I'd want zero iMessage features beyond ASCII and the thumbs-up/down reactions. No attachments, no GIFs, no games, no Apple Pay, no easter eggs, no rich text

Apple really needs a paranoid mode





If I've read the rest of the documentation correctly, the exploit is actually triggered from an attached ".watchface" file, which of course, has the font vulnerability in it.

I'd like to meet the person who suggested even sending .watchface files as iMessage attachments in the first place. What were you thinking? Did you not have a large enough attack surface already?



I wonder why attachments would ever be loaded from unknown contacts.


iOS has a reputation for having the best security, but how many times have Android/WhatsApp had these sorts of silent-instant-root exploits via invisible messages? I don't remember it happening. Maybe the strategy of writing lots of stuff in Java is paying off there.


Android has had zero click exploits. For example, Stagefright [1]

And even better, there are plenty of old Android phones out there that will be vulnerable to various exploits because of weak OTA update support policies.

[1] https://en.wikipedia.org/wiki/Stagefright_(bug)



Sigh…there has never been an 0day Stagefright exploit in the wild. And even if there was it wouldn’t have worked on all Android devices due to the OS differences among OEMs.

Also, there are plenty of old iPhones that do not receive updates anymore and are just as vulnerable so I’m not sure why you needed to get that in.





Yes, but that wasn't a zero day. WhatsApp's own team found that, and it wasn't a zero-click exploit, you had to be in a video call with the attacker.


It’s quite unfortunate that Apple doesn’t allow users to uninstall iMessage; it seems to be the infection vector for advanced threats like this one, NSO Group, etc. Presumably it’s to avoid the support burden, but they could gate it behind having Lockdown Mode enabled for a week or something, to shake out the vast majority of mistaken activations.


> unfortunate that Apple doesn't allow users to uninstall iMessage

It can be disabled via Apple Configurator, https://news.ycombinator.com/item?id=38785311



what does "uninstall iMessage" mean? you can disable iMessage right in the settings so you only receive SMSs


Which is what lockdown mode already does


Actually lockdown is better. It leaves E2E encryption alone, but restricts attachment types, which should be enough to block the initial exploit in the chain.

Disabling iMessage would fall back to SMS, allowing messages to be snooped / modified in transit.

Hopefully they’ll also have a way to disable RCS, since it allows attackers to modify messages, and also has a larger implementation attack surface than SMS.



No, Lockdown Mode doesn't disable iMessage.

"Most message attachments are blocked and some features are unavailable."

iMessage with blue bubbles still works in Lockdown Mode. I think GIFs and certain other attachments don't display properly, but I can share photos, audio clips and video, so I otherwise don't really notice that Lockdown Mode is enabled.



Unfortunately, Lockdown Mode disables Live Photos from being received via iMessage... That's a pretty big iPhone feature to not work under Lockdown Mode!


Not received at all or received as a still photo?


Still photo.


Can someone explain to me why we can load vast quantities of untrusted code and a wide variety of image formats in our browsers all day long and be mostly safe today, but somehow even first party messenger apps seem to be a relatively easily compromised? Why can't messenger apps be sandboxed as well as browsers?


Sending these through messaging apps is appealing because that usually requires zero user action - you just send a message and the device runs the exploit as it generates preview thumbnails.

But browser exploits require the user to visit an infected website, which is much tougher. If I receive an email or SMS with "visit applesupport.info", I'm not going to click it.



Note that the second half of this exploit chain involves going around and exploiting the web browser.


this exploit chain involved a browser vulnerability; your premise is flawed


It's all relative. Chrome has plenty of sandbox escapes. Microsoft found one lately where Chrome was passing strings from JS straight into the Windows TTS engine, which turned out to be parsing XML from it with a C++ parser that was full of memory errors.


Do you believe your other messaging apps lack vulnerabilities? What is most popular will always be most picked on.


I remember people were very passionately arguing iMessage can only be secure if the only client is the Apple sanctioned one

> the unknown attackers kept their campaign alive simply by sending devices a new malicious iMessage text shortly after devices were restarted.



There are different aspects of security here. iMessage is tied to a physical device, so if you want to spam people, you have to purchase and burn through iPhones.

Rate limiting phishing attacks is certainly a useful security feature, but it does nothing to protect against targeted attacks.



In the face of this kind of threat, it's pretty obvious why Apple treated Beeper as a security risk and took appropriate measures to secure iMessage.


Beeper is the user's choice. And Apple is preventing other companies from providing a more secure iMessage alternative, e.g. one that doesn't even parse messages from people not in the contact list, or doesn't even parse anything without a click, etc.

Apple has had so many zero-click exploits in iMessage, yet they insist that you have to use Lockdown mode to do anything about it, and then proceed to bundle Lockdown mode with lots of potentially unwanted behavior.

I don't think there's any way to claim that Apple is just doing whats in the customer's best security interest.



>Beeper is the user's choice.

Me deciding to ride the subway to work for free is a user's choice, but that doesn't mean it's right. Using infrastructure for free because I feel like it is certainly my choice but I can't justify anger when someone makes me pay to use it since I should have paid in the first place. Currently Apple doesn't run iMessage as an open standard so it runs in "authorized riders only" mode.

>I don't think there's any way to claim that Apple is just doing whats in the customer's best security interest.

This isn't what I claimed. I claimed Apple treated unauthorized 3rd party access to their infrastructure as a security risk and worked to shore up that risk. As you pointed out, there have been plenty of zero-click exploits in iMessage. Limiting the devices sending iMessages increases security. I believe Apple doesn't allow iOS VMs in general for the same reason.



I don’t think that’s clear at all. I imagine it’s still trivial for attackers to send specially crafted one-off payloads.


The attack surface is still smaller if Apple restricts iMessage to official devices, compared to any rooted Android phone being able to spam iMessage payloads.


The security model is basically orthogonal.


They gotta, gotta, have those blue bubbles. Some teenagers fight to get an overpriced phone solely to avoid the deep deep shame of having a green bubble when chatting.

If Apple were forced to stop making iMessage the exclusive option and ship a plain SMS application, they might see a sudden, noticeable drop in market share.



Teenagers wanting blue bubbles and people looking to uninstall iMessage because it's a threat vector are two completely disjoint sets of people.


Absolutely - but the business interest of wanting to keep teenagers on iPhones absolutely would impede Apple from allowing users to uninstall the application.


Blue bubbles bad syndrome. Gotta bring it up whenever humanly possible.

Nvidia has a very similar green man bad syndrome going on too. As the length of an HN discussion on Nvidia increases, the probability of someone mentioning that Linus said “fuck you nvidia” approaches 1, even though it’s irrelevant to the topic and he's a mercurial asshole who's said a whole lot of things.

The casual fanboyism disrupts all discourse on these topics because there’s a large minority of users who have adopted what PG describes as “hater-ism” and allowed it to dominate their thinking on a topic. Negative parasocial attachment is the same process as positive parasocial attachment and just as problematic, but largely never called out.

http://www.paulgraham.com/fh.html

In short: lotta fanboys on these topics who don't even realize they're fanboys/adopting fanboy frames, because they don't realize that anti-fanboys are still parasocially attached too. And we've casually accepted the low level of discourse on these topics, and it pollutes the whole discussion of a lot of interesting topics because of who's doing them.



Can you explain how disliking Nvidia for being systematically problematic at some point (and maybe still being problematic) is fanboyism or parasocial attachment?


They knew exactly what they were doing when they chose that nice blue and that cheap looking green.


No they didn't, because the green was first in 2007, when iPhone only supported SMS. It was 4 years later that iMessage launched. The conversation probably went like:

"Okay well, now that we're launching an alternative to SMS, how will we distinguish iMessage messages from regular SMS messages?"

"Hm, well, SMS messages are green, so what if we picked another color?"

"Yeah okay, blue? ¯\_(ツ)_/¯"

"Sounds good, mock it up and send it to the engineers"

edit: The reason for picking green originally was probably because all the "communication"-related apps had a green color scheme, including Messages. This persists today — the app icons for Phone, Messages, and FaceTime are all green.



Never forget the icon they used for Windows servers: https://i.stack.imgur.com/5rYVr.png


That’s hilarious


They've already announced that they will be adding RCS support.


... And they've already announced[1] that they will be retaining the exclusive blue bubble for iMessage messages for... reasons? The green/blue bubble distinction will continue even when there is no technical difference between messages.

1. https://mashable.com/article/apple-rcs-support



People use “green bubbles” to just mean “no guaranteed delivery or delivery receipts, no read receipts, very low-quality images and videos, bad support for reactions, threaded replies, and group chats”.

…the color isn’t the problem. It’s shorthand for the real underlying issues



The color is a big part of the problem, white on green is one of the hardest to read because of the distribution of color cone cells in our retinas. Only maybe white on yellow would be worse.


Dark mode is the way to go anyway.


What color combo does plain sms get there?


Other person is white text on black background. Yours is white text on green background. The app's background is also black.


Yep, why would they drop it? It’s especially egregious as Apple disregards its own human interface guidelines to make green bubbles excessively low-contrast. Very intentional.


I bought the very first iPhone the day after its release. Long before iMessage was introduced, the phone only supported SMS. People forget, but those bubbles, the original SMS ones, were green. Blue bubbles showed up only when iMessage debuted four years later.


There is a technical difference though- the current RCS standard doesn’t have end to end encryption.


Asking who had the motive to target Russian government officials, knowledge of the attack vectors, a history of doing so, and the technical and logistical ability to perform it leads Kaspersky and myself to the only rational conclusion: that Apple cooperated with the NSA on this exploit. I assume they only use, and potentially burn, these valuable methods in rare and perhaps desperate instances. I expect the Russian and Chinese governments' bans on the use of iPhones will not be lifted and will expand to other governments. Similarly to how the sanctions have backfired, this tactic will also backfire by reducing trust in Apple, which is the core of their value proposition.


This looks like a typical modern security hole. There’s a giant stack of layers of unnecessary complexity, and all of them are garbage. The composition is also garbage.

All the NSA needs to launch attacks like this is to get a bunch of mediocre engineers to layer complexity atop complexity. They don’t need Apple to know about the attack.

Honestly, they probably didn’t actually have to do anything to get Apple (or any other large company) to pwn itself, by hiring and promoting engineers and project managers for adding features, but not for improving product stability or software correctness, or for deleting forgotten legacy cruft.

Anyway, the most effective approach to sabotage is to be indistinguishable from incompetence, so it’s hard to say if the people responsible for the vulnerability chain were working with the NSA or not.



You make a good point that a team of mediocre engineers could be responsible for the vulnerabilities. Those doing code review and change control would also need to be mediocre. It could be a combination of compromised and mediocre engineers, coordinated by a manager who is in service of the apparatus. To keep it quiet, knowledge of the operation had better not go all the way up the ranks.


> leads Kaspersky and myself to the only rational conclusion: that Apple cooperated with the NSA on this exploit.

Kaspersky reaches no such conclusion. That's from an FSB release.



It is true that Kaspersky, by policy, does not make attribution without concrete proof. It is the responsibility of intelligence agencies to make that call based on a preponderance of evidence. The video linked above narrows suspicion to very few options. The attacker left a list of Apple IDs in the code in one place to check against. Kaspersky provided them to Apple, and Apple did not respond with any details about the users of those Apple IDs. One of the main vulnerabilities has been available for over ten years.


What is more true is that the article posted explicitly says the exact opposite of what you suggested upthread - a fact you should acknowledge.


[flagged]



This is a complete lie.


Kaspersky denies it, but has never offered any proof.

All other Russian cybersecurity firms work directly with the Kremlin; it's hard to think they got an exception.



Yeah, sure. Just a Russian IT company owned by a Russian billionaire who stayed in Russia after 2022 and kept his business in Russia after 2022. Sorry, but everyone who is big enough and is not working for the Kremlin has left the country already.

It's not even necessary to mention his KGB ties.



> leads Kaspersky [..] to the [..] rational conclusion: that Apple cooperated with the NSA on this exploit

Doesn't the article state precisely the opposite? That while the FSB accuses Apple of cooperation, Kaspersky does not have any reason to believe so, especially since the tooling does not look like that of any known state actor.



Kaspersky can't prove anything so they opted to present the facts. They didn't state any opinion about who they believe is behind the incident.


Kaspersky only said they could not prove it. They did not draw a conclusion but laid out the evidence.


How did sanctions backfire?


Germany's economy shrank last year while Russia's grew. Dedollarization has accelerated, which will impact the US not immediately but in the near future.


You are talking about an unsustainable war economy that is overheating. Soaring inflation, brain drain, and a falling ruble are just the short-term phenomena.

--> https://www.reuters.com/breakingviews/russian-war-economy-is...

If you truly believed what you say, you would convert all your savings from dollars to rubles. No serious economist would think that doing so would be a masterstroke, though.



the dollar as the reserve currency already has a serious impact on the US (ie. the big upside is that it allows the US to borrow for very cheap, but the nasty downside is keeping the purchasing power of the USD artificially high, which is not great for the non-finance sectors of the US, not great for people who work in those sectors, and double-plus-not-great for US exports [which are not the dollar itself]), basically it's the "natural resource curse" again

that said, dedollarization is unlikely even in the mid-term https://www.noahpinion.blog/p/threats-to-the-dollar-are-just...



A weak dollar is good if you own a company that relies on exports. For the rest of us who are paid in dollars and need to buy imports, a weaker dollar hurts.

That is one opinion. We can already see China and Japan selling off their US bonds and the BRICS countries are working on solutions to get off the dollar with high priority.



Have you read the article? :)

Maybe this one will be more interesting https://www.noahpinion.blog/p/brics-is-fake



They were long reads, but thank you. They generally cover history and speculation on BRICS, but we will need to see how it works out. I have seen their meetings and open statements about intent to diversify away from the dollar for trade as a high priority. The articles don't really explain what happens if/when they do figure out international payment systems that avoid dollars. Think of this: You have a trillion dollars you printed floating around the planet. It didn't cost you much to print them, but you did get goods and services for them. If that trillion is halved to $500B, what happens?


> A weak dollar is good if you own a company that relies on exports.

It depends on your exports. If your exports have cheaper alternatives, then a weak dollar is good.

If your exports are high utility and have no cheaper alternative, then a strong dollar is better.



To be fair, other European countries are doing better. It's a problem specific to Germany and its mishandling of the energy shock.


That’s only “rational” for Kaspersky because in their world they can't function without having actual intelligence operatives on staff. I seriously doubt the NSA needed help here.


My adjacent conspiracy theory is that the NSA and other state agencies do both original research and pay hackers for exploits that Apple hasn’t yet discovered.


The Darknet Diaries episode "Zero Day Brokers" goes into this. Apparently Argentina hosts a lot of outsourced exploit development. Here's the transcript: https://darknetdiaries.com/transcript/98/


They have the budget to do both easily.

Like how the NRO used to design and launch satellites that cost more than aircraft carriers but are now working closely with private companies like Maxar to find more economical solutions.

https://www.maxar.com/press-releases/nro-awards-maxar-a-10-y...



Thing is the fundamental laws of physics give us a good idea as to the capabilities of the NRO given a certain launch platform. Like how when scientists in the late 70s were figuring out the best telescope they could launch they ended up with almost the exact specs of the Keyhole spy satellites, a spare of which became Hubble.


but why pay hackers to try to find a backdoor when you can just walk in the front door and use the carrot and stick to get what you want?


Here's my serious answer that still works if you hate Apple.

Your question assumes two things: (1) That Apple intentionally leaves vulnerabilities in the stack, and (2) that Tim Apple is occasionally willing to share this candy with governments.

Having worked at Apple, I don't believe (1) can be true. Not only is it extremely unlikely that it could be kept a secret, but Apple's thing is "obsessive control", a mindset borne of organizational PTSD which originated with its near-death experience in the mid-to-late 90s. The Apple I know would not risk intentionally leaving back doors unlocked for enemies to find and leverage.

As for (2), the existence of a "Binder of Vulns" by nation-states would expose Apple to existential risk. It's possible that it could be kept secret within Apple's walls if it were never used, but once shared with a government it could not be contained. The splash damage of such a discovery could easily kill Apple.



I am assuming, or rather take as known, that the national security apparatus can both coerce and incentivize companies and individuals to give it what it wants. Its power to do both is great and relatively unchecked. Coercion tactics include releasing compromising information on a company, person, or family member, and more directly injuring a person or company. Incentives include favorable regulation, taxation, and deals with other companies they control.

Knowledge of a binder of vulnerabilities is perhaps one of the greatest secrets that must be protected. WikiLeaks releasing the Vault 7 leak was the death knell of Julian Assange; it proved in great detail that such a binder exists.

I don't hate Apple, but assuming they can't be reached seems naïve.



We already know Apple has participated in the PRISM program; it's not speculation anymore.


This happened at a company I worked at so it’s not out of the question. I figured it out by reverse engineering and quit on the spot. They tried to tell me I’d never work again if spying on users was a dealbreaker. They showed me a natsec slide deck that identified other collaborating companies as a way of making their point. Among them was Apple.


You are telling me that natsec people give every rando the full list of participants in the conspiracy? That just doesn't make sense for any (semi)competent security agency to disclose.


I would say all the info was already known, for example Apple’s participation in PRISM.

