
Original link: https://news.ycombinator.com/item?id=41434500

A simplified summary of the thread below:

- Passport forgery: someone else's passport chip can be cloned digitally, but the physical document must be altered to match, since most passport chips do not embed a photo; many countries also keep no central database of travel records, making a forgery less likely to be detected.

- Phone enclaves: these protect individual apps from unauthorized access to sensitive information, but may still be vulnerable to system-wide attacks.

- Cryptocurrency wallets: a hardware wallet should be guarded against theft as carefully as an ordinary wallet.

- SIM swapping: a government can monitor communications with an intercept warrant instead of swapping SIMs, removing the need for such activity.

- Laptop security: laptops face encryption risks similar to phones, so encryption alone is not enough to protect critical data.

- EMV chip fraud: anyone with the necessary skills and resources is unlikely to waste time cloning credit or debit cards.

- Account verification: two-factor authentication (2FA) devices add a layer of protection when verifying identity online, and sites could require multiple elements (username, current password, etc.) or specific criteria (email confirmation, a video call, a waiting period after failed attempts).

- Physical-access attacks: an intruder with physical access to a Yubikey could swap it for an identical-looking fake, leaving the genuine key useless to its owner and potentially compromising security.

- Additional factors: to strengthen security, sites could send random data with each transaction to verify it, making reuse of a cloned key evident.


Original thread


As per https://arstechnica.com/security/2024/09/yubikeys-are-vulner...

An attacker not only needs your username and password, they also need physical access to your key. They then have to disassemble the device. If they want to give it back to you, they'll need to reassemble it.

So not exactly trivial!

A blob of nail-varnish over a plastic seam might be a useful canary.

But this does highlight one weakness of these FIDO tokens - you have to manually maintain a list of where you've registered them. And if your token is lost or stolen, you have to manually revoke every single one.



It's not just YubiKey.

NinjaLab: "All Infineon security microcontrollers (including TPMs) that run the Infineon cryptographic library (as far as we know, any existing version) are vulnerable to the attack."

- Chips in e-passports from the US, China, India, Brazil and numerous European and Asian nations

- Secure enclaves in Samsung and OnePlus phones

- Cryptocurrency hardware wallets like Ledger and Trezor

- SIM cards

- TPMs in laptops from Lenovo, Dell, and HP

- EMV chips in credit and debit cards



Nice to see a fellow enthusiast here. It's a good point that different hardware carries different levels of risk, but this is really an entire class of attack, and similar techniques may well apply to these other controllers. Don't gloss over it.

On a side note, used to frequent a bar where one of the creators of Ledger also did. Was nice to spend various crypto freely!



This is an entire class of attacks that has been known almost forever. So yeah, some of us already considered this, and we'd like to gloss over it this time.



Ledger literally supports key extraction as a feature and pushes its firmware updates hard. The last Nano S firmware without key extraction still works, while the corresponding X version can no longer be used.



Passports are kind of a big deal. The customs agent will visually verify the photo against the holder, but will probably trust a valid RFID chip 100% of the time, as it's assumed to be unbreakable.



However if we look only at border checkpoints (including airports) in first world nations the number is probably a lot higher.

Not only are agents likely to be using the chip, self-service immigration gates have become really popular at airports around the world and mostly use the RFID chip together with a face scan



On the bright side, this bug seems to require an ECDSA operation, and I would guess that most ePassports are using RSA. Can't seem to find any statistics but the standards support both.



Since it's a non-constant-time implementation of a specific part of the EC operation (modular inversion), my guess would be that they reused that code everywhere, so it's probably also present in ECDH and all other algorithms requiring a modular inversion.
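
For intuition (this is a generic textbook illustration, not Infineon's actual code): a straightforward extended-Euclidean modular inversion performs a data-dependent number of loop iterations, so its running time leaks information about its operand:

```python
def modinv_iterations(a: int, m: int):
    """Textbook extended Euclid: returns (a^-1 mod m, loop iterations).
    The iteration count, and hence the running time, depends on a."""
    old_r, r = a % m, m
    old_s, s = 1, 0
    steps = 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        steps += 1
    return old_s % m, steps

p = 2**255 - 19
inv, fast = modinv_iterations(2, p)
assert (2 * inv) % p == 1
_, slow = modinv_iterations(0x1234567890ABCDEF, p)
assert fast != slow  # data-dependent work: the classic timing side channel
```

Constant-time implementations avoid this by always doing the same amount of work (e.g. inversion via a fixed-length exponentiation with Fermat's little theorem).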



1. Obtain a donor passport, get the chip, dump the Active Authentication key which is used to ensure you can't clone the chip

2. Make a fake passport with your photo (and fingerprints etc.) and add the AA key so that it passes the check

You'll still have to somehow fake Passive Authentication (in which your data, including photo, is signed by the country's public key) too, though.



That's assuming that the validation software even has all issuing countries' root keys available.

Supposedly it's surprisingly (or maybe not, given how international government relations historically work) difficult for countries to exchange their public keys: Since there isn't any central authority, nor a chain of trust available (a la "this key is signed by France and Switzerland, so it's probably the real thing to us, Germany"), it boils down to n^2/2 key exchanges, and n additional ones every time a single key expires or, worse, has to be rotated on short notice. Then all of that has to be distributed to all border authority systems.

Last time I looked into this (10+ years ago), my laptop doing Passive Authentication and Active Authentication using 10 lines of Python and my country's root certificate (it's publicly available) was supposedly more than what most border checks could practically do.
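
Active Authentication really is about that simple to check: the terminal sends a random challenge, the chip signs it, and the terminal verifies the signature against the chip's public key. A toy sketch (textbook RSA with absurdly small, illustrative parameters; real AA keys are 1024+ bits), which also shows why a dumped AA private key makes a clone indistinguishable:

```python
import hashlib
import secrets

# Toy textbook RSA standing in for the chip's AA key (illustration only)
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))  # the private key an attacker would dump

def chip_sign(challenge: bytes, priv: int = d) -> int:
    """What the passport chip does during Active Authentication."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, priv, n)

def terminal_verify(challenge: bytes, sig: int) -> bool:
    """What the border terminal does with the chip's public key."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(sig, e, n) == h

challenge = secrets.token_bytes(8)
assert terminal_verify(challenge, chip_sign(challenge))

# A clone that has dumped d passes exactly the same check:
cloned_d = d
assert terminal_verify(challenge, chip_sign(challenge, cloned_d))
```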



This is at best another forensic tool (unlocking the TPM of a locked laptop/phone for prosecution) and at worst a red herring of a security flaw.

- Clone a passport -> why clone one when you can issue new ones? You risk being detected while using a clone (two entries in two different countries, and you also need to look like the person), not to mention you have to destroy the passport

- Phone enclaves -> see above

- Crypto -> Hardware wallets should be watched as closely as your normal wallet

- SIM Cards -> Swapping is faster, or if you're the gov, an intercept warrant will do the trick

- Laptops -> see above

- EMV Chips -> If you have those skills and that money, I don't think you'll waste time cloning credit/debit cards



> - Phone enclaves -> see above

Well... not really, from the viewpoint of a bank. Look, now the user can extract the key that the bank's TOTP app carefully guards, transfer it to another (rooted) device, or use it without a phone at all, meaning this app is no longer a "something unclonable that you have" authentication factor. From a risk management and compliance perspective, that's a contract breach: the bank is legally obliged to store that secret securely, so that the user is guaranteed grounds to complain if it could have been used by someone else.
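
To make the point concrete: a TOTP secret is just bytes, and once extracted it yields identical codes on any device. A minimal RFC 6238-style generator (SHA-1, 30-second steps):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: nothing device-bound, just HMAC over a time counter."""
    counter = int((time.time() if t is None else t) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    off = mac[-1] & 0x0F  # RFC 4226 dynamic truncation
    code = (struct.unpack(">I", mac[off:off + 4])[0] & 0x7FFFFFFF) % 10**digits
    return f"{code:0{digits}d}"

# RFC 6238 appendix test vector (ASCII secret "12345678901234567890", T=59):
assert totp(b"12345678901234567890", t=59, digits=8) == "94287082"

# The same secret on a "different device" produces the same codes:
stolen = b"secret-extracted-from-the-enclave"
assert totp(stolen, t=1_700_000_000) == totp(stolen, t=1_700_000_000)
```

This is why banks want the secret held inside a secure element rather than in app storage: the enclave is what makes the factor unclonable, and this attack undermines exactly that guarantee.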



> - Clone a passport -> why clone one when you can issue new ones? You risk being detected while using a clone (two entries in two different countries, and you also need to look like the person), not to mention you have to destroy the passport

Well... not really. ICAO-compliant passports do not require a photo embedded in the chip. As long as you can forge the physical part of the passport (or obtain blanks), you just need the digital certificates from a "donor" passport of John Doe: print "John Doe" and his personal data (birth date/place, nationality, issuance/expiry dates) in the human-readable and MRZ fields, but, crucially, use the photo of the person using the forgery.

Also, there are no centralized, cross-country stores of entries and departures. Lots of places don't even register them for visa-free border crossings.

Some national ID documents, e.g. the Croatian national ID card "osobna iskaznica", do store a photo embedded in the chip, so that indeed restricts a forgery from being used by a non-lookalike.



Infineon chips are used in some smart tachographs in the EU. This is likely to get very messy.

Extract those keys and your drivers can ignore all annoying work-time rules and you can just patch the files if you are audited.



If I remember correctly, Infineon already had a big TPM recall a while back. I remember my T470p had to first install a BIOS update to enable userspace updating of the TPM, then the TPM update itself. And I think some Yubikeys were replaced for free due to the same or similar issue.



> - Cryptocurrency hardware wallets like Ledger and Trezor

Ledger hardware wallets (which btw can serve as U2F authentication but, AFAIK, not FIDO2) are protected by a PIN: three wrong PINs and the device, unlike a Yubikey, factory resets itself.

IIUC the side-channel attack relies on a non-constant-time modinv.

I don't know if there's a way to force that modinv to happen on a Ledger without knowing the PIN. I take it we'll have a writeup by Ledger on that attack very soon.



I just tried it on a 5C NFC (firmware 5.4.3) and got:
    ERROR: Always Require UV is not supported on this YubiKey.
So I'm really not sure this is an option for non-Bio keys, unless it was introduced quite recently.


I agree this is not trivial, but Yubikeys are (at least sometimes) used in contexts with very high stakes.

This includes access to "crypto" assets, but also more serious situations such as defense contractors.

These are scenarios where you have well-resourced, highly motivated attackers, and this is precisely what the yubikey is supposed to defend against.

So, the fob still provides phishing-resistant authentication, but some of the security expectations have been subverted.



> But this does highlight one weakness of these FIDO tokens - you have to manually maintain a list of where you've registered them.

FWIW, I use KeePassXC as my password manager and tag each account that uses my hardware keys, so if one is lost, stolen or broken, I can quickly get a list of sites from which to remove that key. I always register two keys and keep them physically separate, so I can still log in if I lose one.



A small advantage of this attack is that the attacker doesn't need their own manufacturing and can attack keys which are already in use.

A supply-chain attack of "here's a pre-backdoored key that I'm pretending is perfectly secure, go use it" has no need for this exploit if you have manufacturing capability.

If you don't, then intercepting new yubikeys in transit, extracting the key, and sending them along the way would also be doable with the exploit described.



> But this does highlight one weakness of these FIDO tokens - you have to manually maintain a list of where you've registered them. And if your token is lost or stolen, you have to manually revoke every single one.

I agree. I've been keeping track of FIDO tokens and where they work in my password manager and it's great.

I honestly want to extend this idea not just to FIDO tokens, but for anything that would ever need to be revoked and replaced. So stuff like FIDO tokens, 2FA secrets, Passkeys (both already handled by my password manager), payment methods, GPG keys and such.



> An attacker not only needs your username and password

Usernames and passwords are leaked all the time. Many users even re-use these across multiple services.

> they also need physical access to your key.

With enough practice, a motivated actor could make it seamless enough that you don’t notice. Or they could stalk the target to find a weak point in their schedule that gives the attacker enough time to perform EUCLEAK. We are creatures of habit, after all.

> And if your token is lost or stolen, you have to manually revoke every single one.

I agree here. No way to easily track. I have to make a manual note for each service in password manager.



> Usernames and passwords are leaked all the time. Many users even re-use these across multiple services.

I’d wager that people using hardware tokens like the YubiKey are more likely to use password managers.



> you have to manually maintain a list of where you've registered them

For an individual, the answer there might be to limit the usage of the device to your "core" services, and rely on good password hygiene for everything else.



> For an individual, the answer there might be to limit the usage of the device to your "core" services, and rely on good password hygiene for everything else.

What's the advantage here over password + yubikey? Isn't password + yubikey always going to be at least as secure as password, even if the yubikey gets lost/stolen/compromised?



> rely on good password hygiene for everything else.

Usually this means using a password manager, and these days many of them also support WebAuthN in a way not tied to a specific device (or platform/ecosystem).



It's still two factors regardless of storage. Say you accidentally paste your password into the wrong field and post it on a forum. Whoever gets that still needs the second factor.

Sure, if your password vault gets breached then everything is exposed but that's extremely unlikely and you have a lot of work to do in that event regardless. It's an inherent risk to using a password manager: everything is centralized so it's a valuable target.



It is not extremely unlikely; all it takes is for you to unlock your password database on a device with some malware.

The point of a second separate factor is to reduce that risk.



If there's malware on my device, isn't it game over already? Even if I have a second factor elsewhere, the malware can access session keys to whatever service I logged into from that device, among other things.



Yes, but I don’t need/want 2FA everywhere, and it’s still a strictly better single factor than a password (since WebAuthN is resilient against server-side database leaks and phishing).



That's less related to passkeys, and more to what the site (i.e. the Relying Party in WebAuthN parlance) requires in terms of authentication security.

User Verification (including PINs) is possible for non-discoverable credentials and vice versa (e.g. Bitwarden's implementation doesn't seem to support user verification at authentication time, but supports discoverable credentials).

In any case, note that this particular attack seems to degrade the security of "Yubikey + PIN" from two factors (possession of the key, knowledge of the PIN) to one (possession of the key), as the PIN entry can be bypassed due to how user verification works in WebAuthN/CTAP2.



You can’t do that without the same disassembly issue. Yubikeys don’t let you import FIDO2 keys at all, and I believe there’s a flag on X.509 keys that were imported rather than generated on the device.



The “looks physically the same” part is the problem I was referring to: this attack needs a lab and some time. If they have to add reprogramming a key and weathering it to look like yours to the task it’s certainly not impossible but it moves even further into “are you at war with a nation-state?” territory where the attacks are both unlikely to be worth the cost or easier than alternatives.

I’m not saying it’s not possible but 99% of normal people are really only at risk of online attacks or physical attacks which won’t be sophisticated stealth operations but more along the lines of “you’re sending me Bitcoin or I’m shooting your dog”.



Cloning a functional piece of hardware that behaves like the original Yubikey doesn’t imply the clone looks and feels like the original. It sounds to me like making a clay copy of a key. I don’t think the attacker gets to replicate the manufacturing process to make an item the same shape and size as the original.



Someone was shipping something and was using sparkle nail polish on the screw holes. They took pics and claimed the pattern was hard to duplicate.

I have a nagging memory that someone demonstrated defeating it though.



Another angle: I always worry more about being locked out by 2FA (a main use case of non-discoverable FIDO keys) as a consequence of lost or retired-then-forgotten keys.

This happened once when US customs detained my electronics, including a Yubikey. Later I managed to recover many accounts that accept email for 2FA or as the ultimate auth factor (i.e., password/2FA reset). But a few, including AWS, don't allow that.

Many websites encourage you to enroll in 2FA without clarifying the alternative factors and the consequences of losing access.



> without clarifying the alternative factors

This is super annoying. I wish sites would standardize on a simple infographic saying:

You will be able to access your account with:

* Username, Password and 2FA device.

or

* 5 points from the following list: username (1), current password (2), old password (0.5), SMS verification (1), mother's maiden name or other secret word (0.5), email verification (2), 2FA device (2), video call with a support agent showing government ID matching the account name (1), waited 7 days without a successful login despite notification of all contact addresses (2)

For added security, you can adjust the account to require 6, 7 or 8 points from the above list, but please be aware doing so might hinder your ability to access the account if you lose your password or 2FA device.
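
Such a policy would also be trivial to mechanize; a sketch using the hypothetical factor weights from the list above:

```python
# Hypothetical factor weights, copied from the list above
FACTORS = {
    "username": 1.0, "current_password": 2.0, "old_password": 0.5,
    "sms": 1.0, "secret_word": 0.5, "email": 2.0, "2fa_device": 2.0,
    "video_call_with_id": 1.0, "waited_7_days": 2.0,
}

def can_recover(presented, threshold=5.0):
    """Can a user presenting this set of factors access the account?"""
    return sum(FACTORS.get(f, 0.0) for f in presented) >= threshold

assert can_recover({"username", "current_password", "2fa_device"})  # 5.0 points
assert not can_recover({"username", "old_password", "sms"})         # 2.5 points
# Opting into a stricter threshold trades recoverability for security:
assert not can_recover({"username", "current_password", "2fa_device"}, threshold=6.0)
```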



Unfortunately identity systems have adopted security through obscurity instead.

They use IP address, location, time since last login, number of attempts, device etc to determine how easy to make it for you to log in.

I lost a Gmail account because I no longer had an old phone number even though my password was correct.



> the US custom detained my electronics, including a Yubikey

Can you elaborate on what happened?

I know that it's theoretically possible, but I thought that:

1- They would potentially detain only devices that they can extract information out of (even if encrypted), like a laptop or some hard disk... Yubikey (at least the old U2F-only ones) don't have anything that you can extract

2- They would eventually return the device to you (unless guilty of some national security violation?)

Am I mistaken on both counts?



> 2- They would eventually return the device to you (unless guilty of some national security violation?)

"Eventually" is not good enough. People take things with them on their trips, because they expect to use these things while they are traveling/doing work at a remote location. Imagine you need to do some work on a remote site and you can't log in to your company's network, because the TSA has taken away your key so that they can inspect it.



For 1, tbh I don't know the rules. Technically it was a regular model that can store some data. Or maybe they simply suspected it was a USB disk.

And 2 was true. But it was after weeks, and of course I didn't wait until then to reset my account credentials.



If they just seize it, it will typically be returned at some point. If they decide it's subject to forfeiture, it is now their property. You can contest this with the forfeiture department but I guess if they decide the item is guilty of a crime or other excuses to keep it there's nothing you can do.



That's right. It will be United States v. Some Person's Yubikey. And you can hire it a lawyer if you want, because they will NOT give it a public defender. Massive violation of Constitutional rights if you ask me.



> Yubikey (at least the old U2F-only ones) don't have anything that you can extract

Newer Yubikeys hold secrets that, if exfiltrated, give you access to accounts.

I assume that if it doesn't already exist, there will be a Cellebrite-like device that governments can plug Yubikeys into to dump keys quickly like they're able to with cell phones.



The entire point of Yubikeys is that such a device should be impossible, and vice versa, if such a device were to exist, the Yubikey is nothing but an expensive USB flash drive.



This would probably be a good place to suggest to others here to track which accounts you've logged into via Google or other social media OAuth.

I just had to log into Stack Overflow for the first time in years, and did not remember what I had previously used to log in. Once I figured it out, that information went into KeePass too.



I can't get into my Google account that's almost 20 years old because I only have the username, password, recovery email and have all the email forwarded to me, but I no longer have the phone number and they silently enabled 2FA SMS at some point.



I wonder if you can phone the phone number, explain the situation and offer to venmo/paypal the new assignee money for the 2FA code

You could try every 3 - 5 years or so as it gets reassigned again



Yes, this is exactly why I won’t use these federated identity features of platforms like this. I have a reasonable amount of trust that they are mostly secure, but I have zero trust that they will be helpful if I ever have account troubles. What I don’t need is to have Google (etc) auth problems cascade down to every other account I own.



Everything about security / auth sucks in this age.

Each time I go down this path with either work or personal stuff it's just people changing passwords all the time / having to re-login all the time ... there's no happy path without a huge hassle.



I mean, that's the security story around it. You solve this by buying multiple Yubikeys. Google and others support multiple keys, which gives you the backup story (I have 4 keys in various places).



There's no fundamental reason it needs to be this difficult.

Yubico, or really any other manufacturer, could totally release e.g. "Yubikey pairs": two stateless CTAP2 implementations sharing the same root secret, which would accordingly work at the same sites without having to synchronize anything.

The reason they probably don't is that their entire trust model heavily depends on the perception that nobody, including the manufacturer, has a "reserve key" to any given Yubikey, even though of course the absence of such "linked keys" doesn't demonstrate the absence of any such backdoor.

To be clear, I don't have any reason to believe Yubico does this, but it would probably be a harder story to tell, marketing-wise, if they sometimes selectively did, even if it's for a valid use case.



I mean, you could also design keys to be synchronisable by the user: generate a key pair inside key (2), transfer the public key from (2) to (1), encrypt the root key inside (1) with the public key, and transfer it over to (2).

(Or just allow the user to generate the root key outside of the device and insert it)

I honestly think the interest from customers is just too low. I would bet the majority of Yubico's customers are enterprises where this is not really an issue for most use cases. If you lose your key used for Entra ID / SSO, you just go to IT support and get a new one enrolled to your account. Much cheaper than having synchronised hot spares (at 30-50 USD a pop) for thousands of employees.
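
The pairing flow sketched above can be illustrated with any key-agreement scheme; here is a toy finite-field Diffie-Hellman version (the group parameters are for illustration only, not a vetted choice, and there is none of the mutual authentication a real device pairing would need, which is exactly the gap the replies point out):

```python
import hashlib
import secrets

# Toy finite-field Diffie-Hellman group (illustration only)
P, G = 2**255 - 19, 5

# Key (2) generates a key pair and exports only the public half
sk2 = secrets.randbelow(P - 2) + 1
pk2 = pow(G, sk2, P)

# Key (1) wraps its root secret under a fresh share for key (2)
root_secret = secrets.token_bytes(32)
eph = secrets.randbelow(P - 2) + 1
eph_pub = pow(G, eph, P)
pad = hashlib.sha256(pow(pk2, eph, P).to_bytes(32, "big")).digest()
wrapped = bytes(a ^ b for a, b in zip(root_secret, pad))  # only this crosses devices

# Key (2) unwraps with its private half; the plaintext root key never leaves (1)
pad2 = hashlib.sha256(pow(eph_pub, sk2, P).to_bytes(32, "big")).digest()
assert bytes(a ^ b for a, b in zip(wrapped, pad2)) == root_secret
```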



But what stops Mallory from simply using this sync method to sync your private key to her Yubikey? I mean, look at the kerfuffle that's been kicked up by this vulnerability, and a key-sharing scheme like the above is much easier to exploit.

(The second idea seems better assuming the user nukes the private key once the import is done. Otherwise the weakest link in the security chain will continue to be the opsec you practice with the private key file, in which case why spend the money on the Yubikey?)



Yeah of course the operation needs to be (cryptographically) authenticated somehow, I edited my comment in haste while going to work and accidentally messed it up completely. Thanks for pointing it out!

The idea I thought of is to essentially use the public key of (2) to seed the generation of the root secret on (1). Meaning the sync-pairing-setup is destructive to the root secret, and can only be done at startup (or if you are willing to reset the device).

(A mallory could of course reset it still, unless you have some e-fuse or something, but anyways that's only marginally worse than simply physically destroying it.)



If you have a hardware key set up for anything you want sustainably operated in the future, you register at least two keys, and when one fails or is lost, you pull the emergency backup out of a safe and register new one(s).



True, but what's even more convenient than that is to just not use hardware authenticators for anything but the most important accounts/sites, and e.g. use syncing credentials (as provided by many password managers, Google, and Apple).

The fraction of people willing to regularly schedule enroll-o-ramas at each of their accounts and each of their backup key locations is probably smaller than a percent of all potential WebAuthN users.



You register multiple keys on a handful of critically important websites.

Password manager. Primary e-mail account. DNS provider.

Other than that, it's rarely supported and rarely worth the hassle when it is.



It is really annoying that more sites don't support multiple security keys, though. As far as I can tell, it's not encouraged by the FIDO Alliance and I can't think of a good technical reason for it.



I forget which financial institution I was using at the time, but they explicitly only supported one key. That is, you add a new one and the old one is expunged.

Banks are so slow with this sort of thing, and still require SMS as a fallback option.



Maybe I'm out of date, then! I don't enroll new keys very often. Paypal is a great example of a service that I would like to support multiple keys, though, so it's disappointing that they still only support one.



How often do you check that those other/backup keys are still secure? This attack becomes easier if the attacker knows the secondary keys' location, and because of disuse they don't even need to be replaced.



I mean, not all at once (I only have 3): there's the one I set up when I bought a new laptop in 2019, which became the old one when I got a new laptop in 2021 and set up a second. And the third is a backup key I made at some point, stored offsite in case I get robbed or my place burns down in a fire.

It's inconvenient, sure, but it's more convenient than my bank accounts that are accessible online being cleaned out.



Any idea why this was changed? The big advantage of non-resident keys is that they do not take up any space on the FIDO token, so you can have an unlimited number of them.



> I don't remember where I've used my YubiKey in the past.

I've yet to encounter a site that allows enrolling a FIDO device without setting up some other form of 2FA; for me that's TOTP codes, which are kept in an app.



If you're using a yubikey solely for its PGP key stuff and you have a backup of the key or have a key hierarchy then replacing a yubikey is pretty trivial.

(I know because it's my specific usecase.)



Yubikeys are intended to block phishing. This attack requires physical access.

I.e.: if you're "worth it" to target IRL, you shouldn't use a Yubikey to begin with. Someone can swap your spare and you won't realize it until it's too late.



The last time Infineon chips had a crypto-breaking bug, Estonians got new ID cards for free. Meanwhile my less than two months old Yubikey 4 stopped working as a hardware attested PIV smartcard.



> Meanwhile my less than two months old Yubikey 4 stopped working as a hardware attested PIV smartcard.

As in that the hardware broke? Or software stopped supporting it?



> primary goal is to fight the scourge of phishing attacks. The EUCLEAK attack requires physical access to the device

Something to consider: If someone is going to go through the effort to get physical access to a Yubikey, they only need to swap it with one that has a similar level of wear and a similar appearance. At that point, the victim will merely believe that their Yubikey is broken; and/or the attacker will have enough time to use the Yubikey.

For example, I have two Yubikeys. Someone could sneak into my house, swap my spare, and I wouldn't figure it out until I go to use my spare.

Basically: This attack is only "worth it" if your target is so valuable that you can target them in person. At that point, I'd think the target would use something a little more secure than a Yubikey.



> At that point, the victim will merely believe that their Yubikey is broken; and/or the attacker will have enough time to use the Yubikey. For example, I have two Yubikeys. Someone could sneak into my house, swap my spare, and I wouldn't figure it out until I go to use my spare.

You can inspect a YubiKey's identity with `ykman list`, so you can easily check whether a YubiKey is broken or has actually been swapped. If you have high security requirements you can do this periodically and/or keep the spare in a physically secured location.
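
For a periodic check, the comparison itself is trivial: record your keys' serial numbers (e.g. from `ykman list --serials`, which prints one serial per line) and diff against what you currently observe. A sketch of the audit logic (the serials are made up):

```python
def audit_serials(recorded, observed):
    """Compare an inventory of YubiKey serials (noted at purchase) against
    what `ykman list --serials` currently reports."""
    alerts = []
    for s in sorted(set(recorded) - set(observed)):
        alerts.append(f"missing or swapped: {s}")
    for s in sorted(set(observed) - set(recorded)):
        alerts.append(f"unknown key attached: {s}")
    return alerts

assert audit_serials({"1234567"}, {"1234567"}) == []
assert audit_serials({"1234567"}, {"9999999"}) == [
    "missing or swapped: 1234567",
    "unknown key attached: 9999999",
]
```

Note the limitation discussed below: this catches a swap for a key with a different (or no) serial, but an attacker who fries your key and leaves it in place defeats a serial check.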

> use something a little more secure than a Yubikey

For a hardware authenticator what would that be?



> You can inspect a yubikeys identity with

Who's going to do that? Most of the time, when I use my Yubikey, I'm using it in a text field in a website.

But, to quote https://news.ycombinator.com/item?id=41439400

> Seriously, it's trivial to fry a key and swap it with the working spare if you have access to it

So all an attacker needs to do is swap my Yubikey with a fried one. Maybe someone will figure it out if they're tracking the numbers written on the outside.



> Who's going to do that?

The point is that if you require more security, there are tools to check. For me, I'm comfortable enough knowing that an attack requires physical access to my keys, so I don't.

> Maybe someone will figure it out if they're tracking the numbers written on the outside.

So if your opsec requires it, keep track of which keys you have and their identities. If one is fried, remove it from all the services you authenticate with.

I'm not saying it's perfect, but you can create practices/procedures that protect against (or at least alert you to) most realistic attacks.



> Basically: This attack is only "worth it" if your target is so valuable that you can target them in person. At that point, I'd think the target would use something a little more secure than a Yubikey

Absolutely.

In practice, the Yubikey is almost never going to be the weakest link in the chain. They could target your devices, intercept your communications, or serve warrants on/covertly exploit the services that host your data.



> due to a non constant-time modular inversion.

That isn't exactly some subtle side channel involving tiny emissions of radio waves... The time depending on the secret data is pretty much the first thing that any side channel audit ought to check.

It's super simple too - simply verify that all operations always take the same number of clock cycles, no matter the data. Errors too - they should always be thrown after a fixed number of clock cycles, independent of the data.

How did auditors miss this?



> That isn't exactly some subtle side channel involving tiny emissions of radio waves...

No, it seems to be exactly that. What's non-constant-time is not the execution of the algorithm (as that would probably be exploitable even via USB, worst case), but rather the duty cycle of the externally observable RF side channel, if I understand the paper correctly.

Infineon's implementation doesn't seem to be vulnerable to a pure timing attack, as otherwise that RF side channel wouldn't be needed.

They also do implement nonce blinding, but unfortunately with a multiplicative mask significantly smaller than the size of the elliptic curve, so it's brute-forceable.
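
To see why a small multiplicative mask doesn't help much: if the side channel reveals the blinded nonce k·r mod n and r is only, say, 16 bits wide, every mask value can simply be tried. A sketch over the P-256 group order (the 16-bit mask width is hypothetical, and the `matches` callback stands in for checking a candidate against the observed signature):

```python
import secrets

# Order of the P-256 group, over which ECDSA nonces live
n = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551
MASK_BITS = 16  # hypothetical small blinding mask, far below the curve size

k = secrets.randbelow(n - 1) + 1             # secret nonce
r = secrets.randbelow(2**MASK_BITS - 1) + 1  # multiplicative mask
leaked = (k * r) % n                         # what the side channel exposes

def unblind(leaked, n, bits, matches):
    """Brute-force every possible mask; `matches` stands in for
    checking a candidate nonce against the observed signature."""
    for cand in range(1, 2**bits):
        guess = (leaked * pow(cand, -1, n)) % n
        if matches(guess):
            return guess
    return None

assert unblind(leaked, n, MASK_BITS, lambda g: g == k) == k
```

A mask as wide as the group order would make this search infeasible, which is presumably why the paper calls out the undersized mask specifically.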



> What's non-constant-time is not the execution of the algorithm (as that would probably be exploitable even via USB, worst case), but rather the duty cycle of the externally observable RF side channel, if I understand the paper correctly.

Are you sure? Section 4.3 (pg 52) starts with "The leaked sensitive information relates to the execution time of Algorithms 1 and 4."



Maybe my terminology is off here, but a "pure time leak" to me would be something like a given operation varying in return time (i.e. execution time/latency being a function of some secret data), whereas this seems more like all operations take constant time, but in that constant time, RF emissions vary in their timing (e.g. on/off, strong/weak etc.) as a function of some secret data.



Fantastic research by NinjaLab. One of the most interesting parts to me from Yubico's advisory is that the WebAuthn protocol's attestation [1] is also defeated by this local cloning. Could the protocol have been better designed to resist this local cloning attack?

> An attacker could exploit this issue to create a fraudulent YubiKey using the recovered attestation key. This would produce a valid FIDO attestation statement during the make credential resulting in a bypass of an organization’s authenticator model preference controls for affected YubiKey versions.

1. https://www.w3.org/TR/webauthn-2/#attestation



> Could the protocol have been better designed to resist this local cloning attack?

I don't see how; the attacker is cloning the secrets used to sign the request, and if they have those secrets there's no way of distinguishing the clone from the original device. The whole security model of secure elements is preventing the keys from being extracted; if you can do that, there's no more security than saving the key to a file on your computer.

Of course, to get the key they need to physically open the device, so unless someone actually takes your key it's more secure than saving them on your computer.



The login service could send not just the request, but also N random bits for the next session.

These would be stored by the device and combined with the next session's request data before signing. The login site does its own combining before checking the signature.

This way any clone usage would be obvious. If the attacker uses the clone before you, your key wouldn't work for that site anymore. The site could even warn you if it keeps track of previous values.

Likewise it limits the timeframe the attacker can use the clone.

I guess even just 16 bits of data should make it quite resistant to guessing by the attacker.

This requires some non-volatile storage to keep the "future bits", but at 16 bits you can do quite a few logins before having to do a single page erase.

Then again, not my field so perhaps there's something I'm missing.
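The rolling-nonce idea above can be sketched quickly. This is a hypothetical server-side model (class and field names are mine, not from any real protocol): the server keeps a per-credential "future nonce", binds it into each challenge, and rotates it after every successful login, so a signature produced by a clone stops matching as soon as the legitimate key is used. The sketch uses 16-byte nonces for the demonstration, though the post argues even 16 bits would make guessing hard:

```python
import hashlib
import secrets

class RollingNonceServer:
    """Hypothetical sketch of the rolling-nonce scheme described above."""

    def __init__(self):
        # Random bits issued for the *next* session; the device would
        # store a copy of these after each successful login.
        self.next_nonce = secrets.token_bytes(16)

    def make_challenge(self, request: bytes) -> bytes:
        # Bind the stored nonce into the data that gets signed.
        return hashlib.sha256(request + self.next_nonce).digest()

    def rotate(self) -> bytes:
        # After a successful login, issue fresh bits for next time.
        self.next_nonce = secrets.token_bytes(16)
        return self.next_nonce

server = RollingNonceServer()
c1 = server.make_challenge(b"login-request")
server.rotate()
c2 = server.make_challenge(b"login-request")
assert c1 != c2  # a clone replaying the old nonce no longer matches
```

If the attacker uses the clone first, the legitimate user's next login fails loudly, which is exactly the detection signal the scheme is after.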



Perhaps some kind of rolling key system could've been used? If the key was rewritten on each successful login, either the attacker would have to use their cloned key immediately (alerting the user), or have their cloned key become useless the moment the user logs in again. This would only work with discoverable credentials, and would increase wear on the device's flash storage.



Even with extraction possible, it's much harder to get the keys from this device than a file on a computer.

Security isn't just the binary of "possible" vs "impossible". It's about *how expensive* that possibility is.



Not really, unfortunately, given that attestation practically always depends on some piece of secure hardware being able to guard an issuer-certified secret with which it can authenticate itself to a relying party and thereby bootstrap trust into any derived/stored secrets.

If that attestation secret is extractable in any way, nothing prevents an attacker from creating a fake authenticator able to create fraudulent attestations, despite not behaving like an authentic one (i.e. in that it allows extracting stored credentials).

You could theoretically try to mitigate the impact of a single leaked attestation secret by using something like e.g. indirect attestation and authenticator-unique attestation keys (rather than attestation keys shared by hundreds of thousands of authenticators, which is what Yubikey does), but it would be a probabilistic mitigation at best.



I see the Yubico website says 5.7 or greater not affected.

Elsewhere on the Yubico website[1] they state that a feature of 5.7 release was ...

    Migration to Yubico’s own cryptographic library that performs the underlying cryptographic operations (decryption, signing, etc.) for RSA and ECC
Hopefully they've had lots of eyes looking at that! Not sure why anybody feels the need to write their own crypto libraries these days; there are so many implementations out there, both open and closed source.

[1] https://www.yubico.com/blog/now-available-for-purchase-yubik...



> Not sure why anybody feels the need to write their own crypto libraries these days

Because this is the second time they've had a security issue (the last time was even worse) because of their vendor? When your entire company is based around doing cryptography, it actually makes sense to hire enough applied cryptographers to own your own destiny.



An embedded platform that existing libraries haven't been ported to, and that may be impractical to port to because it's so different. There may be other reasons, and for closed source there may be economic considerations, especially since this is a migration from a previous closed-source (probably source-available) vendor library to an in-house solution.



Including the very, very limited environment of secure elements, and the capability of interfacing with the sometimes very specialized cryptographic accelerators/coprocessors required for adequate performance?

We're talking low double-digit kilobytes of persistent storage, and sometimes single-digit kilobytes of memory here.

Also, including a full TLS library seems like complete overkill if you only need some cryptographic primitives. These things are usually certified in expensive code and hardware audits; you essentially have to justify (in terms of adding complexity, and with it the possibility of vulnerabilities) and on top of that pay for every single line of code.



And had so many stupid bugs that it looked like it was written by summer interns. Everybody I know who actually worries about actual security ran far, far away from it.

If, however, you just need https for web pages, it's good enough to get started.



Does anybody know if plain ECDH (as used by e.g. ICAO biometric passports implementing PACE) or Ed25519 (assuming Infineon's chips/libraries implement that) are impacted?

The paper specifically calls out PACE as also using modular inversion, but I don't understand either that or Ed25519 enough to get a sense for how bad this is for algorithms beyond just ECDSA.



> The new YubiKey firmware 5.7 update (May 6th, 2024) switches the YubiKeys from Infineon cryptographic library to Yubico new cryptographic library. To our knowledge, this new cryptographic library is not impacted by our work.

Found vuln in library used by many for 14 years. Solution: switch to custom code. That's a bold strategy. I hope it pays off.

> Infineon has already a patch for their cryptographic library [---]



Additionally, it looks like this vulnerability exists on all Infineon trusted X products (platform modules, microcontrollers, etc.) that use the same internal crypto libraries.



> 3. Infineon has already a patch for their cryptographic library, to our knowledge it did not yet pass a Common Criteria certification evaluation.

For what it's worth: this doesn't matter even a little bit.



As distinct from the attack itself? This is an interesting exercise and worth publishing, but in practice I don't see much real world consequence even for a notionally vulnerable device.



They really should. The recovery of the one secret the device is supposed to keep is catastrophic. Sure, the recovery itself might be an edge case, but Yubikey users buy the product to protect themselves from edge cases.



I have used SoloKeys since v1. Currently own two v2 SoloKeys, and they "just work" for anything involving FIDO2. I specifically use them for storing SSH private keys and WebAuthn credentials. The key can be used on any device with a USB-C port (there is also a variant supporting NFC, but I don't have that variant)

Despite being a bit careless with my keys (e.g. leaving them in a pocket and washing said clothing), they still work just fine. I highly recommend SoloKeys to anyone who wants to support open source hardware and firmware.



> I wish Yubico had some serious competition, but sadly they don't. NitroHSM is not the same thing

What's not the same thing as what? There's no NitroHSM (Nitrokey has 2 different HSM-related products that are different kinds of things from each other, and neither is called that), and most Yubikeys aren't their special HSM devices.



> You don’t need to be some big corpo to be considered ‘serious’.

That's not what I meant and I suspect you know that. :)

I meant everything from the Yubico hardware (more compact and less bulky than anything else out there) to the Yubico software (extensive featureset with more controllability than most other products out there).

Also as I said already, Yubico is one of the few (only ?) one that does not permit firmware flashing. Most competitor keys have firmware flashing capability, which to me is a big no-no as its an attack surface just waiting for an exploit.

That's what I meant by 'serious'.



I suppose I'm not a regular consumer, but I buy devices like these under the expectation that they will eventually succumb to practical low-cost attacks.

I would be feeling a bit miffed if I bought one recently, though.



As I understand it, the ROCA vulnerability is "the secrets generated by a YubiKey may be susceptible to classic cryptographic breaks", something along the level of "the cipher is inherently weak."

This vulnerability, meanwhile, appears to be in the class of "if someone has physical access to your hardware token, and has access to some specialized (expensive) hardware to do side-channel analysis, they might be able to do side-channel on your hardware token." But if someone has physical access to the hardware token... I mean, at that point, most people would consider it compromised anyways and wouldn't expect security guarantees from that point.



> and you can't flash the firmware.

Not being able to flash the firmware is a feature, not a bug. :)

It's the fundamental reason I won't buy NitroHSM: the Rumsfeld unknown-unknowns about use of the firmware flash feature on NitroHSMs as a future exploit route.



> Not being able to flash the firmware is a feature, not a bug. :)

It is a feature only if they ship replacement devices in case of issues like this. If they don't and you're left with a broken device then I'd rather count it as a "bug".



Can you explain to me why you need to be online to extract the private key? Can't you just steal the token, input the nonces offline, and measure the timing? Then crunch out the private key, and only then, if needed, phish the password?



Yubikeys and similar FIDO hardware authenticators roughly speaking have two modes of operation:

Resident/discoverable credentials are stored on the hardware itself. You can attack these completely offline.

Non-discoverable credentials are not stored on the hardware. To get the authenticator to perform a private key operation (which is a prerequisite for being able to exfiltrate the private key using this attack), you need to supply the credential ID to it, which contains the data required for the authenticator to re-derive the private key.

Usually (i.e. in the WebAuthn-as-a-second-factor use case), a website will only reveal candidate credential IDs upon successful entry of your password.
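The non-discoverable case can be sketched with a key-derivation toy model. This is a hedged illustration, not YubiKey's actual key-wrapping scheme (which differs in detail): the authenticator keeps a single master secret and re-derives each per-site private key from the credential ID the relying party stores and hands back, which is why the attacker needs the credential ID before the key operation can even be triggered:

```python
import hashlib
import hmac
import secrets

# Illustrative only: the real scheme differs, but the shape is the same.
MASTER_SECRET = secrets.token_bytes(32)  # never leaves the device

def derive_private_key(credential_id: bytes, rp_id: str) -> bytes:
    # Deterministically re-derive the per-credential key material from
    # the master secret, the credential ID, and the relying party ID.
    return hmac.new(MASTER_SECRET,
                    credential_id + rp_id.encode(),
                    hashlib.sha256).digest()

cred_id = secrets.token_bytes(16)  # stored by the RP, not by the key
k1 = derive_private_key(cred_id, "example.com")
k2 = derive_private_key(cred_id, "example.com")
assert k1 == k2                                        # same ID, same key
assert k1 != derive_private_key(cred_id, "other.org")  # bound to the RP
```

Nothing credential-specific persists on the device, so an offline attacker holding only the token has nothing to exfiltrate until a credential ID is supplied.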



This requires the attacker to steal your key. When that happens, by the time they can get the secret key I've already revoked it.

The biggest problem with FIDO keys isn't the fact that people can gain access to your accounts if it's physically stolen.

I have redundant keys for backup access. But I have no idea which accounts I used the lost key for, in order to log into them one by one to revoke the key.

How does everyone here keep track? A post-it note in the cookie jar?



>This requires the attacker to steal your key. When that happens, by the time they can get the secret key I've already revoked it.

You're held in custody, detained, arrested, etc while your keys are dumped and accounts are accessed. You don't have the opportunity to revoke it without risking prison time.

This situation can happen if you simply choose to fly or visit another country.



That’s a different situation outside of most people’s reasonable threat model. The police don’t need to clone your Yubikey if they can use it as much as they want, and if they decide to go NYPD on you nothing else you do is going to end in a different outcome unless your MFA check is an in-person confirmation in a location outside of their control.



If you have a Yubi Key and you are worried, read this part from their report:

> the adversary needs to open the device (...) Then the device must be re-packaged and quietly returned to the legitimate user (...) assuming one can find a reliable way to open and close the device, we did not work on this part

Yubi Keys are supposed to be tamper-evident and you can also put tamper-evident stickers on them. I am more concerned that a determined attacker may eventually find a way to record the signals without having to remove the casing.



In the FIDO2 case, only the derived keys are extracted. The master key that derives non-resident keys isn't extracted. So I think it's not possible to really copy the key.

In the cases of FIDO2 resident keys (passkey) / PIV / GPG, maybe it's possible to extract and copy the exact keys. But I guess it can be detected through attestations.

And I just looked at ykman command. It doesn't seem to allow you to import a passkey to a Yubikey.



I’ve created two TOTP 2FA setups on two different YubiKeys. During the TOTP configuration process, the website gives me a password that I enter in the YubiKey configuration app.

Then, I do not store that password. But if I stored it on Bitwarden, I could easily create a YubiKey backup or set it on another app like ente.

I have not kept that password because I considered that it could easily compromise my security. I have kept the backup codes nonetheless.

Should I keep the TOTP configuration password that the website gives me when I tell it that I cannot scan the QR code?
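For context on why that string is sensitive: it is typically the base32-encoded TOTP seed from RFC 6238, and anyone holding it can regenerate every future code; storing it is effectively storing a second authenticator. A minimal sketch (the seed below is the RFC test-vector key, not a real account):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, at: int, step: int = 30, digits: int = 6) -> str:
    # Standard TOTP (RFC 6238) over HOTP (RFC 4226) with SHA-1.
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", at // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 6238 test-vector seed ("12345678901234567890" in base32):
seed = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
assert totp(seed, 59) == "287082"  # matches the RFC 4226/6238 vectors

# Two devices enrolled with the same seed always agree:
now = int(time.time())
assert totp(seed, now) == totp(seed, now)
```

So keeping the setup string in a password manager is a backup, but it also collapses the hardware token back into a copyable secret.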



> But if I stored it on Bitwarden, I could easily create a YubiKey backup or set it on another app like ente.

If you do that, you could just use Bitwarden’s TOTP functionality directly.

I don’t do that myself for important accounts as it effectively collapses 2FA into a single factor, but it’s an individual security/convenience tradeoff in the end.



I’d rather keep passwords, TOTP and backup keys in different services because I never saw the point of keeping them together with the passwords as you explain.

With the YubiKey, TOTP has become more convenient and more secure than it used to be with Authy and then Ente. But maybe I should consider integrating them for most accounts



Don't have high hopes for this but I just requested a replacement device through their support system as the offered mitigations are not something I would like to consider.



Infineon is a listed company, not sure how much news will be published on this before German and Austrian markets open in a couple of hours so could be a profitable short selling opportunity.



From a practical perspective, what protection does the use of a fido2 PIN provide here? Is the EM side channel exposed without knowledge of the PIN?

In any case this is a tremendous attack, good job!



Information is light about an actual Proof of Concept here.

I have no actual knowledge, and it makes sense to assume the PIN is required to implement the EM side channel attack, as without a valid PIN the old, vulnerable Infineon library most likely does not complete all the steps.



Requiring/not requiring the PIN is a per-authentication flag that the RP can set though, as far as I know.

Since the RP challenge is not authenticated in any way, nothing seems to prevent an attacker from just preparing a "user verification not required" challenge and getting the Yubikey to sign it.



Oh, potentially important corollary: This means that this vulnerability allows breaking an “always UV” credential as well:

- Do as many UP-only challenges as required on a stolen Yubikey to extract the private key, not involving the RP (or maybe a single incomplete one, to discover the credentialID)

- Use the recovered private key in an UV challenge against the RP



There also seem to be simpler fixed-function implants (e.g. NTAG or similar NDEF-storage-only tags), but I don’t think there’s anything non-Java capable of doing something as complex as CTAP2.

For something that potentially needs surgery to remove from your body, I’d go for the most capable secure element you can afford for maximum flexibility and future proofing anyway; usually that’s also Java Card.



really never got the whole concept of what is basically a tpm that advertises "money here"

software otp on a phone, or something that can actually be updated, seems like a better choice



> This vulnerability – that went unnoticed for 14 years and about 80 highest-level Common Criteria certification evaluations

This is yet another example of why you don’t f around with crypto and auth in general. Just use best practices and you might be OK.



Don't ask question about cryptography friend! Just use excellent library that implement best practice elliptic curve parameters chosen (of course) *completely* randomly from hat and stop thinking about it!



It's really easy to blind a variable time modular inverse, assuming you have a random number generator handy: you want the inverse of x, compute instead the inverse of x*r, then multiply the result with r.

So I wonder if some things using the same chip/library are not actually vulnerable because they blinded the operation?

It's *better* to use a constant time algorithm, but that's harder to do in a curve-generic way and has a pretty significant performance impact (particularly before the safegcd paper).
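The blinding trick described above is short enough to sketch. Assumptions: a prime modulus, and a stand-in `leaky_inverse` for whatever variable-time algorithm the library uses (Python's `pow` here is just for correctness, not a timing model). Because the variable-time step only ever sees x*r for a fresh random r, its timing depends on the mask rather than on the secret x:

```python
import secrets

P = 2**255 - 19  # any prime modulus works for the demonstration

def leaky_inverse(a: int, p: int) -> int:
    # Stand-in for a variable-time modular inverse.
    return pow(a, -1, p)

def blinded_inverse(x: int, p: int) -> int:
    r = secrets.randbelow(p - 1) + 1       # random nonzero mask
    masked = (x * r) % p                   # the leaky step sees only x*r
    return (leaky_inverse(masked, p) * r) % p  # inv(x*r)*r == inv(x)

x = 1234567
assert (blinded_inverse(x, P) * x) % P == 1
```

The cost is one extra multiplication and a random draw per inversion, which is why it's surprising when an implementation skips it.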



> It's better to use a constant time algorithm, but that's harder to do in a curve-generic way and has a pretty significant performance impact (particularly before the safegcd paper).

Crypto noob here, but isn't modular inverse the same as modular exponentiation through Fermat's little theorem? I.e., x^-1 mod n is the same as computing x^{n - 2} mod n which we know how to do in a constant-time way with a Montgomery ladder. Or is that too slow?



Yeah, same. Not sure why - nothing is loading. Just a broken site theme, I guess. Switching to reader mode in your respective browser should instantly show the post content (or at least it did for me in Firefox).
