(comments)

Original link: https://news.ycombinator.com/item?id=40525130

A large share of consumers rarely update their router firmware, which leads many to believe updates are harmful rather than beneficial. One proposed solution is a hardware timer that controls the router's write-enable line after the user deliberately starts a firmware update: the user is prompted to press a button, the router reboots to clear any potential threats, and the write-enable gate stays open for a set period after the reboot so the update can complete. However, this addition would increase the bill of materials (BOM), and manufacturers are reluctant to implement such measures for cost reasons.

For all internet-connected devices, a built-in automatic background update mechanism is essential for good security. In practice, wormable router vulnerabilities pose a far greater risk than routers being bricked: compromised routers are used by cybercriminals as residential proxies, for launching DDoS attacks, and as part of fast-flux networks and other illicit activity. Contrary to popular belief, only a small fraction of router malware attempts persistence; most botnets, including the well-known Mirai botnet, simply infect a device once it is online and reinfect it again shortly after a reboot. Regular firmware updates can mitigate these risks.

Although the ISP has not confirmed these alleged incidents, their scale and frequency point to a potentially worrying trend. While specific details of the breach remain unknown, possibilities include the vendor distributing a compromised firmware update or a coordinated attack targeting many vulnerable devices at once. Affected parties need transparency about the status of the investigation and the security incident, and information about the affected devices' firmware images and other details of these events would help in better understanding the situation.

Related articles

Original article


> These reports led us to believe the problem was likely a firmware issue, as most other issues could be resolved through a factory reset.

My dream is to intercept the write-enable lines on the flash chips holding these firmwares so I can lock out updates. And schedule a daily reboot for any memory-resident-only crap.

That’s what we used to do on, ahem, satellite receivers, 20 years ago and maybe we all need to treat every device attached to the internet as having a similar susceptibility to “electronic counter-measures”.

Or at least monitor them for updates and light up an indicator when an update happens; if it were my own equipment, I'd know whether it should have gone off or not.



It's a no-win situation. Sure, disabling firmware updates would have prevented this attack, but it would also prevent security fixes that keep the routers from being turned into a botnet.

But what I don't get in this case is why it was not possible to reset the device to its original state. It seems like a misdesign if it's possible to destroy all of the firmware, including the backup.



You could put a base level firmware on ROM, with a hardware trigger, and all that does on boot is listen for and receive a signed firmware to write to the system. It needs a way to be triggered through hardware examining traffic, and that also needs to require that the seen command be signed. That recovery boot system needs to be as simple and minimal as possible so you can have good assurance that there aren't problems with it, and it should be written in the safest language you can get away with. Guard that signing key with your life, and lock it away for a rainy day, only to be used if much of your fleet of devices is hosed entirely. It should not be the same as a firmware signing key, which needs to be pulled out and used sometimes.

I think that could work, to a degree. There's always the risk that your recovery mechanism itself is exploited, so you need to make it as small and hardened a target as possible and reduce its complexity to the bare minimum. That doesn't solve the problem, which might be inherently unsolvable, but it may reduce the likelihood to levels where it's not a problem until long past the lifecycle of the devices.
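
To make the idea above concrete, here is a minimal sketch of such a recovery receiver, assuming Ed25519 signatures; the key handling, port number, socket loop and the flash-writing helper are illustrative placeholders, not any vendor's actual API.

```python
import socket

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real design the private key lives offline and only the public half sits in ROM;
# a throwaway pair is generated here just so the sketch is self-contained.
_demo_private_key = Ed25519PrivateKey.generate()
RECOVERY_PUBKEY = _demo_private_key.public_key()
SIG_LEN = 64  # Ed25519 signatures are 64 bytes

def write_image_to_flash(image: bytes) -> None:
    """Device-specific flash programming; stubbed out here."""
    raise NotImplementedError

def recovery_loop(port: int = 3333) -> None:
    """Accept connections until one delivers a correctly signed image, then write it."""
    srv = socket.create_server(("0.0.0.0", port))
    while True:
        conn, _ = srv.accept()
        with conn:
            blob = b""
            while chunk := conn.recv(4096):
                blob += chunk
        sig, image = blob[:SIG_LEN], blob[SIG_LEN:]
        try:
            RECOVERY_PUBKEY.verify(sig, image)   # raises InvalidSignature on mismatch
        except InvalidSignature:
            continue  # silently ignore anything not signed with the recovery key
        write_image_to_flash(image)
        return
```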



> You could put a base level firmware on ROM, with a hardware trigger, and all that does on boot is listen for and receive a signed firmware to write to the system.

Almost all devices have something like that already in the form of a bootloader or SOC bootstrapping mode. But the idea breaks down if you want to do it OTA. The full storage/kernel/network/UI stack required to make that happen isn't ever going to run under "ROM" in the sense of truly immutable storage.

The best you get is a read-only backup partition (shipped in some form on pretty much all laptops today), but that's no less exploitable really.



> The full storage/kernel/network/UI stack required to make that happen isn't ever going to run under "ROM" in the sense of truly immutable storage.

Why not? I'm essentially describing a specialized OOB system, and it would just use a carved-out small chunk of system RAM or ship with a minimal amount of RAM of its own. If you mean actually impossible to change because it's physical ROM ("truly immutable"), that's less important to the design than ensuring there's no mechanism that allows that storage area to be written from the system itself, whether that's the very locked down and minimal recovery kernel it houses not allowing it, or a jumper.



Sure, but now your device needs two eMMC chips or whatever to store all that extra junk, and it's been priced out of the market. FWIW: an alternative to designs like this is just to ship your customers an extra router to keep in a box if the first stops working: it's exactly the same principle, and fails for the same reasons.



There is a simple solution to this: make the flash removable. The firmware is stored on an SD card or M.2 device; if it becomes corrupt, you take it out and flash it with clean firmware using any PC.

You don't even need the rest of the device to contain any signing mechanism with keys that could be compromised, because using this method requires physical access, and any compromise that occurs from physical access can be detected or undone with same by checksumming or re-flashing the storage device again from a clean PC.

And you can also do signed firmware updates OTA without worrying that the device can be bricked by a vulnerability or signing key compromise, because it can always be restored via physical access.
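
As a sketch of the "check it on a PC" step described above: compare the removable media against a known-good image and rewrite it on mismatch. The device and image paths here are made up, and a real workflow would use the vendor's published image and checksums.

```python
import hashlib
import os
import shutil

def sha256_prefix(path: str, length: int) -> str:
    """Hash the first `length` bytes of a file or block device."""
    h = hashlib.sha256()
    remaining = length
    with open(path, "rb") as f:
        while remaining > 0:
            chunk = f.read(min(1 << 20, remaining))
            if not chunk:
                break
            h.update(chunk)
            remaining -= len(chunk)
    return h.hexdigest()

def verify_or_reflash(card_dev: str, clean_image: str) -> bool:
    """Return True if the removable flash already matches the known-good image;
    otherwise overwrite it with the clean image and return False."""
    size = os.path.getsize(clean_image)
    if sha256_prefix(card_dev, size) == sha256_prefix(clean_image, size):
        return True
    shutil.copyfile(clean_image, card_dev)   # reflash from the trusted PC copy
    return False

# e.g. verify_or_reflash("/dev/sdX", "router-firmware-clean.img")  # hypothetical paths
```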



> and it's been priced out of the market.

Whether that's likely is entirely based on the cost of the device. Some things are simple and cheap, and extra hardware cuts deeply into the profit. Others are not, and for those this sort of thing is also important because they are remote and you don't want to send a person out on site. When the device is expensive enough, or sending someone to the site is expensive enough, "just ship a replacement" is not really a viable solution, unless you're installing it in a high-availability capacity where you can fail over to it without physical intervention.

Obviously it's not a solution for every circumstance. Nothing really is. I don't think it's useful for us to assume that a solution has to be, as that doesn't really help us in the many instances when it's good enough.



I am not most people, but I keep a backup modem of a different brand which is properly configured.

Granted, I use it once a year because lightning toasts many of my appliances and I have to wait for the replacement from the ISP.

At least my ISP modems can disable OTA updates. A happy oversight on their part.



Which is still running out of mutable storage. The point isn't whether you can verify the boot, it's whether you can prevent a compromised device (compromised to the point of being able to write to its own storage) from bricking itself.

Now, as it happens Apple (everyone really, but Apple is a leader for sure) has some great protections in place to prevent that. And that's great. But if you feel you can rely on those protections there's no need to demand the ROM recovery demanded upthread.



... right, which as mentioned requires physical access and external hardware and doesn't meet the requirements above either. And it's not particularly notable either: every flashable SOC in every market has something like this, at different levels of sophistication. Again it's just not a solvable problem at the level of integration imagined.



the bootloader installs the firmware. if you corrupt the bootloader, it can't install anything anymore. you'd need to physically access the chip to use an external flashing device. Some devices have non-writable bootloaders. They have an internal fuse that blows after the first write, so the chip's bootloader is locked. That means you can always flash a new firmware, but you can't fix any bugs in the bootloader.



Or a JTAG interface that the chip has in silicon and recovery is always possible from bare-metal. Dunno if that’s technically in the MCU’s bootloader or if the boot loader comes after.

Still requires a truck roll but at least you don’t need a hot air workstation.



> Or a JTAG interface that the chip has in silicon and recovery is always possible from bare-metal. Dunno if that’s technically in the MCU’s bootloader or if the boot loader comes after.

If the vendor's actually trying to lock down the platform they'll usually burn the JTAG fuses as well. It's hit or miss though, I've definitely come across heavily locked down devices that still have JTAG/SWD enabled.

Edit: To your question, JTAG is usually physical silicon, not part of the bootloader.



> the bootloader installs the firmware. if you corrupt the bootloader, it can't install anything anymore.

That seems like awful design? Can't you have an alternate immutable bootloader that can only be enabled with a physical switch? Or via some alternate port or something? That way they can update the live one while still having a fallback/downgrade path in case it has issues.



That's a good idea; I wish they had such a "safety-switch".

However, I assume that any malware doesn't want to be detected, so I would have a hard time knowing whether I should flip the switch or not in a typical scenario.



That was likely the point whoever did it was trying to make: that these were extremely bad devices.

1) The ISP exposed some form of external management interface, which they used to access the devices, that they shouldn't have.
2) The attacker overcame whatever security was used on said management interface.
3) Once in, the attacker could simply overwrite the first few sectors of the NAND to make the devices unbootable without a local hardware serial console.
4) There was no failsafe recovery mechanism, it would seem.

An actual "modem" would most likely prove volatile/immutable by nature, but anything with a "router" built into it is far more vulnerable, as these typically run poorly secured tiny Linux systems and are subject to Chinese enshittification.



It's an interesting challenge because the device is nominally "under ISP control" but any device located in a customer's home is under the physical control of the customer. The mistrust between the ISP and the customer leads to "trusted" devices where the firmware, including the backup, can be overwritten by the ISP, but then cannot recover if it gets corrupted. And believe me, the corrupt firmware scenario happens a lot due to incompetence.

This is getting attention because it wasn't incompetence this time.

But how does blank, unprovisioned equipment discover a path to its provisioning server? Especially in light of the new "trusted" push, this is an arms race in a market segment such as routers where there isn't any money for high end solutions - only the cheapest option is even considered.

tl;dr: a social and economic problem, likely can't be fixed with a purely technical solution



This was years ago, but I remember getting cable service activated somewhere in Florida with Bright House. I handed the cable guy some ancient Motorola cable modem I had found at a discount store. The guy took one look at it and said, "Look dude, if you hacked this thing to get around bandwidth caps, it is your problem if you get caught." Apparently that particular modem was pretty easy to modify.



Technical solution: customer treats ISP's modem/router as untrusted, and daisy chains their own router after it. Neither malware nor ISP's shenanigans can access the inner network.



That’s what I do. Also makes changing providers straightforward (though last time I needed to set up some custom VLAN stuff on my router but didn’t have to fumble with any wifi config).



Generally the way this works is you have two partitions in your flash chip. One contains the current firmware and the second is a place to drop new firmware. Then the bootloader twiddles a bit somewhere and boots to one partition or the other. There's really nothing stopping you from wiping the previous partition once you're done.
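
As a rough sketch of that A/B flow, with hypothetical partition paths and a stubbed boot-flag setter (the running slot is never written, only the inactive one):

```python
import hashlib

# Hypothetical flash partition device nodes for the two firmware slots.
SLOTS = {"A": "/dev/mtd1", "B": "/dev/mtd2"}

def set_boot_slot(slot: str) -> None:
    """The 'bit the bootloader twiddles'; on real hardware this writes a
    bootloader environment variable or flag sector. Stubbed here."""
    raise NotImplementedError

def install_update(image: bytes, expected_sha256: str, active_slot: str) -> str:
    """Write the new image to the inactive slot, verify it, then switch slots."""
    inactive = "B" if active_slot == "A" else "A"
    if hashlib.sha256(image).hexdigest() != expected_sha256:
        raise ValueError("image failed integrity check; keeping current slot")
    with open(SLOTS[inactive], "wb") as f:   # the running slot is never touched
        f.write(image)
    set_boot_slot(inactive)                  # next boot uses the freshly written slot
    return inactive
```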

I think some routers still have a single flash partition and the update process here is a lot more hairy and will obviously not retain the previous version after an update.

Apart from attacks like this, there's absolutely no reason to have a protected read-only copy of the factory firmware. 99.9999% of the time, all you would ever need to do to recover from a bad flash is to just fail back to the previous image.

A proper read only factory image would require an extra ROM chip to store it, as well as extra bootloader complexity required to load from ROM or copy to flash on failure. It's just barely expensive enough at scale to not be worth it for an extremely rare event.



> It seems like a misdesign if it's possible to destroy all of the firmware, including the backup.

Humor me; how would that work? If anything, I'd expect it to be easier to overwrite the inactive slot (assuming an A/B setup, ideally with read-only root). If you really wanted, you could have a separate chip that was read-only enforced by hardware, and I've seen that done for really low level firmware (ex. Chromebook boot firmware) but it's usually really limited precisely because the inability to update it means you get stuck with any bugs so it's usually only used to boot to the real (rw) storage.



> Sure, disabling firmware updates would have prevented this attack, but it would also prevent security fixes that keep the routers from being turned into a botnet.

But put a switch on the router: flip the switch and the router reboots to a known safe OS that downloads, verifies, and installs the firmware. Then it waits for you to flip the switch back before it will behave as a router again.

Unless attackers manage to steal the signing keys, and also intercept and redirect traffic to their own webserver to send a fake firmware, this seems secure to me. The only downside I'm seeing is that it would be impossible to put in a custom firmware. Maybe add a USB-key firmware option?



Eventually in the satellite world, card emulators took over and only the receiver was a vector of attack, but then the receivers started getting simulated too.

The nice thing about emulators is that you could intercept calls that you wanted and send your own response while still taking any and all updates. Hard to break when you have more control than they do.



I suppose from the point of view of someone with a black-market HU card, DirecTV was an example of an Advanced Persistent Threat. Never thought of it that way before.



Funny thing about directv is that because they allowed for many manufacturers to build receivers, directv had little control over the receiver firmware, so these counter-counter measures weren’t necessary at the receiver level.

Other providers that rolled out their own receivers had high control over the receiver firmware and once users figured out how to protect their cards, the receivers became an effective attack vector for the lazy.

But that’s where a lot of the public knowledge about JTAGs really started coming to light. Awfully nice of them to put in a cutout at the bottom of the receiver.



I'm not too familiar with customer DSL solutions but for cable modems, that firmware and configuration is managed by the CMTS because technology and configuration changes on the head end may require customer-side changes to ensure continued operation. The config is a pretty dynamic thing as frequency plans, signal rate, etc change over time as the cable plant and head end equipment is upgraded and maintained.

I'd expect that any attempt to lock write enable to the EEPROM would eventually result in your modem failing to provision.



When your provider cuts you off, that’s when you know that your provider has a legit upgrade you need to take. Take the update and then lock stuff up again.

Of course, I don’t think you’re supposed to make mods to your vendor provided equipment…

In the satellite world, this would happen too: old firmware would be cut off. That’s when you go legit for a while with your sub’d card, take the update, and watch your sub’d channels until the new update could be reverse engineered. And probably have some heroes learn the hard way of taking the update and having some negative impacts that are harder to reverse.



I'm not sure what such an approach would accomplish. If the goal is to prevent the kind of problem seen in the OP (which, let's be real - is a rare occurrence) in order to avoid an unplanned outage, you've instead created a situation where it'll fail to connect far more regularly as you're kicked off the network for not correctly handling the provisioning process. You're trading a rare unplanned outage for a common unplanned outage.



Depends how often the provider pushes out updates (and the purpose/necessity of them).

And it’s only that “rare unplanned outage” when a malicious update bricks your device. Much worse is a malicious update that doesn’t result in an outage. Probably still rare but that impact though.

Edit: would also add that there’s probably a big firmware chip that changes infrequently, and frequently changing config stored on a separate and smaller chip (like a 24c or 93 series eeprom that holds a few kilobytes). That way you don’t risk bricking your modem by unplugging it at the wrong time.



Secure boot schemes can already "fix" this. If a boot image is programmed that isn't signed, the system boots to a write-protected backup image. The system can also, to some degree, block the programming of images that aren't signed, but presumably the malware has gained root access.
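
A minimal sketch of that boot-time decision, assuming Ed25519 signatures and leaving key storage and the actual boot hand-off out of scope (the throwaway key pair just keeps the example self-contained):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# A real device would have the vendor's public key fused or stored in boot ROM.
_vendor_key = Ed25519PrivateKey.generate()
VENDOR_PUBKEY = _vendor_key.public_key()

def choose_boot_image(primary: bytes, primary_sig: bytes, backup: bytes) -> bytes:
    """Return the image the bootloader should run."""
    try:
        VENDOR_PUBKEY.verify(primary_sig, primary)
        return primary            # signature checks out: boot the normal firmware
    except InvalidSignature:
        return backup             # tampered or unsigned: fall back to the RO backup

# Demo: a correctly signed image boots normally, garbage falls back to the backup.
good = b"firmware v2"
assert choose_boot_image(good, _vendor_key.sign(good), b"backup") == good
assert choose_boot_image(b"evil", b"\x00" * 64, b"backup") == b"backup"
```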



ISPs can send any firmware to a docsis cablemodem, without the user knowing or accepting.

Imagine the damage that could be done by a malicious actor via the ISPs computers.

Or imagine someone being able to hack the system that does that update even without the ISP.

600K users would be a toy, they could do it to 6 Million.

Doesn't even have to be clever, just brick millions of cablemodems.

North Korea or some other government level entity could manage the resources to figure that out.



Most non-programmers don't give a shit about their router beyond "the wifi must work". Something completely stateless that can't be broken or messed with actually sounds like something they'd want.



Well, considering that most home users never update their router's firmware, I'm going to go out on a limb and suggest that the majority of applied updates are malicious.

If you did want to go this route, a simple fix would be to have the write enable line gated by a hardware one-shot timer (think like a 555 timer) triggered by a physical button on the front (and the button would also reboot the router).

The firmware update sequence would go like: Router prompts user to update using button -> user presses update button -> router reboots to clear malicious software -> when the router comes back up, the write enable gate remains open for $n minutes (and maybe there's a GPIO that can hold the gate open if it is already open) -> router performs software update.
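
A rough sketch of that sequence, with hypothetical GPIO and flashing shims; the point is only that software writes can succeed while the hardware one-shot holds the write-enable gate open.

```python
import time

WRITE_WINDOW_MINUTES = 5  # the "$n minutes" the gate stays open after reboot

# --- hypothetical hardware shims; real code would read/write actual GPIOs ---
def wait_for_button_press() -> None: ...
def reboot_to_clean_state() -> None: ...
def write_enable_gate_open() -> bool: return True
def apply_signed_update() -> None: print("flashing signed image")

def firmware_update_flow() -> None:
    print("Press the update button to begin the firmware update")
    wait_for_button_press()            # the physical button arms the hardware one-shot
    reboot_to_clean_state()            # reboot first to clear memory-resident malware
    deadline = time.monotonic() + WRITE_WINDOW_MINUTES * 60
    while time.monotonic() < deadline:
        if write_enable_gate_open():   # GPIO input mirroring the timer's gate output
            apply_signed_update()      # flash writes only succeed while the gate is open
            return
        time.sleep(1)
    print("write-enable window closed before the update completed")

firmware_update_flow()
```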

The problem is that there's pretty much no way to do this without adding $1-2 to the BOM, and no manufacturer of (pro|con)sumer routers will do that.

Edit: Or do what stacktrust wrote and just have a toggle switch (update/secure).



> Well, considering that most home users never update their router's firmware, I'm going to go out on a limb and suggest that the majority of applied updates are malicious.

Consumer grade routers often have automatic updates these days.



Yeah I have a pretty nice router and inexplicably, random nights, it drops out at 3am for ~3 minutes and then comes back on (and has all the markings of a reboot in terms of the pattern of request failures). I have to assume Synology just decided that nothing's using the internet at 3am



I had no idea that Synology did routers too, but I would assume this would be a configurable time somewhere in the settings. But yes, that sounds like automatic updates to me.



A physical switch can be locally toggled by the device owner/admin.

Some motherboards offer a physical jumper for firmware updates, including x86 PC Engines APU2 coreboot router.



So here's another problem. Going back to the point made earlier in this thread "users don't care much about their routers." The average user opens their router page exceedingly rarely, which is the benefit of automatic updates. I set up my mom's router, and she works in IT (more in project management these days, so she's fallen into being a mostly non-technical user.) And she still texts me every time she needs to get into it to ask me what the password I set up was... once every few years. The downfall of automatic updates is obviously something like the case of the article this discussion is about. But weigh up the costs and benefits, in my opinion, the scale tips more towards automatic updates when you factor in all the critical level vulnerabilities that router OS's have accrued over the years.



Some consumer routers (e.g. Amazon eero) have moved to cloud/app config and automatic updates.

A read-only partition can bootstrap recovery from cloud, if the main firmware or config is damaged.

Some updates are possible with live kernel patching of the memory-resident OS, which can also be used by malware.



Wormable vulnerability in a router is 10x bigger issue in practice than attacks that brick a router. By "in practice" I mean judging by the real world attacks that I know of. Hacked routers are used as residential proxies for criminals, for DDoS attacks, fast flux networks, credential stuffing attacks, and more. Bricking routers is rare (and very loud), that's why it's news on HN.

Many router malware families don't even try to be persistent. Even Mirai - arguably the most famous router botnet - is not persistent [1]. In case the device is rebooted, it just gets infected again in a few minutes.

It's very important that all network connected devices have an update mechanism, working automatically in the background.

[1] At least the original version. After the code leak people were doing all kinds of updates, so there are some variants that try to be persistent in some cases.



I think that's usually the case: that will be stored in a small 24C or 93C (I2C or SPI) chip that is separate from the firmware flash, which may only handle on the order of 1,000 write cycles and takes a lot longer to reflash (often requiring risky and long downtime).



Half of my technical friends haven't changed theirs from the factory defaults. None of my non-technical friends have changed them. A non-representative sample of SSIDs I receive in my apartment gives 9 default ones, and 2 custom ones (plus mine).



Plenty of routers in my country (I'd go out on a limb and say the majority) have both written on a sticker, with a different password for each device generated when the firmware is first installed, and people NEVER change it.

Edit: they also come with a QR code you can scan on your phone to connect to the wifi without manually copying the password, so people just put that QR code up somewhere and connecting is easy enough.



I have a friend who used to do classified military work - and a lot of the firmware on munitions is designed in precisely this way to avoid countermeasures. Advanced systems have a lot of software, and the systems they are shipped on have fuses and circuit traces that are melted to avoid any possible countermeasures or modification once they've passed acceptance.



When the military finds a flaw in their immutable firmware, what do they do with those munitions? Throw them away and build new ones? Routers are networked and critical vulnerabilities that should be addressed get patched on them over time. Is the proposal to just tell the user to buy a new router?



It's the same with the routers that got bricked. They probably need to be reflashed. Most likely they are leased by the ISP, so they fall under the ISP's scope of responsibility and maintenance.

"internet doesn't work" -> "send me a new router, it broke", problem solved.



Given what little I know of military stuff, swapping out a hardware board is cheap compared to the cost of the whole thing.

Would they be sent back or would an engineer travel to wherever stuff is stored to do the job?



Same type of person who blocks Windows updates, messes with their registry, removes system files and processes, and then complains when Windows gets flaky on them and breaks in strange and unpredictable ways.



Windows has a stateless mode.

https://learn.microsoft.com/en-us/windows/iot/iot-enterprise...

> Unified Write Filter (UWF) is an optional Windows 10 feature that helps to protect your drives by intercepting and redirecting any writes to the drive (app installations, settings changes, saved data) to a virtual overlay. The virtual overlay is a temporary location that is cleared during a reboot or when a guest user logs off.. Increases security and reliability where new apps aren't frequently added. Can be used to reduce wear on solid-state drives and other write-sensitive media. Optimizing Application load timing on boot – it can be faster to resume from a HORM file on every boot rather than reloading the system on each boot. UWF replaces the Windows 7 Enhanced Write Filter (EWF) and the File Based Write Filter (FBWF).



Reminds me of some management software we ran on school computers some 20 odd years ago, it basically made the computer immutable and it'd be back to how it was after a reboot. I forgot what it was called and never knew how it worked though, at the time I was impressed. It needed an admin password to make changes that persisted.



They wouldn't have to do that if Windows didn't insist on being forceful with updates even for power users, who can be trusted to decide when to update. Lately it's not as bad since you can delay/disable forced reboots for updates for ~2 months at a time, but still it'd be far more ideal to just be able to turn off forced reboots entirely.



The issue isn't updates breaking stuff, the issue is forced reboots when I have something open for which I don't just want the OS throwing away the state for. The IT dept. comment is pretty accurate though, just as dismissive, pointlessly snarky and disconnected from doing productive work as the real deal.



The article is light on the interesting details. How did they get in? Do these routers have open ports and services by default and answer to the Internet in a meaningful way?

Couldn't someone grab different firmware versions and compare them?

Looks like they are doing what everyone else is doing and using OpenWrt with a vendor SDK: https://forum.openwrt.org/t/openwrt-support-for-actiontec-t3...

What's interesting here is that it's speculated the vendor sent a malicious/broken update: https://www.reddit.com/r/Windstream/comments/17g9qdu/solid_r...

So why is there no official statement from the ISP? If it was an attack shouldn't there be an investigation?

I'm not familiar with how this is handled in the USA but this looks really strange.

Maybe these machines were bot infested and the vendor pushed an update that broke everything?

Maybe it's like in the article and it was a coordinated attack maybe involving ransom and everyone got told it's a faulty firmware update, keep calm?

Which is also kind of bad; as a customer I'd like to know if there are security incidents.

Has anyone links to firmware images for these devices? Or any more details?



> So why is there no official statement from the ISP? If it was an attack shouldn't there be an investigation?

We should assume a decision to make no statement was based on the outcome of an investigation.

I wonder how much of the replacement cost is insured. I am guessing none. Leaving the ISP at severe risk of, er, business discontinuity. Another good reason for no statement.



> Lumen identified over 330,000 unique IP addresses that communicated with one of 75 observed C2 nodes

How does Black Lotus Labs global telemetry know which IP communicated with which other IP if they have control of neither endpoint? Who/what is keeping traffic logs?

If these guys can do it, remind me again how Tor is secure because nobody could possibly be able to follow packets from your machine, through the onion hops, to the exit node where the same packet is available unencrypted...



I have a friend who works at Black Lotus (and who may have written this blog post, who knows). Black Lotus is part of Lumen which is Level3 and CenturyLink and is one of the biggest (if not the biggest) backbone traffic provider in the world, with a huge percentage of the worlds traffic transiting their network, and thus I think they get direct insight into the traffic including metrics on it.



I'm quite disenchanted here. So this means that it is practically impossible to avoid IP fingerprints in any way? Even with Tor, VMs, etc? You'll always be at the mercy of whoever runs the show unless you own the physical servers.



Of course a backbone provider can directly inspect the source and destination IP addresses of any traffic transiting its network. How could it be otherwise? That's not fingerprinting, it's just pulling fields out of a struct.

Tor does defeat this though. Rather than seeing the true destination of your traffic they see that of a Tor exit node.



But... That tor exit node then sends the traffic onwards... Again via the internet, and the backbone provider can inspect it again.

Seeing a packet heading to a tor exit node and then a similarly sized packet heading onwards a fraction of a millisecond later is a pretty surefire way to spy on individual tor users.



I think Tor tries to resize/split/join packets a bit. And each Tor node will in theory be carrying traffic for many different users simultaneously. And Tor uses 3 nodes, each in a different country. So it's not quite as trivial as you make it sound.

If 1, 2, or possibly all 3 nodes are run by a malicious actor, deanonymization becomes easier. At one point 10% of nodes were run by a single malicious actor: https://therecord.media/a-mysterious-threat-actor-is-running...



Yes, being able to see all the traffic on a given network is a legitimate threat to Tor's anonymity.

IIRC There is an alternate method of connecting to an endpoint which uses a 3rd node as a rendezvous point which is meant to be better, but I forget the name of the process...



This is reasonably standard functionality for backbone routers. They have to parse the TCP headers in hardware anyway, and can track common endpoints with O(1) state.

Of course, on the other end of the spectrum, the NSA has tapped into core internet links, is recording everything it possibly can, and is keeping it forever.



Is that actually feasible with their budget?

If we are generous and assume there's a zettabyte of data a year that they want to store.

At consumer prices, you would have to pay $10B per year just buying hard drives, let alone the operational costs/redundancy.

The budget for all of the US intelligence services is ~$65B. I think if they wanted to actually do what you are describing it would be the single biggest intelligence expense they have and I don't see how you hide that.



It's not. They don't store the raw IP packet data; instead they store the metadata (this was revealed in a leak a long time ago), like the type in this article (data source and destination, timestamps, size of the data, etc.). The metadata is orders of magnitude less data than the raw packets and likely easily compressible, so I wouldn't be surprised if they keep it all for a decent chunk of time.



Presumably this becomes increasingly useless in an eSNI world?

i.e. they can prove I visited a bunch of cloudflare sites, but there are millions of those so who cares?



> They have to parse the TCP headers in hardware anyway

Backbone routers have no need to implement stateful TCP inspection or deal with the transport layer for TCP, dealing with IP is enough.



Presumably, Windstream is logging customer traffic as a matter of course. It might just be metadata (NetFlow/sFlow/IPFIX/etc), but one way or the other the only way they have this information is if they are recording and retaining it.

Hopefully this is made clear in Windstream's contract terms.



These aren't likely 'top flows', since the C&C data will probably only be a few kilobytes.

So to capture this, you at a minimum need to be logging every TCP connection's SRC IP and DST IP.

And they seem pretty confident in their worldwide map and fairly exact counts, so I would guess they must have probes covering most of the world doing this monitoring, and it likely isn't just 1-in-a-million sampling either...
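
To illustrate what even coarse flow metadata enables, here is a minimal sketch, with made-up record shapes and documentation-range IPs, of counting unique source addresses that contacted a set of known C2 addresses:

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """One NetFlow-style record: who talked to whom, when, and how much."""
    src_ip: str
    dst_ip: str
    n_bytes: int
    start_ts: float

# Documentation-range placeholders standing in for the observed C2 nodes.
C2_NODES = {"203.0.113.7", "198.51.100.23"}

def unique_victims(flows: list[FlowRecord]) -> set[str]:
    """Source IPs seen talking to any known C2 node."""
    return {f.src_ip for f in flows if f.dst_ip in C2_NODES}

flows = [
    FlowRecord("192.0.2.10", "203.0.113.7", 2048, 1698192000.0),
    FlowRecord("192.0.2.11", "8.8.8.8", 512, 1698192001.0),
]
print(len(unique_victims(flows)))  # -> 1
```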



> If these guys can do it, remind me again how Tor is secure because nobody could possibly be able to follow packets from your machine, through the onion hops, to the exit node where the same packet is available unencrypted...

You're supposed to be protected by the fact that you're going through multiple nodes before exiting TOR, and traffic should be mixed. Can you find some streams if you have most/all the nodes within your network and can analyze the traffic? Probably some, but the more traffic a node handles the harder it would be.

There is a simpler approach though, which is to just run exit nodes.[1]

1: https://en.wikipedia.org/wiki/Tor_(network)#Exit_node_eavesd...



What do you mean by just run exit nodes? The linked section says that just running exit nodes allows the exit node to steal data sent over plain HTTP. Is that actually a problem? Who's using plain HTTP? The linked section says just running exit nodes doesn't allow deanonymization.



I didn't mean to imply it was exactly analogous, just a lot simpler, and there is a lot to be gleaned from that data. In fact, I would assume the more data you pass over it (e.g. if you proxy all your traffic across TOR) the easier it is to make assumptions about the source, unless it actively splits it across different exit nodes, which seems like it could be problematic in a lot of cases. If you have a DigitalOcean personal server you run some services on and you access it through TOR... well you might have just made the job of anyone trying to deanonymize you much easier.



> unless it actively splits it across different exit nodes

I believe it does. I believe each destination IP you connect to uses a different exit node. And that'll even switch over time. And even if you connect to 2 destination IPs from the same exit node, I don't think there's a way for the exit node to know those 2 connections were from the same user.



Lumen (merger of Level3 and CenturyLink) sells services to a large part of the Internet and may provide a lot of the backhaul for Windstream. In which case they would be in the path for monitoring.



Lumen is a tier 1 network so a lot of traffic passes through them. They can man-in-the-middle the traffic and see the TCP packets going through their network.



"They can man-in-the-middle the traffic" could be interpreted as them having to actively do something to become the man in the middle, when they already are.

It's likely they just do sampling (think netflow) to get some statistics over the data that's already transiting their network.



For a few years now I only buy a small x86 box with dual nics and run OpenWRT. I love it. It's open source, lots of support, good community. It supports wireguard. Latest version allows you to even run docker containers.



The article says that the "modems" affected are the Sagemcom F5380 and ActionTec T3200 which from a quick search looks like full fledged CPEs aka routers with a web interface and NAT, WIFI and all the stuff. They also write about Censys and banners so it looks like they had their web interface exposed to the Internet.

When I hear people say they use OpenWRT I assume they have their modem in bridge mode so that it doesn't even have an IP. OpenWRT would save you in that case.



OpenWrt works pretty fine for some modems. It's not straightforward, as the VDSL firmware often can't be distributed, but people use it on AVM Fritzbox devices. LTE devices are also supported. Not sure about cable modems, probably not. It's probably involved and not straightforward, so for most users, even technical ones, it's no alternative.



It's a huge shame Pascal basically stopped building those boards since AMD and Intel wouldn't play ball. I'd really like to have something like an APU with 10G connectivity and an x86 processor that was not built and designed in China, running open firmware. With PC Engines gone now, I think you're basically out of luck.



My APU2 died few years back and haven't found a decent replacement. Instead I use a Lenovo Thinkstation M720Q off eBay with the PCIe x8 riser and Intel 2 port 10GbE card. You could also fit a 4 port 1Gb card too. The thing idles at 18-19W which is less than the big white rectangular Verizon "trash can" router which idled at 22W and has horrible WiFi (I use a Unifi APLR.)



These are DSL modems, though. At some point there has to be some interface between the WAN side, be it DSL or coax or fiber, and your network. Even DSL adapters for PCIe slots are just systems on a stick, coming with all the features and bugs of a "router" but without the enclosure.



The interface to DSL or coax only has to be a layer 2 bridge. You can put many modems into bridge mode so they don't do any layer 3 (IP) at all. For fiber, if you don't use PON at least, even a standard SFP(+) will do.



@dang, if there are karma points at HN, you could add some for submitters who improve upon the oft-execrable original clickbait headlines/titles. (Here, I see present verb tense being used for an incident from October of last year.)



Well if you backdoor 600k routers and introduce a firmware bug with one of your patches, this is what happens.

Can't they just stage their updates? Surely, malware authors and users must be too cool for adopting standard prod practices.



> Surely, malware authors and users must be too cool for adopting standard prod practices.

Their economic pressures are just different; it's not their own hardware that they're bricking, nor are they likely to be held liable for it.



This attack happened a few days before Halloween 2023 (pumpkins), with a large drop in the number of devices connected to the Internet – like how an eclipse suddenly brings a period of darkness, maybe?

This is just my interpretation, I also found it cryptic.



For anyone else that was confused by the headline, this is about the destruction of 600,000 individual (small) routers. Not routers that are worth $600,000 (each or combined).



It does include information which the original article specifically excluded from mentioning: the ISP involved.

"Windstream" is mentioned in the first paragraph of the Ars article, while the Lumen post makes references to "a rural ISP" throughout the post.



For my home network I've purchased a networking-appliance form-factor computer, which is basically a regular old i3 with VT-x support in a fanless case and four 2.5GbE NICs. I've installed my favorite stable Linux distro that gets regular automated security updates in both the host and a VM, and I've device-mapped three of the NICs into that VM. The remaining NIC stays unattached to anything unless I want to SSH into the host. I'm running UFW and Shorewall in the VM to perform firewall and routing tasks. If I want to tweak anything I just SSH into that VM. I have a snapshot of the VM disk in case I mess something up, so I can trivially roll back to something that I know works.

I've purchased a couple of cheaper commercial WiFi access points, and I've placed them in my house with channels set up to minimize interference.

Prior to this I've gone through several iterations of network products from the likes of Apple, Google, and ASUS, and they all had issues with performance and reliability. For example infuriating random periods of 3-5 seconds of dropped packets in the middle of Zoom conferences and what not.

Since I've rolled my own I've had zero issues, and I have a higher degree of confidence that it's configured securely and is getting relevant security updates. In short, my home network doesn't have a problem unless some significant chunk of the world that's running the same well-known stable Linux distro also has a problem.



It's a HUNSN RJ36. It came preloaded with pfSense, as many of them do, but I immediately made a full disk backup and then wiped and installed with a Linux distro because, well, "This is Linux. I know this." You're going to find a lot of people who strongly prefer one over the other, and you may find you prefer pfSense over a "do-everything-yourself" Linux distro if you give it a shot. There are also Linux distros that are targeted for network appliances, and setting them up (correctly) can be easier if the distro is built for the task.

There are quite a few machines in this category, and what's in stock at any given time tends to rotate relatively quickly. I think the one I bought might still be available, but you will want to check to see if there is something with specs that will work better for your use case.



Reminds me of the CIH virus. It's only a matter of time for ransomware authors to start using firmware blanking as a new technique.



I read the Lotus Labs blog post they linked, and they mentioned no analysis of the actual firmware payload that bricked the devices. Is that out there, or a sample?

I'd be curious to know if it was actually meant to brick or someone f'ed the image and accidentally bricked them trying to be clever.

Also, if it was a nation state, why would you so publicly burn your capability bricking residential routers on an ISP that seems to mostly serve rural areas? If they did it for testing, that'd be real dumb.



"We began an investigation after seeing repeated complaints mentioning specific ActionTec devices, as a massive number of device owners stated that they were not able to access the internet beginning on October 25, 2023. A growing number of users indicated the outage was common to two different gateway models: the ActionTec T3200s and ActionTec T3260s, both displaying a static red light."



“the ActionTec T3200s and ActionTec T3260s, both displaying a static red light.”

“This included a drop of ~480k devices associated with Sagemcom, likely the Sagemcom F5380 as both this model and the ActionTec modems were both modems issued by the ISP.”



This, along with other recent security incidents, suggests somebody is rehearsing for a massive campaign tied to geopolitical ambitions.



If you do an image search of the impacted devices, they all seem to be Windstream branded.

Not sure if that's enough evidence to say it was them... but I don't see another ISP's logo on these things.
