Quite rightly, most attention has been on the operational success of the backdoor and how it achieves its goals. But I would love some deeper analysis of the mistakes and the over-engineered parts. In the Bryan Cantrill interview [1], Andres suggests a lot of dumb things were done because it was a kind of off-the-shelf backdooring piece that didn't necessarily know how it would be deployed, e.g. the symbol-table lookup that led him to investigate. Similarly, why were they chopping off 48 bytes using RC4? [2]

So I'd love folks to talk about how (given more time / a better team) this could have been done better, or where else they messed up.

[1] https://youtu.be/jg5F9UupL6I?si=gvXsYFXgagkGOMd4

[2] https://twitter.com/matthew_d_green/status/17744729080201014...

It needs money and people: government funding for the defense of the economy and of computing in the "free world". The defense department certainly has a billion to spare, and so does the EU.

What I am really curious about is what caused the initial delay in ssh, which is what triggered the research into xz in the first place. Has someone figured that out?

I have looked at the sshd code at https://github.com/openssh/openssh-portable and I cannot find it forking and re-execing _itself_. It forks and execs other commands, of course, and it forks to handle new connections, but it does not re-exec in those paths that I can see.

If some inetd-like program were listening on port 22 and fork+exec'ing sshd to handle each incoming connection, that would explain it. But on my systemd-based Linux system I see a long-running sshd that appears to be taking care of port 22.

I do agree that the best explanation of the delay seems to be that sshd was somehow being exec'ed per connection, but I haven't seen all the dots connected yet.

This is behavior introduced in OpenSSH 3.9 (grep for "re-exec" in the release notes: https://www.openssh.com/txt/release-3.9). You should also be able to see the error message "sshd re-exec requires execution with an absolute path" in the source. If you follow the `rexec_flag` tested above that message, you can see where it calls execv later in the code.

Has anyone done an analysis of the bug in the backdoor that caused the Valgrind errors and the SSH slowdown which eventually exposed it?

> Nobody ever even audits the binary contents of flatpaks on flathub (were they actually built from the source? the author attests so!)

IME/IIRC, there aren't (or shouldn't be) any binary contents on Flathub that are submitted by the author, at least for projects with source available. You're supposed to submit a short, plain-text recipe instead, which then gets automatically built from source outside the control of the author:

> The Flathub service then uses the manifest from your repository to continuously build and distribute your application on every commit.

https://docs.flathub.org/docs/for-app-authors/submission/#ho...

Usually the recipes should just list the appropriate Git-based URLs for the source code or, for proprietary applications, the official .debs. Kinda like the AUR, but JSON/YAML. Easy to audit if you want: https://github.com/orgs/flathub/repositories

Plus, if properly configured, Flatpak is supposed to securely sandbox every application anyway, although other systems it needs to be compatible with (e.g. X11) currently prevent that.

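To make the "short, plain-text recipe" concrete, a hypothetical Flathub manifest might look like the following. The app id, URL, tag, and commit hash are all made up for illustration; the field names follow the Flatpak manifest schema.

```yaml
# Hypothetical Flathub recipe: Flathub builds from this, the author
# never uploads binaries. All identifiers below are placeholders.
app-id: org.example.Hello
runtime: org.freedesktop.Platform
runtime-version: '23.08'
sdk: org.freedesktop.Sdk
command: hello
modules:
  - name: hello
    buildsystem: simple
    build-commands:
      - install -Dm755 hello.sh /app/bin/hello
    sources:
      - type: git
        url: https://example.com/hello.git
        tag: v1.0.0
        commit: 0123456789abcdef0123456789abcdef01234567  # pinned, auditable
```

Pinning the exact commit is what makes the recipe auditable: anyone can check what source the build came from.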
My idea for getting one step ahead of similar backdoors is to wrap sshd traffic in an spiped tunnel. Spiped is compiled from source and statically linked (the last stable version is from 2021).

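A sketch of that setup, assuming spiped is installed on both ends; port numbers and key paths here are illustrative, but the flags (`-e` encrypt, `-d` decrypt, `-s` source socket, `-t` target socket, `-k` keyfile) are spiped's real ones:

```sh
# One time: generate a shared secret key and copy it to both hosts
dd if=/dev/urandom bs=32 count=1 of=/etc/spiped/sshd.key

# Server: accept encrypted traffic on 8022, decrypt, forward to local sshd
# (port 22 would then be firewalled from the outside)
spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/spiped/sshd.key

# Client: listen locally on 8022, encrypt, forward to the server
spiped -e -s '[127.0.0.1]:8022' -t 'server.example.com:8022' -k /etc/spiped/sshd.key

# Then connect through the tunnel
ssh -p 8022 localhost
```

The idea being that an sshd backdoor listening for a magic payload never sees unauthenticated traffic: spiped drops anything not encrypted with the pre-shared key before it reaches sshd.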
I still want sshd updated by my OS package manager. A statically linked spiped, which is outside the scope of my OS package manager, is just a second line of defense.

If they are constantly on a VPN and not willing to disclose a real location or IP, then I fail to see why they should be trusted when they don't provide anything trustworthy themselves.

New project idea: OpenBackgroundCheck. Volunteer OSINT researchers attempt to dox any identity you submit, so you can know whether they're the Right Kind of OSS contributor or not. /s

I’ve seen some Google zero-day vulnerability analyses, and those looked out of this world, but this hack must be one of the all-time greats.

It's a helpful collection of information on this topic. But Kaspersky... really? No one with any sense has that voluntary backdoor in their network.
On the other hand, the shorter version communicates "five" without a human needing to count them up. Either way, some grep-ability would come from hits on samples within unit tests.
I bet a bunch of employers don’t want any unauthorized contributions to open source. For governments it seems much more niche: only a few specific projects would raise red flags.

Governments are made up of people. There is likely at least one real person (if not more) for whom this attack has been the entirety of their professional life for years.

> It’s inconceivable that I would have found the first stage and just given up

But what you were looking at might not be the first stage. You might be looking at the modified Makefile. You might be looking at the object files generated during the build. You might be looking at the build logs. You might be investigating a linking failure.

The reason for so many layers of obfuscation is that the attacker had no idea at which layer the good guys would start looking; at each point, they tried to hide in the noise of the corresponding build-system step.

In the end, this was caught not at the build steps but at the runtime injection step; in a bit of poetic justice, all this obfuscation caused so much slowdown that the obfuscation itself made it more visible. As TVTropes would say, this was a "Revealing Cover-Up" (https://tvtropes.org/pmwiki/pmwiki.php/Main/RevealingCoverup) (warning: tvtropes can be addictive).

Those who caught it were indeed very clever. But the attacker did make mistakes, particularly the Valgrind issues and the large performance regression. That is ultimately what raised suspicions.

Or someone further west who did this after work in the evenings. Or someone further east with the classic hacker schedule: get up late in the afternoon and start the serious hacking late at night.

Whilst the IP addresses, email headers, etc. should be examined meticulously, in the distant hope that they lead somewhere, the chances are that they won't. Very basic opsec.

> It does not have to be a government. There are other cyber criminals.

Criminals with the resources to maintain such a complex operation for two years?

If a government agency tasked with national security only started caring about a critical home project by Nebraskaman long after Nebraskaman collapsed, that country needs a new government agency.

It also seems to specifically violate the HN guidelines:

> Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.

<https://news.ycombinator.com/newsguidelines.html>

Snarky shallow dismissal ain't sensitive information. It's precisely the sort of behaviour which should be tied to, and reflect on the reputation of, a primary account. (Which can of course be pseudonymous, as mine is.)

Even if we go all the way and say the backdoor was directly written by Kaspersky themselves, does it change the value of the content (which is purely technical analysis) in any way?

I think this sentence actually understates what happened.
What I find more frightening than the technical aspects of the backdoor is the amount and degree of "social engineering" involved: The backdoor was the final product, but inserting it was only possible because at that point the entire xz project had long been taken over by malicious actors ("Jia Tan" and friends) - after a yearlong psychological operation against the maintainer. All that without the maintainer or anyone else noticing anything.
That's spy novel stuff - and if something like this is possible, it makes me wonder what else might be going on in other projects right now.
(You can see the same mentality reflected in the backdoor code itself. I find it remarkable how much of the code is devoted to active misdirection: it's not just trying to look harmless, the code actively tries to build a "narrative" through commit messages, comments, variable names, choice of commands, etc., of what it's ostensibly doing while really doing something completely different, leading anyone who inspects the code first to doubt themselves and their understanding of the code, then to suspect a bug, and then, maybe, to suspect malicious intent.)