My standard development practices, like code review and unit tests, do not scale to reviewing and testing every dependency of every dependency of my projects, even at company-wide scale.
The internet would become a nicer place (again) if we killed off JavaShit and the concept of obfuscating ("minifying") code to make View Source practically unusable.
> Far too many people, including developers, seem to just accept upgrades as always-good instead of carefully considering the risks and benefits.
>
> Another example of this was log4j: if you were still using the old 1.x log4j versions, you weren't vulnerable to the log4shell vulnerability, since it was introduced early in the 2.x series. The old 1.x log4j versions had other known vulnerabilities, but only if you were using less common appenders, an uncommon server mode, or a built-in GUI log viewer (!); the most common use of log4j (logging to a local file) was not exposed to any of these, and in fact you could remove the vulnerable classes and still have a functional log4j setup (see for instance https://www.petefreitag.com/blog/log4j-1x-mitigation/ which I just found on a quick web search).
>
> Did log4shell (and a later vulnerability which could only be exploited on Java 9 or later, because it depended on a method introduced in Java 9) lead people to question whether always being on the "latest and greatest" was a good thing? No, AFAIK the opposite happened: people started to push even harder to keep everything on the latest release, "so that when another vulnerability happens, upgrading to a fixed version (which is assumed to be based on the latest release) will be easy".
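
The class-removal mitigation the quote mentions is mechanical once you know which class to drop. A minimal Python sketch, assuming a log4j 1.2.17 jar and taking JMSAppender (the class the linked post singles out) as the one to remove; the paths and file names are illustrative:

```python
import zipfile

SRC = "log4j-1.2.17.jar"                  # hypothetical input jar
DST = "log4j-1.2.17-stripped.jar"         # hardened copy
DROP = {"org/apache/log4j/net/JMSAppender.class"}  # vulnerable class per the linked post

# Rewrite the jar, copying every entry except the ones we want gone.
with zipfile.ZipFile(SRC) as src, \
     zipfile.ZipFile(DST, "w", zipfile.ZIP_DEFLATED) as dst:
    for item in src.infolist():
        if item.filename not in DROP:
            dst.writestr(item, src.read(item.filename))
```

The linked post achieves the same effect in place with `zip -q -d` on the jar; either way, plain logging to a local file keeps working without the risky class.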
> valuable, not-yet well understood, and public exploits

Except in the scenario that is this exact case: supply chain attacks that are developed with the exploit in mind.
The Jia Tan character was never rude. If you make rudeness the thing that raises a red flag, then ‘nice’ fake accounts will bubble up to do the pressuring.
The assumption is that the group behind this attack had sock puppets that were rude to Lasse Collin, to wear him down, and then Jia Tan swept in as the savior.
Never allow yourself to be bullied or pressured into action. As a maintainer, the more a contributor or user nags, the less likely I am to oblige.
It makes Rich Hickey's "Open Source Is Not About You" [0] particularly poignant.

As a hobbyist developer/maintainer of open source projects, I strive to remember that this is my gift to the world, and it comes with no strings attached. If people have any expectations about the software, it's for them to manage; if they depend on it somehow, it's their responsibility to ensure timely resolution of issues. None of this translates to obligations on my part, unless I explicitly make promises.

I empathize with Lasse having been slowed down by mental health issues. I have been, too. We need to take good care of ourselves, and proactively prevent the burden of maintainership from exacerbating those issues.

[0]: https://gist.github.com/g1eny0ung/9e7d4d0f72547a8d156452e76f...
> I would have banned the account

Yeah, same. We should be much more willing to kick jerks out of our work spaces. The work is hard enough as it is without also being shit on while you do it.
The premise of the post I replied to is that the mailing list moderation is currently not great and that it allows people to be abusive. It suggests that we should crowdsource this moderation; I assume they think this will lower the burden.

I myself do not think that this is the actual problem. I think the actual problem is that many FOSS communities have fostered an idea that cracking down on certain types of behavior is censorship. Of course they might all agree to ban somebody who is saying certain well-known explicit words or being really, really aggressive, but there's a lot of behavior that I would consider bannable that they do not. I'm trying to avoid mentioning specific cases because I don't want to either reopen old wounds or cause another similar dustup.

I can say that it reminds me a lot of the really old post about the five geek social fallacies, in particular social fallacy #1: https://plausiblydeniable.com/five-geek-social-fallacies/

Obviously we're not talking about geek culture here (in general, anyway). After rereading it, I saw something else that we actually do see a lot even today. This in particular has happened on many a mailing list, including the ones I'm not specifically mentioning:

>> Well-known annoyances get tolerated for years, and the people who do the work start moving their discussions to bug trackers, irc, or in personal emails to avoid interacting with them.
If it's open source they can just fork it, and if you no longer maintain yours you can put up a link to their fork (or any other active forks). It's still on the user to vet new forks.
Your experiences may differ, but I'd say pretty much anyone who lived through the past few years has reason enough to pay careful attention to their mental health.
In cryptography, there's the concept of a nothing-up-my-sleeve number [1]. Instead of obscure constants, you use known constants, or at least simple methods to derive your constants. You can do the same thing to come up with your test cases.

Bad zip? Construct a good zip of 10 files, each containing the first 10,000 prime numbers. Then corrupt the zip by seeking to position (100/pi) and writing a thousand zeroes there. Bad JPEG? Use ImageMagick to render the first 1000 prime numbers as text into a JPEG file, then apply a simple nothing-up-my-sleeve corruption operation.

There are still cases where this approach isn't going to work: that new icon, helpfully proposed by a contributor, meant to be used in production, might contain malicious code, steganographically embedded. I think there's little you can do to prevent that.

[1]: https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number
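
A minimal Python sketch of the zip recipe, treating the comment's "(100/pi)" loosely: here the corruption offset is derived from the archive length and pi, so the whole test file is reproducible from its prose description (file names and counts are arbitrary):

```python
import io
import math
import zipfile

def first_primes(n):
    """Return the first n primes by simple trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p for p in primes if p * p <= candidate):
            primes.append(candidate)
        candidate += 1
    return primes

# A well-formed zip: 10 files, each listing the first 10,000 primes.
primes_text = "\n".join(map(str, first_primes(10_000))).encode()
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    for i in range(10):
        zf.writestr(f"primes_{i}.txt", primes_text)
data = bytearray(buf.getvalue())

# Nothing-up-my-sleeve corruption: a run of zeroes at an offset
# derived from pi rather than from some unexplained magic constant.
offset = int(len(data) / math.pi)   # assumes offset + 1000 <= len(data)
data[offset:offset + 1000] = b"\x00" * 1000

with open("bad.zip", "wb") as f:
    f.write(data)
```

Anyone can regenerate the identical "bad" archive from the recipe, which is the point: there is no room left for a secretly meaningful test blob.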
A consortium is a great way to get money and power to those maintainers. I never said it should take power from them or provide code. I think people are hearing their own minds here, not mine.
lcamtuf's two posts argue that this may simply not be an open-source maintainer's job to defend against. ("The maintainers of libcolorpicker.so can't be the only thing that stands between your critical infrastructure and Russian or Chinese intelligence services. Spies are stopped by spies.") That doesn't mean we shouldn't try to help burnt-out committers, but the problem seems very hard. As lcamtuf also says, many things don't need maintenance, and just paying people doesn't address what happens when they just don't want to do it anymore.

In an alternate universe with different leadership, an organization like the FSF might use donated funds to pay a maintenance staff, and an open-source maintainer might be able to lean on them. Of course, that still doesn't address the problem of Jia Tan getting a job with this organization.

https://lcamtuf.substack.com/p/technologist-vs-spy-the-xz-ba...

https://lcamtuf.substack.com/p/oss-backdoors-the-allure-of-t...
> "I'm trying to work backwards from "how do we stop bad actor Tan from getting maintainer access to the project?" Creating an identify-verified relationship (employment) is a good fit for that, I think." It would stop a sock puppet, but Jai Tan might be a real person, a real developer paid or blackmailed by a hostile group; Amazon might just have hired him and handed over maintainer access to him thinking it was above board, if a problem hadn't been found yet. I don't know where Jai Tan claimed to be from, but it's quite possible they would say "I don't have a passport", "I can't leave my family to travel to America for an in-person interview", "I'm not in good health to travel", "I don't speak English well enough for an in-person interview", "I live in a poor country without a functioning government and have no tax documents", or etc. etc. excuses which are quite plausible > "Or, if he was feeling burnt out, he may have felt more comfortable passing the torch earlier if there was a clear framework to do so, backed by an entity that can do some of the heavy lifting of naming/validating a successor." Your suggested $200k is equivalent to £160k GBP in the UK; look at this UK average salary list: https://uk.jobted.com/ no job comes close; not Managing Director, IT director, Finance Director, Aerospace engineer, DevOps engineer, neurosurgeon, nothing on the list is above £110k. Sure there are many people earning that much as a senior devops AI cloud security specialist in a fast paced London based fintech trading house, but the idea that someone would comfortably pass up a salary around the 98th percentile of incomes in the country for like 2 days a month of work because they're "feeling burnt out" is unthinkable. Anyone sensible would hold onto that until they pried it out of one's cold dead hands; American tech salaries are almost literally unbelievable. Even moreso if we consider a maintainer in a poorer country. > "I tweaked your questions a tiny bit to rephrase them as I interpreted them. I think the spirit of your questions was kept, I apologize if not" I started writing Tan, but then changed it. A lot of your reply is assuming that we know there were malicious patches and suspect Jigar Kumar was a bad actor and that the big company would be somewhat US friendly. But we can't plan to know all that for all situations like this. Some people will be speculating that the previous paid maintainer was complicit and all their work and merges are now suspect. The billion dollar company who hired Collins in this hypothetical maintainer could be Baidu or Yandex or Saudi Aramco, and then people would be suspicious. It's one thing to have your task be "don't make people mad" but doesn't that change if people getting mad can give you unbounded retrospective work and responsibility? > "If these tasks fall under the "don't make everyone mad" job responsibility, then yes. [...] Was there actually a real maintenance issue? [...] Or, the community could raise the problem to Big Company, who can do the work of verifying that there is a problem and working with the maintainer to resolve it." As soon as the internet becomes aware that they can get anything merged ASAP by threatening to get mad, everyone will be mad about everything all the time. Whom at the BigCo will do the work of verifying whether there is a problem? I mean, let's put Lasse Collins on a specific team along with other employees who are expected to work 40-80 hour weeks while he isn't. 
The pressure on the team manager to ditch the maintainer and distribute his salary among the other team members would be constant. If those other team members see him doing less work for similar or more money it would be a morale killer and they would want to leave. If they also have to know his project well enough to follow all the drama and things people are complaining about and tease out what is and isn't a real problem and coerce him to do his job, sorry 'work with him', well, they won't be very motivated to do that. |
What about just using a 'web of trust', for example with GPG? If the user's key is signed by people who have met the actual person, it would be much harder to create fake identities.
They could do that, but you'd be able to see that nobody (or almost nobody) outside their cluster had signed any of their keys. Let's say they have fake passports and physically appear at key signing parties. Now you're screwed, because even your peers (who you thought knew how to validate identities using passports) will get fooled.

Read more on GPG's trust levels: https://www.gnupg.org/gph/en/manual/x334.html
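
To make the "cluster" point concrete, here is a toy sketch (not GPG's actual trust computation, which also involves ownertrust levels and signature thresholds): treat the web of trust as a graph and accept only identities reachable from keys you verified in person. A ring of fake identities that only sign each other stays unreachable:

```python
from collections import deque

# key -> keys it has signed (a tiny made-up web of trust)
signatures = {
    "me":    {"alice", "bob"},       # keys I verified in person
    "alice": {"carol"},
    "bob":   {"carol"},
    "carol": set(),
    # fake identities that only sign each other:
    "fake1": {"fake2", "fake3"},
    "fake2": {"fake1", "fake3"},
    "fake3": {"fake1", "fake2"},
}

def reachable(root, graph):
    """All keys connected to `root` by some chain of signatures (BFS)."""
    seen = {root}
    queue = deque([root])
    while queue:
        for nxt in graph.get(queue.popleft(), ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

trusted = reachable("me", signatures)
print(sorted(trusted))     # ['alice', 'bob', 'carol', 'me']
print("fake1" in trusted)  # False: the fake cluster is disconnected
```

The second scenario above, fake passports at key-signing parties, defeats exactly this check: it adds a genuine edge from an honest key into the fake cluster.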
I think it is unauthenticated from the point of view of SSH’s own authentication. The backdoor has its own credential, but the RCE is accessible if you don’t have an account on the system.
What makes you say that? SSH, RDP, even hitting a web service are all valid cases of authorized remote code execution. It's not the remote or execution parts that are bad.
I’m of the opinion that there are backdoors in most of our software and a lot of our hardware. xz just happened to be caught because it was hogging resources.
Open source fundamentally does not work that way. There are many important open source contributors who work pseudonymously.

Google's "Know, Prevent, Fix" blog post floated the idea of stronger identity for open source (https://security.googleblog.com/2021/02/know-prevent-fix-fra...) and there was very significant pushback. We learned a lot from that.

The fundamental problem with stronger identity is that spy agencies can create very convincing ones. How are distros going to detect those?
You make excellent points; I agree, especially about a non-maintainer with a high-quality contribution gaining trust. Many times, (tired) maintainers are forced to "rubber-stamp" and merge such high-quality patches. It could be due to any number of (valid) reasons: a CVE fix, an involved performance fix that would take you weeks to load up on the context, enabling a hardware feature that's under semi-NDA, you just trust their work too well, maintainer fatigue, etc.

What I'm saying is that, in the context of critical-path software, the identity of maintainers vs. non-maintainers matters more. I'm not naively claiming that it'll "solve" the problem at hand, just that it's another layer of defense. For critical software, you shouldn't be able to simply submit a "patch" like [1], commit it yourself, brazenly push it into Linux distros, and then anonymously sign off into the sunset with no trace. I'm sure you'll agree that there's a world of difference between a deeply entrenched, critical library and a random user-space application.

It's a messy situation. How much, if at all, "clever tech" can mitigate this human "trust issue" is an open problem for now.

[1] https://git.tukaani.org/?p=xz.git;a=commitdiff;h=cf44e4b7f5d
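
For context on what [1] did: the commit is widely reported to have sabotaged the build-time feature check that enables Landlock sandboxing, using a single stray character so the probe never compiles and the feature is silently dropped. A toy Python model of that failure mode (the real change was in xz's CMake check, not Python; `cc` on PATH is assumed):

```python
import os
import subprocess
import tempfile

def compiles(c_source: str) -> bool:
    """Mimic a configure/CMake probe: 'compiles' means 'feature available'."""
    with tempfile.TemporaryDirectory() as d:
        src = os.path.join(d, "probe.c")
        with open(src, "w") as f:
            f.write(c_source)
        result = subprocess.run(["cc", "-c", src, "-o", os.devnull],
                                capture_output=True)
        return result.returncode == 0

PROBE = "int main(void) { return 0; }"
SABOTAGED = "." + PROBE   # one stray character: the probe can never compile

print(compiles(PROBE))      # True  -> feature (e.g. sandboxing) enabled
print(compiles(SABOTAGED))  # False -> feature silently disabled
```

The insidious part is that a failed probe is not an error; build systems treat it as "this platform lacks the feature" and carry on.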
One thing that could be added, for the sake of completeness: in the part "Attack begins", toward the end, when they are pushing for updating xz in the major distros, Ubuntu and Debian are mentioned but not Fedora.
Looks like the social engineering/pressuring for Fedora started at least weeks before March 4, 2024, according to a comment by @rwmj on HN [1]. I also found this thread on Fedora's devel list [2], but didn't dig too much.
[1] https://news.ycombinator.com/item?id=39866275
[2] https://lists.fedoraproject.org/archives/list/[email protected]...