Saagar Jha

(replying to Saagar Jha)
The “lol I wouldn’t have done that” analysis is, IMO, like fixating on a “goto fail” level of bug (isolated, clearly wrong, obvious to fix) while there are also a thousand other memory corruptions in the language that are prohibitively expensive to fix. But for supply chains here

Saagar Jha

(replying to Saagar Jha)
The problem is that nobody can read all this code. That’s it. You can make the code 50% clearer or reduce the number of libraries loaded or increase auditing, but there are so many orders of magnitude more code being written than is properly reviewed that this can’t be fixed
4 replies →

Claudio Cicali

(replying to Saagar Jha)

@saagar nobody can read all the code, but AI can. Do you think putting AI in the pipeline of "what this software does" could be a smart thing to do in the future?

2 replies →

Saagar Jha

(replying to Claudio Cicali)
@caludio I think it would be a smart thing to try at least, but I have yet to see any AI system that can answer this accurately, especially when faced with an intentional backdoor

Noah Gibbs

(replying to Claudio Cicali)

@caludio @saagar Are we trusting it to do so?

Right now -- and probably forever -- AI's big problem is reliability. It can give you *an* answer, and under some circumstances a *right* answer, and security is so hard for humans because that's mostly not good enough.

AI seems to have a very human-like security problem profile. Which is a bad sign in this specific case.

Saagar Jha

(replying to Noah Gibbs)
@codefolio @caludio AI is IMO worse than humans currently because it is quite easy to trick it, far easier than it is to trick the average person

creator of #fediblock

(replying to Saagar Jha)
@saagar @caludio @codefolio easier to trick but on the other hand humans have limited endurance

Saagar Jha

(replying to creator of #fediblock)
@roboneko @caludio @codefolio Yes and in that aspect AI is much better. However replacing your fraud prevention person with a hundred children is unlikely to actually be a positive step even though they might be able to review more cases

creator of #fediblock

(replying to Saagar Jha)
@saagar @caludio @codefolio ok but what if we have the children flag things they think are really suspicious and otherwise leave the adult alone

I'm not saying AI is there just yet but it seems (to me at least) like a plausible next-year-or-so development

Saagar Jha

(replying to creator of #fediblock)
@roboneko @caludio @codefolio idk my experience with children is that they never leave adults alone except when they are doing something contrary to what the adults told them to do

Claudio Cicali

(replying to Saagar Jha)

@saagar @roboneko@bae.st @codefolio this is a beautiful metaphor👌


Misuse Case

(replying to Saagar Jha)

@saagar I think it is worth clarifying that the danger of deliberately or accidentally introduced vulnerabilities is not just a problem with this particular codebase or open source projects or anything like that. This is a problem with *everything* because it’s all so complex these days.

No single person “understands” it all. No small group of people does. Nobody can. It’s too big.

1/2

Misuse Case

(replying to Misuse Case)

@saagar And I see some of y’all saying “but AI can help with reviewing this big codebase.” No, I assure you, it cannot. If humans can’t understand these huge masses of code, then AI (which isn’t really that at all) certainly can’t, because AI doesn’t “understand” anything.

It can maybe find possible compliance/rulebreaking issues, generating many false positives along the way, but not holistic/architectural gaps to be exploited. You need humans to find those.

2/2


Phil Dennis-Jordan

(replying to Saagar Jha)

@saagar Yeah, this struck me as well: it’s kind of nuts that we’re apparently fine with things like sshd loading, and being able to load, a giant unbounded ball of transitive libraries. As annoying as the way it’s implemented is, Apple’s code signing/library validation setup would have presumably prevented this, even if the payload wasn’t glibc specific? Possibly even a (say) homebrew version of sshd. (I’ve not had a chance to look exactly how the code is injected.)

Greg Parker

(replying to Phil Dennis-Jordan)

@pmdj @saagar I don't think code signing and library validation would have prevented this attack. My understanding is that sshd intends to load liblzma (perhaps indirectly) and that the malicious code in liblzma is introduced by subverting the legitimate build process. No unsigned code being executed, no unauthorized libraries being loaded.

The mechanism by which code inside liblzma can interfere with other parts of sshd might be more difficult with other Apple security protections. But that might only be a speed bump, not a wall.

Saagar Jha

(replying to Greg Parker)
@gparker @pmdj Yeah the model here gives the attacker not just arbitrary code execution in the target process, but also the ability to introduce new code, which Apple has no way to mitigate against reliably

creator of #fediblock :verified::makemeneko:

(replying to Saagar Jha)
@saagar

> there are so many orders of magnitude more code being written than is properly reviewed that this can’t be fixed

if the focus is "core OS utilities" instead of "all code, everywhere" does this really need to be the case? it seems reasonable that security critical infrastructure might be held to a higher standard. ssh and inkscape are not remotely the same

Saagar Jha

(replying to creator of #fediblock :verified::makemeneko:)
@roboneko I think even critical infrastructure is quite large. Logging into a machine requires some sort of thing to approve that, all the dependencies it pulls in, language runtime, kernel, hypervisor, [unspeakable hardware horrors], … all to be good

Saagar Jha

(replying to Saagar Jha)
It makes me so sad because I want this to be fixed and I want to go “oh if we paid maintainers some money the problem would go away” but, like, it just doesn’t seem to work. There is just so much code. We are drowning in it. The complexity of our stacks is insane
1 reply →

Janis

(replying to Saagar Jha)

@saagar I pasted some code into Gemini and asked it to comment the doc. It did. You're right there aren't enough humans to read all the code. What we need are bots that can go in and find what we're looking for.

Saagar Jha

(replying to Janis)
@janisf Gemini is not capable of explaining what this backdoor does, unfortunately

Saagar Jha

(replying to Saagar Jha)
Again, this isn’t to say that we shouldn’t do any of the obvious solutions. I want big companies who make billions in profit to invest in those first too. But I have yet to see an answer to the problem of “how do we prevent backdoors”. I think we might not be able to
2 replies →

Ben Cohen

(replying to Saagar Jha)

@saagar this thread is a lone voice of reason on my timeline, surrounded by boosts of the “this is entirely because we don’t pay maintainers” narrative.

Saagar Jha

(replying to Ben Cohen)
@airspeedswift It’s one of those dangerous solutions because you look at it and it’s clearly better than the alternative (maintainers burn out and don’t get paid) and we *should* do it. But it doesn’t stop supply chain attacks in general (and maybe not even this one)
2 replies →

Ben Cohen

(replying to Saagar Jha)

@saagar yeah it seems lots of people are confusing correlation (the original maintainer burnt out) with causation (the original maintainer handed off to someone; as it happens because they burnt out, but if they'd simply moved on without burning out, which is a thing, the same events could have happened)


Helge Heß

(replying to Saagar Jha)

@saagar @airspeedswift Aren’t both problems that Linux distro companies like RedHat and SuSE supposedly handle? (paying maintainers and also managing what exactly gets packaged)

Noah Gibbs

(replying to Helge Heß)

@helge @saagar @airspeedswift

In theory, yes. In practice there is far too much code for them to review, too.

Also, the difficulty with profit/paid solutions is that *not* reviewing all that code, or reviewing it badly, is *always* cheaper than reviewing it, let alone reviewing it all well -- which is certainly impossible under current conditions.


Maximilian Mackh

(replying to Saagar Jha)

@saagar we probably need to re-think the computer paradigm. Local-first services/devices. I think the cloud was a giant mistake.

Saagar Jha

(replying to Maximilian Mackh)
@mmackh I agree but I think this is orthogonal

Saagar Jha

(replying to Saagar Jha)
But, perhaps, there is solace in the fact that this is basically all of computer security. We just shift around the calculus of which things are profitable to do as we steadily raise the bar everywhere. We can’t stop everything, but maybe it’s for the best that this was a backdoor…

Saagar Jha

(replying to Saagar Jha)
…because, I mean, the thing we usually see is people getting hacked because their code is just broken, not backdoored. So maybe we’ve finally reached the point where the code was just functional enough to make trying this attractive. One can hope, at least

Saagar Jha

(replying to Saagar Jha)
*Or maybe not, which is the other thing I am a little hopeful about. The steps needed to find this were quite impressive, but absent any other information about backdoors, in particular the ones that actually stay hidden, this got discovered pretty quickly, relatively speaking

Saagar Jha

(replying to Saagar Jha)
So like, statistically, it might be that making a backdoor that is actually undetectable for a while is really difficult. “Many eyes make all bugs shallow” and whatnot, except in a kind of different Bayesian version that nobody really likes but is a little reassuring
1 reply →

i.grok

(replying to Saagar Jha)

@saagar I think there's some evidence for that, given the various commits to disable various checkers that were exposing that something hinky was going on in order to cover it up

The only reason this didn't get more attention is that our tools are too often noisy with false alarms

To me, that's an indication that making the attacks harder isn't a waste of time—and some of those tools didn't even exist a few decades ago, so we're making it better

Saagar Jha

(replying to i.grok)
@igrok I feel like these tools make it harder to make a backdoor but I was surprised that the attacker didn’t just change their backdoor to operate cleanly in those environments. Maybe they just thought this was easier

i.grok

(replying to Saagar Jha)

@saagar they were definitely rushing

Likely because systemd was about to disable their backdoor wrt sshd

But the tools definitely increased the profile and thus the risk. Not enough, but it complicated their lives enough to slow them down

Which is something we can be happy about

Saagar Jha

(replying to Saagar Jha)
An extreme case of Hyrum’s Law I guess, where people will accidentally and unknowingly become dependent on their code not being backdoored