Tim Mahoney

(replying to Saagar Jha)

@saagar so you’re saying that since Apple controls the hardware, Apple could potentially violate the privacy guarantees in a way that isn’t verifiable/visible externally?


Tim Mahoney

(replying to Saagar Jha)

@saagar

So even if the software guarantees/attestation/etc remain secure, some sort of hardware change could compromise the system?

I find it hard to believe that a vulnerability like that would be overlooked. I guess I assume the flow would also include hardware attestation.

Or is it something else I’m missing?

Saagar Jha

(replying to Tim Mahoney)
@_tim______ Overlooked by whom? I don’t see the server. Apple can do whatever they want to it. Like, for example, it could have a little attachment on it that waits to pull decrypted data out of RAM (don’t take this example too seriously)

Tim Mahoney

(replying to Saagar Jha)

@saagar overlooked by Apple. Unless the insinuation is that Apple purposefully left a vulnerability, which I’d find hard to believe.

You’re saying some sort of physical attachment to the hardware? I’m out of my area of expertise here, so I’m not sure what’s actually possible.

Saagar Jha

(replying to Tim Mahoney)
@_tim______ Not a vulnerability. This would be Apple acting intentionally to peek inside their servers

Ben!

(replying to Saagar Jha)

@saagar @_tim______

“Our threat model for Private Cloud Compute includes an attacker with physical access to a compute node and a high level of sophistication — that is, an attacker who has the resources and expertise to subvert some of the hardware security properties of the system and potentially extract data that is being actively processed by a compute node.”

Saagar Jha

(replying to Ben!)
@enefekt @_tim______ Yes it’s good that they understand the actual threats well but the problem is their threat model starts off with an attacker with problematic capabilities. Like, “we model an attack that can already access data” is…not what Craig is saying in interviews

Saagar Jha

(replying to Saagar Jha)
@enefekt @_tim______ Normally you have a threat model that’s like “we think an attacker has arbitrary read” (not a desirable end goal) and it ends with “we prevent them from getting code execution” (desirable end goal). They start from an unfortunate place

Saagar Jha

(replying to Saagar Jha)
@_tim______ @enefekt If someone said “our threat model is that an attacker has access to your iMessage conversations” and they went on to say “we think it’s hard for them to get a *specific* message from your phone, without exfiltrating all of them” you’d go “wtf”

Saagar Jha

(replying to Saagar Jha)
@_tim______ @enefekt You’d think that was an awful threat model–attackers have access already! But that’s exactly what they’re starting from here. And mind you, their protection is “we think we’d find them if they did a broad attack” not “it’s technically infeasible to do this”

Saagar Jha

(replying to Saagar Jha)
@_tim______ Like, Apple is selling that they are better than their competitors because even though their competitors can put “we don’t look at your data” in their privacy policy, they can do it anyway. But Apple can also do the same thing

Tim Mahoney

(replying to Saagar Jha)

@saagar I believe they’re saying they _can’t_ do the same thing, but I might be wrong.

Saagar Jha

(replying to Tim Mahoney)
@_tim______ Their advertising says this. Their blog post does not corroborate it

Saagar Jha

(replying to Saagar Jha)
@_tim______ Who is checking for that? How often? There is a note about a third party audit but we don’t know anything about that yet. And I’m still not sure how much trust that would confer

Tim Mahoney

(replying to Saagar Jha)

@saagar That’s a good question. You’d hope that this whole system was provably secure even if someone has access to all the hardware, software, everything. I guess we’ll see as more information comes out.

Saagar Jha

(replying to Tim Mahoney)
@_tim______ See I don’t think we actually know how to do this yet at a theoretical level

Tim Mahoney

(replying to Saagar Jha)

@saagar from what I gather, part of the hard problem here is running the code in an environment that is protected from peeking at the data even while the execution is in progress. Basically like Intel SGX, but not compromised.

The write up mentions the Secure Enclave. Maybe Apple figured it out?
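The mechanism Tim is gesturing at is measurement-based attestation: the node reports a hash of the software it booted, and the client checks it against known-good builds. A minimal sketch of that check (image names are hypothetical; real attestation uses quotes signed by a hardware root of trust, not a bare hash lookup), which also illustrates Saagar's point that it measures software identity, not whatever is physically attached to the machine:

```python
import hashlib

# Hypothetical allowlist of measurements for approved node images.
ALLOWED_MEASUREMENTS = {
    hashlib.sha256(b"pcc-node-image-v1").hexdigest(),
}

def verify_attestation(reported_measurement: str) -> bool:
    # Accept the node only if its reported software hash matches a
    # known-good build. Note what this does NOT cover: hardware added
    # after measurement, or anything reading RAM out of band.
    return reported_measurement in ALLOWED_MEASUREMENTS

good = hashlib.sha256(b"pcc-node-image-v1").hexdigest()
bad = hashlib.sha256(b"tampered-image").hexdigest()
assert verify_attestation(good)
assert not verify_attestation(bad)
```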

Mark Pauley

(replying to Tim Mahoney)

@_tim______ @saagar I am assuming that if you cannot cryptographically guarantee the actual topology of the hardware it’s running on then you can never be 100% secure if Alice can’t even trust Bob. Maybe with homomorphic encryption?

Light

(replying to Mark Pauley)

@unsaturated @_tim______ @saagar
>Maybe with homomorphic encryption?
That's what I assumed was being talked about at first.

Mark Pauley

(replying to Light)

@light @_tim______ @saagar I’ve heard HE basically doesn’t work :tiredcat:
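For context on what homomorphic encryption means here: it lets a server compute on ciphertexts without ever decrypting them. A toy sketch of the idea using textbook RSA's multiplicative homomorphism (tiny primes, no padding, purely illustrative and insecure; fully homomorphic encryption for arbitrary computation like ML inference is vastly more expensive, which is presumably what "basically doesn't work" alludes to):

```python
# Textbook RSA with toy parameters -- NOT secure, demo only.
p, q = 61, 53
n = p * q                    # modulus
phi = (p - 1) * (q - 1)
e = 17                       # public exponent
d = pow(e, -1, phi)          # private exponent (Python 3.8+ modular inverse)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 7, 6
# The server multiplies ciphertexts without seeing the plaintexts:
c = (enc(a) * enc(b)) % n
assert dec(c) == (a * b) % n   # decrypts to 42
```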


Dominic Hopton

(replying to Saagar Jha)

@saagar @_tim______ in many ways this whole space is riddled with ‘no liquids’ security theater. You only need *one* person to get through the ‘perimeter’ with a ‘magic dongle’ that reads stuff off the side of the rack. Or *one* person to open a gap. If you don’t get it through the first 10,000 times, it’s OK. But that 10,001st time…

Tim Mahoney

(replying to Tim Mahoney)

@saagar if any of my code or PR comments are involved, do you think the lawyers will enjoy my jokes?