Root Causes 489: Does AI Nullify E2EE?
Does AI kill end-to-end encryption? There is a contention that the presence of AI agents in the workstream will render your confidential information visible outside the encrypted communication channels and therefore that E2EE is pointless. We explore this argument.
- Original Broadcast Date: April 24, 2025
Episode Transcript
Lightly edited for flow and brevity.
-
Tim Callan
So Jason, you and I looked at somebody's blog post the other day, and this guy was making the argument that AI effectively nullifies the capability for end-to-end encryption.
-
Jason Soroko
Specifically DeepSeek.
-
Tim Callan
Specifically DeepSeek. So walk us through the argument on this.
-
Jason Soroko
The argument that was being made was: what's the purpose of end-to-end encryption? He didn't say AI creates risk around encryption; he said end-to-end encryption is dead. That's an extreme statement to make. The idea was that if you're going to interact with an AI system that's snooping around all your stuff - one of the other examples he gave was, I don't think he said Apple Intelligence, but he meant Apple Intelligence. In other words, AI on your phone, which is tracking a lot of things in order to understand your behaviors, in order to react as an AI.
-
Tim Callan
It's getting that stuff upstream of the encryption, because it's happening inside of the device or the software process before you go out to communicate with the other party.
-
Jason Soroko
So it's interacting with a lot of data about your behaviors. Your data overall. Like, we spent all this effort on the Signal app, on Apple iMessage end-to-end encryption, and he specifically talked about the lock icon - which doesn't even exist in the browser address bar anymore. In other words, TLS. Publicly trusted certificates. All that's dead. So he was throwing around end-to-end encryption, but also other forms of encryption, and asking why we bother with any of it when we're using DeepSeek at the same time. Well, let's address DeepSeek first.
In fact, maybe the next episode we're going to record, Tim, is going to be about Jason's AI stack, because I want to explain how I'm doing AI. It's a little different than most people. One of the ways I do things differently is that I actually run the open-source DeepSeek-R1 model locally. In other words, I'm not going to a cloud server and typing in intimate things about myself to -
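As a concrete sketch of what "running the model locally" can look like: one common setup is serving an open-weights model such as DeepSeek-R1 through a local runner like Ollama, which exposes an HTTP API on your own machine. The endpoint, port, and model name below are assumptions about a typical Ollama install, not a description of Jason's exact stack.

```python
import json
import urllib.request

# Assumed default for a local Ollama server; nothing here leaves the machine.
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(prompt: str, model: str = "deepseek-r1") -> dict:
    """Build the JSON payload for a single, non-streaming generation call."""
    return {"model": model, "prompt": prompt, "stream": False}


def ask_local_model(prompt: str) -> str:
    """POST the prompt to the local model server and return its reply text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example call (requires a running local server):
# print(ask_local_model("Summarize my notes."))
```

Because the prompt and the response only ever travel over localhost, the "intimate things about yourself" never reach a third-party cloud service.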
-
Tim Callan
That stuff is staying on your hard disk.
-
Jason Soroko
100%. So the way I do AI is different. Therefore, that argument in the blog post - that's the first one I want to shut down. In other words, you can do AI more securely. In fact, that's what the next episode is going to be about.
-
Tim Callan
Sure. By the way, we've seen that isn't just you doing it. Like, that's a pretty common productization of AI. If you're buying an enterprise AI product, like if you're using Copilot as part of a Windows license or Microsoft Office license, that stuff is staying locally. That is designed that way on purpose, because the people at Microsoft understand that these concerns exist.
-
Jason Soroko
Offline models work really well. It was a small point in the blog, but at least we're making the point. Saying that end-to-end encryption is dead is like blaming the envelope for a sensitive letter left on the table at the bar. You don't blame the envelope because you opened it and then left the letter somewhere. The envelope did exactly what it should.
-
Tim Callan
It's not like you're gonna stop using envelopes.
-
Jason Soroko
Exactly. Why would you stop using envelopes? Let's get back to the encryption and away from the analogy. Encryption is giving you a guarantee of authenticity, integrity, and confidentiality.
-
Tim Callan
Non-repudiation. And a certain, untampered nature - I guess that's integrity.
-
Jason Soroko
That’s important. Why in the world would you declare that dead? AI didn't do anything bad to that.
-
Tim Callan
Absolutely. If the point is, this is the real document that you really sent to me, and it was really you, and it was on this date, and I can prove all of that - even if AI is snooping on things it shouldn't be snooping on, that has nothing to do with any of it. It's only the secrecy, only the privacy aspect that is compromised. If it even is.
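The integrity and authenticity guarantees being discussed here can be sketched with a minimal message-authentication example. This toy uses Python's standard-library HMAC over a shared key; real E2EE protocols such as Signal's use authenticated encryption, which adds confidentiality on top. The key and messages are illustrative assumptions.

```python
import hashlib
import hmac

# Assumption for the sketch: sender and recipient already share this key.
KEY = b"shared-secret-key"


def sign(message: bytes) -> bytes:
    """Compute an authentication tag over the message."""
    return hmac.new(KEY, message, hashlib.sha256).digest()


def verify(message: bytes, tag: bytes) -> bool:
    """Constant-time check that the message is untampered and from a key holder."""
    return hmac.compare_digest(sign(message), tag)


message = b"this is the real document"
tag = sign(message)

assert verify(message, tag)                   # the genuine document verifies
assert not verify(b"tampered document", tag)  # any alteration is detected
```

Notice that an AI agent reading the plaintext on the endpoint has no bearing on whether the tag still verifies: the snooping risk and the integrity/authenticity guarantee are independent properties, which is the point being made above.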
-
Jason Soroko
I think that part of what you're seeing here is there's a few fallacies going on. We won't get into it too much. It's just, I think that people are confusing security and privacy. I think they're confusing the role of encryption and what it's actually doing. I think that that envelope analogy is the right one.
-
Tim Callan
I don't want to attribute too much to that author's motivations, because I don't know who it was. We just read this thing. But I think perhaps this wasn't so much a call to abandon encryption as it was alarm about a potential risk vector through AI that most of us aren't considering. I think that's valid.
-
Jason Soroko
It is valid, and it's been valid for a really long time. Way longer than AI has been around. AI is just accelerating the problem. Everybody's heard the joke, which is not quite a joke: the government asks you to put some information on a form, like a census form, and people get all up in arms. And yet they'll put 10 times more, 100 times more, up on social media. About themselves.
-
Tim Callan
Or tell you anything, as long as they get free WiFi.
-
Jason Soroko
They'll tell you anything. Exactly. Therefore, in order to have a personal assistant, even if it's a human personal assistant, that personal assistant is going to know a lot about you. If AI is serving that role, well, it's going to know a lot about you.
-
Tim Callan
Now, to go back to part of what you said before, this sort of depends on what we're defining as the risk. If there is a localized agent that indeed knows a lot about me, which it is storing locally, and if we feel that we have done an adequate job of protecting that local information store from breach - for want of a better word - theft, accidental leakage, then maybe we're good with that. I get that it is kind of a honeypot. If everything's in that one spot, and you figure out how to get in and own that system and take it, then you get all the good stuff; or if there's some kind of error and it's accidentally leaking when it shouldn't be, that's a problem. But I think this is the normal world of secure software development that we're talking about.
-
Jason Soroko
Tim, are you comfortable using cloud-based spreadsheets?
-
Tim Callan
Absolutely.
-
Jason Soroko
Cloud-based word processors?
-
Tim Callan
Absolutely. Same problem. If I don't want people to know my calendar because I'm afraid I might be kidnapped because I'm a high-net-worth individual, what's the real difference between somebody stealing that out of Google Cloud, somebody getting it off the cloud backup of my iPhone, and somebody getting it off my AI tool? What you really need to do is make sure those systems are hardened as well as we know how.
-
Jason Soroko
So what you're saying is it's the same old story as always?
-
Tim Callan
I think so.
-
Jason Soroko
You just got to be smart about what you're choosing to do, what you're choosing to put there, and if you're a high net worth individual, you got to be extra smart, because people are after your stuff. But the claim that end-to-end encryption is dead, please don't ever believe any crap like that.
-
Tim Callan
I think maybe that was a click-baity title.
-
Jason Soroko
So click-baity and dumb. Really dumb, too.
-
Tim Callan
And I don't agree with it either.
-
Jason Soroko
It wasn't even clever clickbait. And even the underlying point - hey, are there risks to my privacy here? - it's the same one we've always been facing.
-
Tim Callan
That's my big argument here is, yes, these concerns are valid. No, these concerns are not new. This is the exact same struggle that we have had for decades and will continue to have for decades to come, which is trying to harden incredibly complicated systems that are getting more complicated all the time, that are evolving at faster rates all the time, against an increasingly well-resourced set of savvy attackers who have the opportunity to gain tremendous wealth if they can figure out how to defeat our systems, and this is just living.
-
Jason Soroko
So here's some real basic advice on what Tim just said. You've heard this before. Don't put sensitive information into GPT-4o in the cloud. Don't put the same thing into DeepSeek-R1 in the cloud, if you happen to be using it online rather than offline. What's the difference? So folks, be smart.
-
Tim Callan
Then, on the other hand, if I'm not worried about anybody figuring out my grandma's fried chicken recipe, then I can go ahead and put that in, and if other people enjoy good fried chicken, I can just be happy for them. So, that's the other side of it. It kind of depends on what it is.
-
Jason Soroko
I mean, the things you want to be sensitive about: PII, customer information, stuff like that. And of course, really, really sensitive intellectual property, like code - there you want to be using something very specific. Copilot with an enterprise implementation is probably a really good example of how to do it right. There are lots of ways to do it wrong. So I really couldn't make any sense out of that blog post.
-
Tim Callan
Agreed with all of that. Again, my big argument is, I just think that's just another way of describing what we already understand as the normal state of affairs.