Hacker News | Stefan-H's comments

The point of E2EE is that only the people/systems that need access to the data are able to read it. If the message is encrypted on the user's device and then is only decrypted in the TEE where the data is needed in order to process the request, and only lives there ephemerally, then in what way is it not end-to-end encrypted?


Because anyone with access to the TEE also has access to the data. The owners can say they won't tamper with it, but those are promises, not guarantees.


That is where attestation comes in: it shows that the environment is running only cryptographically verified builds of open source software that have no mechanism to allow tampering.
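
To make that concrete, here is a toy sketch of the client-side check, with placeholder image hashes, skipping the vendor-signed report that real attestation schemes (SGX, SEV-SNP, and the like) also verify:

    import hashlib
    import hmac

    # Known-good measurement: the hash a reproducible build of the
    # audited open source enclave image is expected to produce
    # (placeholder value for illustration).
    EXPECTED_MEASUREMENT = hashlib.sha256(b"audited-enclave-image-v1").hexdigest()

    def verify_attestation(reported_measurement: str) -> bool:
        # Real schemes also check a signature chain rooted in the CPU
        # vendor's attestation key; here we only compare measurements.
        return hmac.compare_digest(reported_measurement, EXPECTED_MEASUREMENT)

    # A genuine enclave reports the expected hash; a tampered one cannot.
    genuine = hashlib.sha256(b"audited-enclave-image-v1").hexdigest()
    tampered = hashlib.sha256(b"enclave-with-admin-backdoor").hexdigest()

    assert verify_attestation(genuine)       # client proceeds
    assert not verify_attestation(tampered)  # client refuses to send data

If the measurement doesn't match the audited build, the client never sends its keys or data in the first place.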


That's insufficient. Code signing doesn't do anything against theft or malfeasance by internal actors. Or external ones, I suppose.

If the software can modify data legitimately, it can be tampered with.


The point of measured environments like the TEE is that you are able to make guarantees about all the software that is running in the environment (verified with the attestation). "If the software can modify data legitimately, it can be tampered with." - the software that makes up the SBOM for these environments does not expose administrator functions that could access the decrypted data.


Just like your mobile device is one end of the end-to-end encryption, the TEE is the other end. If properly implemented, the TEE would measure all software and ensure that there are no side channels through which the sensitive data could be read.


By that logic SSL/TLS is also end-to-end encryption, except it isn't


When the server is the final recipient of a message sent over TLS, then yes, that is end-to-end encryption (for instance if a load balancer is not decrypting traffic in the middle). If the message's final recipient is a third party, then you are correct, an additional layer of encryption would be necessary. The TEE is the execution environment that needs access to the decrypted data to process the AI operations, therefore it is one end of the end-to-end encryption.


This interpretation basically waters down the meaning of end-to-end encryption to the point of uselessness. You may as well just say "encryption".


E2EE is usually applied in contexts where the message's final recipient is NOT the server on the other end of a TLS connection, so yes, this scenario is a stretch. The point is that in the context of an AI chat app, you have to decide on the boundary that you draw around the server components that are processing the request and necessarily need access to decrypted data, and call that one "end" of the connection.


No need to make up hypotheticals. The server isn't the final destination for your LLM requests. The reply needs to come back to you.


If Bob and Alice are in an E2EE chat Bob and Alice are the ends. Even if Bob asks Alice a question and she replies back to Bob, Alice is still an end.

Similarly with AI. The AI is one of the ends of the conversation.


So ChatGPT is end-to-end encrypted?


No, because there is a web server that exposes an API that accepts a plaintext prompt and returns plaintext responses (even if this API is exposed via TLS). Since this web server is not the same server as the backend systems that are processing the prompt, it is a middle entity, rather than an end in the system.

The difference here is that the web server receiving a request for Confer receives an encrypted blob that is only decrypted in memory inside the TEE where the data will be used, which IS an end in the system.
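
For illustration, a rough sketch of that shape, assuming an X25519 + HKDF + AES-GCM construction via Python's cryptography package - this is the general pattern, not Confer's actual protocol:

    import os

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Key pair generated inside the TEE; the public half is published
    # alongside the attestation so clients know what they encrypt to.
    tee_key = X25519PrivateKey.generate()
    tee_pub = tee_key.public_key()

    # Client side: ephemeral ECDH against the attested TEE key, then AEAD.
    client_key = X25519PrivateKey.generate()
    shared = client_key.exchange(tee_pub)
    aead_key = HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=b"prompt-encryption").derive(shared)
    nonce = os.urandom(12)
    blob = AESGCM(aead_key).encrypt(nonce, b"my private prompt", None)

    # The web server only relays (client public key, nonce, blob): all
    # opaque to it. Only inside the TEE can the same key be derived and
    # the prompt decrypted, in memory, for the duration of the request.
    shared_tee = tee_key.exchange(client_key.public_key())
    aead_key_tee = HKDF(algorithm=hashes.SHA256(), length=32,
                        salt=None, info=b"prompt-encryption").derive(shared_tee)
    assert AESGCM(aead_key_tee).decrypt(nonce, blob, None) == b"my private prompt"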


Is your point that TLS is typically decrypted by a web server rather than directly by the app the web server forwards traffic to?


Yes. I include Cloudflare as part of the infrastructure of the ChatGPT service.


See my other comment, but the answer here is resoundingly "No". For the communication to be end-to-end encrypted, the payload needs to stay encrypted through all steps of the delivery process until it reaches the final entity it is meant for. Infrastructure like Cloudflare is generally configured to be able to read the full contents of the web request (TLS termination or load balancing), and therefore the message lives for a time unencrypted in the memory of a system that is not the intended recipient.


Go read a book on basic cryptography. Please.


I have read through Handbook of Applied Cryptography.


The image is watermarked by Gemini, so presumably the author was trying to allay concerns that the important content was fake.


That doesn't answer the question of why you would use an LLM to blue your monitor when there are a thousand ways to do it yourself


Because bluing yourself is messy.

https://www.youtube.com/watch?v=9GYtgFdXCGE


Why not? You send the picture and ask to blur the monitor in plain text. It gives you back the picture with a blurred monitor.

That seems like a very easy way to do the job. What's the issue specifically?


The same reason today's inexperienced programmers depend totally on NextTailVibeJSFlare. It's all they know.


Honestly, if these image models still use diffusion with random seeds at their core, it might actually be more secure than blurring it yourself.


Yeah, essentially this. The irony of wanting to obscure information by submitting it to a model API isn't lost on me, but it was the easiest way I could think of. I wanted some way of making the most important content in my picture the only thing left unblurred.


An attacker with a privileged position on the network, allowing them to eavesdrop on (but not decrypt) traffic, could use a bug like this to identify the device on the network associated with a phone number in Signal. Given nation-state-level adversaries, that seems like a significant privacy issue to me.


Cooperation under duress is still cooperation.


Many consumer devices can be selectively targeted for updates. The entities that control the update servers are themselves subject to the states they operate in. People seem to have forgotten that companies once felt the need to invent warrant canaries to warn that they had received non-public court orders. Presumably they can also be forced not to remove the warrant canary.

Edit: My first read had me interpret backdoor as any undetected means of gaining access to a device/system. I have updated my definition to mean using a flaw intentionally left in the system to gain access. This somewhat negates the need for my previous comment, but I'll leave it for illustrative purposes.


How user-antagonistic changing code on IoT devices should be depends heavily on the threat model for the devices. I'm happy to trust home users to flash their lightbulbs and door locks (though the company might not find that acceptable for its brand reputation if a lock is compromised nonetheless), but I would prefer not to trust the hundreds of IT departments and engineering teams to properly vet the code they flash onto industrial control systems when lives are at stake - centralized authority and accountability, with high visibility into the code base that is flashed to the devices, is what is needed there.


Are you familiar with the academic field of security and the notion of trust in trusted computing? The IoT devices being discussed in the article are for industrial control systems, not necessarily your home lightbulb. The threat model is different. Do you want every municipal power company to be trusted to properly vet the code they are putting on these devices, or do you want to trust the device manufacturer to be the one who can put code on the devices?


Owner is still owner, be it someone who lives in a single-family residence or a municipality.

In my area, tornado sirens are unencrypted and triggered by a simple radio signal that can be recorded and replayed. The cost to add an encrypted radio connection is $100k for the base station and $25k per siren. There are 80+ sirens.

If this were open source, then a simple computer could be retrofitted to do this. But because they are highly proprietary, the county would be on the hook for $2.1M just to defend against an asshole with a HackRF.

FLOSS and open principles should matter to governments as well as individuals. Trading temporary ease for no long-term usability is utterly ridiculous. And you end up with a doorstop in the end either way.
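
For what it's worth, a sketch of the kind of cheap retrofit meant above, assuming a symmetric key shared between the base station and the sirens (stdlib Python only; a real deployment would also need the radio layer, key provisioning, and counters persisted across reboots):

    import hashlib
    import hmac

    SHARED_KEY = b"per-county-secret"  # placeholder; provision securely

    def make_command(counter: int, action: bytes) -> bytes:
        # Base station: authenticate the action plus a monotonic counter.
        msg = counter.to_bytes(8, "big") + action
        tag = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
        return msg + tag

    class Siren:
        def __init__(self) -> None:
            self.last_counter = -1

        def accept(self, frame: bytes) -> bool:
            msg, tag = frame[:-32], frame[-32:]
            expected = hmac.new(SHARED_KEY, msg, hashlib.sha256).digest()
            if not hmac.compare_digest(tag, expected):
                return False                    # forged frame
            counter = int.from_bytes(msg[:8], "big")
            if counter <= self.last_counter:
                return False                    # replayed frame
            self.last_counter = counter
            return True

    siren = Siren()
    frame = make_command(1, b"SOUND")
    assert siren.accept(frame)       # legitimate activation
    assert not siren.accept(frame)   # a HackRF replay of the frame fails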


And who can push new code after the manufacturer's bankruptcy? I've worked in IoT and I'd say the biggest security problems are in this order:

- Devices requiring Internet access for functionality that could have been done locally

- Hardware SDKs that are basically abandoned forks from manufacturers, so IoT companies ship stone-age kernels and device drivers

- The usual stuff: too much complexity, lack of tests, bad documentation, meaning old parts of the software get forgotten (but remain exploitable)

Theoretical waxing about trusted computing and remote attestation does seem disingenuous when problems with non-certified firmware are probably not even in the top 10 in the real world. Notice how the article author mentions some scary attacks but conveniently omits how the attackers actually gained access?


What alternatives come to mind when asking that question? Not being in the PKI world directly, web of trust is what comes to mind, but I'm curious what your question hints at.


I honestly don't know enough about it to have an opinion. I have a vague sense that DNS is the weak point for identity anyway, so couldn't certs just live there instead? But I'm sure there are reasons (historical and practical) that they don't.


While YMMV, a fear response is a choice. You can have all the rational reasons to be afraid (like the bottom of your hierarchy of needs being unmet) and choose to act out of cold rationality rather than fear. Then it becomes a self-fulfilling prophecy - if you can act without fear even when there is justified reason to be afraid, you will be able to easily do so when it isn't justified.


Where I come from, "hav[ing] all the rational reasons to be afraid" and pretending otherwise is called a delusion. I prefer to see the world as it is.


"... is called a delusion". What I am suggesting is not delusion, it is mindfulness and cutting through delusion. When one is presented with something that elicits a fear response (whether the stimulus is rational or not) the goal is to quiet all of the "lizard brain" reactions, and instead formulate a well reasoned response. "Fear is the mind-killer" - while from fiction, still rings true to me - if you react out of fear you will short-circuit internal processes that are far better at long-term reasoning even when at the expense of short-term comfort.


I'm sorry, but that is delusional. It is not possible for humans to forego emotion in favor of logic.


It's really just about giving yourself enough time to think before you respond. That's the entire difference between a reaction and a response. You can use dialectical and cognitive behavioral therapies to help develop the tolerance to do that. Mindfulness and meditative practices like those in zen buddhism have proven helpful to me as well. Perhaps you're taking an extreme interpretation of my using the word "logic" and instead you could use "wise mind" or even just "considered thought" as the response in lieu of an emotional one.

