As mentioned in the comments, plaintext keys aren’t inherently bad, because they’re necessary: you have to have at least one plaintext key in order to be able to use encryption at all.

  •  Riskable   ( @riskable@programming.dev ) 

    This is an “it’s turtles all the way down!” problem. An application has to be able to store its encryption keys somewhere. You can encrypt your encryption keys, but then where do you store that key? Ultimately any application will need access to the plaintext key in order to function.

    On servers the best practice is to store the encryption keys somewhere that isn’t on the server itself, such as a networked Hardware Security Module (HSM), but literally any location that isn’t physically on/in the server is good enough. Some Raspberry Pi attached to the network in the corner of the data center would be nearly as good, because the attack you’re protecting against with this kind of encryption is someone walking out of the data center with your server (and then decrypting the data).
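
    As a rough sketch of that idea (not how any particular product does it): the server keeps only a wrapped data key on its own disk and fetches the key-encryption key from a separate box on the network at startup. The endpoint, file name, and blob layout below are all made up for the illustration:

    ```ts
    import { createDecipheriv } from "node:crypto";
    import { readFile } from "node:fs/promises";

    // Hypothetical: the key-encryption key (KEK) lives on a separate machine
    // (HSM, Raspberry Pi, ...), so walking off with this server's disks yields
    // only ciphertext.
    async function loadDataKey(): Promise<Buffer> {
      const resp = await fetch("http://keybox.internal:8200/kek"); // made-up endpoint
      const kek = Buffer.from(await resp.arrayBuffer());           // 32-byte KEK

      // wrapped-data-key.bin = IV (12 bytes) || GCM tag (16 bytes) || ciphertext
      const blob = await readFile("wrapped-data-key.bin");
      const iv = blob.subarray(0, 12);
      const tag = blob.subarray(12, 28);
      const ciphertext = blob.subarray(28);

      const decipher = createDecipheriv("aes-256-gcm", kek, iv);
      decipher.setAuthTag(tag);
      return Buffer.concat([decipher.update(ciphertext), decipher.final()]);
    }
    ```

    Note the turtles: the unwrapped data key still ends up in plaintext in the server’s memory, which is unavoidable if the server is going to use it.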

    With a device like a phone you can’t use a networked HSM since your phone will be carried around with you everywhere. You could store your encryption keys out on the Internet somewhere but that actually increases the attack surface. As such, the encryption keys get stored on the phone itself.

    Phone OSes include tools like encrypted storage locations for things like encryption keys but realistically they’re no more secure than storing the keys as plaintext in the application’s app-specific store (which is encrypted on Android by default; not sure about iOS). Only that app and the OS itself have access to that storage location so it’s basically exactly the same as the special “secure” storage features… Except easier to use and less likely to be targeted, exploited, and ultimately compromised because again, it’s a smaller attack surface.

    If an attacker gets physical access to your device you must assume they’ll have access to everything on it unless the data is encrypted and the key for that isn’t on the phone itself (e.g. it’s derived from your thumbprint or your PIN). In that case your effective encryption key is your thumb(s) and/or PIN, since the Signal app’s encryption keys are already encrypted on the filesystem.
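
    In practice “the key is your PIN” usually means running the PIN through a deliberately slow key-derivation function rather than a plain hash. A minimal sketch with Node’s built-in scrypt (the parameters and salt handling here are illustrative, not anyone’s actual settings):

    ```ts
    import { scryptSync, randomBytes } from "node:crypto";

    // Derive a 256-bit wrapping key from a user PIN. The salt is stored next to
    // the encrypted data; the PIN itself never touches the disk.
    function keyFromPin(pin: string, salt: Buffer): Buffer {
      // N=2^17, r=8, p=1: deliberately slow so every guess is expensive.
      return scryptSync(pin, salt, 32, { N: 2 ** 17, r: 8, p: 1, maxmem: 256 * 1024 * 1024 });
    }

    const salt = randomBytes(16);
    const wrappingKey = keyFromPin("4821", salt); // example PIN, weak on its own
    ```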

    Going full circle: You can always further encrypt something or add an extra step to accessing encrypted data but that just adds inconvenience and doesn’t really buy you any more security (realistically). It’s turtles all the way down.

        • This particular scenario involves the MacOS desktop app, not the phone app. The link is showing just an image for me - I think it’s supposed to be to https://stackdiary.com/signal-under-fire-for-storing-encryption-keys-in-plaintext/

          That said, let’s compare how it works on the phone to how it could work on MacOS and how it actually works on MacOS. In each scenario, we’ll suppose you installed an app that has hidden malware - we’ll call it X (just as a placeholder name) - and compare how much data that app has access to. Access to session data allows the app to spoof your client and send+receive messages.

          On the phone, your data is sandboxed. X cannot access your Signal messages or session data. ✅ Signal may also encrypt the data and store an encryption key in the database, but this wouldn’t improve security except in very specific circumstances (basically it would mean that if exploits were being used to access your data, you’d need more exploits if the key were in the keychain). Downside: On iOS at least, you also don’t have access to this data.

          On MacOS, it could be implemented using sandboxed data. Then, X would not be able to access your Signal messages or spoof your session unless you explicitly allowed it to (it could request access to it and you would be shown a modal). ✅ Downside: the UX to upload attachments is worse.

          It could also be implemented by storing the encryption key in the keychain instead of in plaintext on disk. Then, X would not be able to access your Signal messages and session data. It might be able to request access - I’m not sure. As a user, you can access the keychain, but you have to re-authenticate. ✅ Downside: None. (A rough sketch of what this could look like is below, after this comment.)

          It’s actually implemented by storing the encryption key in plaintext, collocated with the encrypted database file. X can access your messages and session data. ❌

          Is it foolproof? No, of course not. But it’s an easy step that would probably take an hour of dev time to refactor. They’re even already storing a key, just not one that’s used for this. And this has been a known issue that they’ve refused to fix for several years. Because of their hostile behavior towards forks, the FOSS community also cannot distribute a hardened version that fixes this issue.
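
          To make the keychain option above concrete: in an Electron app, one common way to do it is the keytar module, which stores secrets in the macOS Keychain (and the platform equivalents elsewhere). The service/account names and the idea of keeping the SQLCipher key there are just an illustration of the approach, not Signal’s actual code:

          ```ts
          import * as keytar from "keytar";          // native module backed by the OS keychain
          import { randomBytes } from "node:crypto";

          const SERVICE = "ExampleMessenger";        // hypothetical names for the sketch
          const ACCOUNT = "sqlcipher-key";

          // Fetch the database key from the keychain, generating it on first run.
          // Nothing ever needs to be written in plaintext next to the database file.
          async function getDatabaseKey(): Promise<string> {
            const existing = await keytar.getPassword(SERVICE, ACCOUNT);
            if (existing) return existing;

            const fresh = randomBytes(32).toString("hex");
            await keytar.setPassword(SERVICE, ACCOUNT, fresh);
            return fresh;
          }
          ```

          Newer Electron versions also ship a built-in safeStorage API (encryptString/decryptString) that defers to the Keychain on macOS, which gives a similar property without a native dependency.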

  • Restricting what files an app can read within a single user account is why sandboxing is useful. In theory it limits the scope of a vulnerability in an app to only the files that app can read (unless there is a sandbox escape). Android instead prevents apps from accessing other apps’ files by having each app run as a separate user.

    One way to keep the encryption keys encrypted at rest is to require the login password (or another password) to open the app, and use it to encrypt the keys. That said, if an adversary can read Signal’s data, they can almost certainly just replace Signal with a password-stealing version.
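
    Concretely, “use the login password to encrypt the keys” would be envelope encryption: derive a wrapping key from the password with a slow KDF and use it to seal the real database key, so only the wrapped form ever sits on disk. Just a sketch, with a made-up blob layout:

    ```ts
    import { createCipheriv, createDecipheriv, randomBytes, scryptSync } from "node:crypto";

    // Seal the real database key under a password-derived key (envelope encryption).
    // Blob layout (made up): salt (16) || IV (12) || GCM tag (16) || ciphertext.
    function wrapKey(dbKey: Buffer, password: string): Buffer {
      const salt = randomBytes(16);
      const kek = scryptSync(password, salt, 32);   // password -> wrapping key
      const iv = randomBytes(12);
      const cipher = createCipheriv("aes-256-gcm", kek, iv);
      const ct = Buffer.concat([cipher.update(dbKey), cipher.final()]);
      return Buffer.concat([salt, iv, cipher.getAuthTag(), ct]);
    }

    function unwrapKey(blob: Buffer, password: string): Buffer {
      const salt = blob.subarray(0, 16);
      const iv = blob.subarray(16, 28);
      const tag = blob.subarray(28, 44);
      const ct = blob.subarray(44);
      const kek = scryptSync(password, salt, 32);
      const decipher = createDecipheriv("aes-256-gcm", kek, iv);
      decipher.setAuthTag(tag);
      return Buffer.concat([decipher.update(ct), decipher.final()]);
    }
    ```

    The catch is the one noted above: the wrapped key is only as strong as the password, and an adversary who can replace the app can capture the password anyway.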

    • But can you trust a user to pick a difficult-to-break password? They will likely pick something that’s simple to remember but that isn’t a good password.

      Then we’re just back to essentially having a plaintext key, because if the attacker has a good dictionary, the password will be easy to crack.
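
      To put a rough number on that (purely illustrative rates): even if the key is wrapped with a deliberately slow KDF, a small keyspace falls quickly.

      ```ts
      // Assume ~10 KDF evaluations per second against a slow KDF (illustrative).
      const guessesPerSecond = 10;

      const pinSpace = 10 ** 4;       // every 4-digit PIN
      const dictionary = 100_000;     // a modest common-password list

      console.log(pinSpace / guessesPerSecond / 60);      // ~17 minutes to exhaust PINs
      console.log(dictionary / guessesPerSecond / 3600);   // ~2.8 hours for the dictionary
      ```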