Wysteria

Signal Isn't as Secure as You Think*

Well, not exactly. While Signal itself and the Signal Protocol it operates on are secure, there are other things to worry about when considering privacy. That’s right, we’re talking threat modeling this time around.

Threat modeling is the practice of working out what you most need to keep private and what could realistically go wrong. Identifying those potential attack vectors lets you address the weaknesses before they can ever be exploited in the first place. A basic example: if you wanted to address advertisers and their data collection, you might read up on how advertisers target their ads, then take steps to prevent it by installing something to block trackers. OWASP has some good articles on this that go much more in depth; they might be worth a read for those interested in cybersecurity.
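To make the idea concrete, here's a minimal sketch of a threat model as plain data: what you want to protect, who from, and what you'd do about it. The entries and field names are purely illustrative, not any formal methodology.

```python
# An illustrative threat model: each entry pairs an asset with an
# adversary, an attack vector, and a candidate mitigation.
threat_model = [
    {
        "asset": "browsing habits",
        "adversary": "ad networks",
        "attack_vector": "third-party trackers embedded in web pages",
        "mitigation": "install a tracker-blocking browser extension",
    },
    {
        "asset": "message contents",
        "adversary": "network eavesdroppers",
        "attack_vector": "unencrypted transport",
        "mitigation": "use an end-to-end encrypted messenger like Signal",
    },
]

for entry in threat_model:
    print(f"{entry['asset']}: defend against {entry['adversary']} "
          f"by choosing to {entry['mitigation']}")
```

The point isn't the data structure; it's that writing threats down forces you to name the adversary before picking a mitigation.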

But I’m here to talk about something you may not have considered. Yes, time to go back to Signal. Signal itself is secure, so why give it any more thought? Well, let’s put it like this: a chain is only as strong as its weakest link. Signal is only one link in your chain; you also have to consider things like your device or even your physical security.

For most people, the most relevant link is the device. If you cannot secure your device against the threats you’re concerned about, it’s not a stretch to assume that everything on it can be compromised. It may surprise you, but you might not even be able to trust your input method. There have been instances of data collected by virtual keyboards getting leaked. For something like that to happen, the keyboard first has to be collecting user data, and why wouldn’t it? Selling user data is extremely profitable, and a keyboard gets it straight from the horse’s mouth. If you want to protect yourself against this, maybe consider switching up what virtual keyboard you use on your phone.
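This toy model (hypothetical classes, not any real IME API) shows why the keyboard layer is such a problem: it sees every keystroke in plaintext before the messaging app gets a chance to encrypt anything.

```python
# Toy model: the virtual keyboard sits upstream of the app, so
# end-to-end encryption cannot protect what the keyboard captures.

class LeakyKeyboard:
    """A hypothetical virtual keyboard that logs everything it sees."""
    def __init__(self):
        self.captured = []

    def type_text(self, text):
        self.captured.append(text)  # the keyboard sees raw plaintext first...
        return text                 # ...then hands it to the foreground app

class SecureMessenger:
    """Stand-in for an E2E-encrypted app; encryption happens too late."""
    def send(self, plaintext):
        return f"<encrypted:{len(plaintext)} bytes>"

keyboard = LeakyKeyboard()
app = SecureMessenger()
ciphertext = app.send(keyboard.type_text("meet at noon"))
# The wire only ever carries ciphertext, yet keyboard.captured
# still holds the full message.
```

The design lesson: no amount of transport or message encryption helps against a component that observes your input before encryption happens.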

Another possible weak point is notifications. It’s important to understand how they work at a high level. On Android, a service runs in the background; apps that want to show notifications connect to it, and the service periodically checks with those apps for anything new to render, then renders it. To put it simply, a piece of software sits between your apps and the notifications you see, and that middleware can read the full contents of every notification. Some people might take issue with this, given that on Android the default (and effectively only) option is Google Play services, which means that if you have an Android device, Google sees 100% of the notifications you get. If you have an iPhone, you’re probably not much better off. While I can’t say whether Apple collects and sells this data, both Apple and Google have cooperated with the US government and handed over notification data. Your threat model may be different, but regardless of what you need to defend against, it’s undeniable that both major phone operating systems have access to your notification data.
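The pipeline above can be sketched as a toy model (the class and method names are illustrative, not the real Android API): every app routes its notifications through one central service, so that service necessarily sees the contents of all of them before anything reaches your screen.

```python
# Toy model of the notification pipeline: a single middleware service
# (think Google Play services) sits between every app and the display.

class NotificationService:
    """Hypothetical stand-in for the OS-level notification middleware."""
    def __init__(self):
        self.seen = []  # everything that ever passed through the middleware

    def post(self, app_name, title, body):
        # The service gets full plaintext access before rendering.
        self.seen.append((app_name, title, body))
        self.render(title, body)

    def render(self, title, body):
        print(f"[{title}] {body}")

service = NotificationService()
service.post("Signal", "Alice", "Are we still on for tonight?")
service.post("Bank", "Alert", "Your balance dropped below $100")
# service.seen now holds both messages in full, including the Signal
# message that was end-to-end encrypted while in transit.
```

This is also why some messaging apps offer a setting to strip message contents from notifications: the middleware then only sees "New message" instead of the text itself.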

Realistically, how can you mitigate this? The best option would be to not use a phone at all, given that phones are always connected and packed with sensors that collect data (GPS, microphone, camera, etc.), but that isn’t feasible for most people. Luckily, the core of Android is open source, so privacy-oriented forks of it such as GrapheneOS provide more private alternatives. At the end of the day, though, keep in mind who you’re trying to protect yourself against. If you just want to reduce the information advertisers collect, using a privacy-respecting keyboard and having your messaging app hide message contents in notifications might be enough. But if you’re worried about other, larger adversaries, go get advice from someone more qualified than a random person on the internet.