The writer is chief executive of Post-Quantum
My firm developed the world’s first “quantum-safe” instant messaging system in 2014. This means not even a mature quantum computer with code-breaking capabilities can decipher the encrypted text. When we made the system available through the App Store, it seemed like a victory for privacy at a time when the exploitation of user data was deemed out of control. The reality proved vastly more complex when our app appeared on a list of technical tools recommended by Islamic State.
We were aghast. A tool we had developed to help people assert their right to online privacy, to conduct their lives away from the prying eyes of Big Tech and to uphold individual liberty had been co-opted by an organisation committed to destroying those very same values. Even though we were achieving healthy daily downloads, the eventual decision to switch it off was relatively easy. We would not put profit before human lives. This experience was a microcosm of the debate that rumbles on to this day: what should be the trade-off between privacy and the protection of citizens from harm?
The issue was raised by UK home secretary Priti Patel last month when she called for stricter regulation of encrypted messaging, which criminals have historically exploited to abuse children. End-to-end encryption makes it extremely difficult for tech companies to monitor communications and intercept child abuse imagery.
Privacy rights groups counter that civil liberties may be eroded if access to encrypted messages is granted and our lives are opened to even greater supervision. It’s a difficult situation and one that must be handled with the utmost care. Should messaging platforms operate if the world’s governments are unable to intercept messages? Yet governments are tasked with protecting citizens and therefore quite naturally seek to deploy technology to identify the threat of terrorism or harm.
Clashes are inevitable. In India, Facebook-owned WhatsApp is suing the government over new “traceability” regulations that would force it to break end-to-end encryption. The rules, the company said, pave the way for “a new form of mass surveillance”. The potential for the misuse of government-sanctioned backdoors in encryption must not be overlooked. A backdoor for one is a backdoor for all, whether a government agency, a hacker or a malicious nation. What is needed is a “side door”.
Since 2014 we have worked on a middle-ground solution that ensures total security while allowing data access based on pre-agreed policies. The technology we use is “threshold cryptography” which works by chopping a secret into pieces so that an action, like accessing a message, is only possible if enough pieces are reassembled. A nuclear missile terminal where three keys are needed to launch is a similar principle.
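The principle can be illustrated with Shamir’s secret sharing, the classic “k-of-n” threshold scheme. The sketch below is illustrative only, assuming a demo-sized prime field and hypothetical helper names; it is not the firm’s implementation. A secret is split into five shares, and any three suffice to reconstruct it:

```python
# Minimal sketch of k-of-n threshold secret sharing (Shamir's scheme).
# Names, the prime, and parameters here are illustrative assumptions.
import random

PRIME = 2**127 - 1  # a Mersenne prime large enough for demo secrets


def split_secret(secret, n, k):
    """Split `secret` into n shares; any k of them can reconstruct it."""
    # Random polynomial of degree k-1 whose constant term is the secret.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares


def reconstruct(shares):
    """Recover the secret from >= k shares via Lagrange interpolation at x=0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret


shares = split_secret(123456789, n=5, k=3)
print(reconstruct(shares[:3]))   # any three shares recover the secret
print(reconstruct(shares[1:4]))  # a different trio works equally well
```

Fewer than three shares reveal nothing about the secret, which is what allows control to be divided between parties such as a government, a company and a court.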
Threshold cryptography offers a mechanism to govern legal access to encrypted messages — a “side door” that should only be accessed if multiple parties such as governments, companies and preferably courts each provide their key. This divides control and responsibility while limiting the ability of rogue actors to stroll through a backdoor.
Years of work went into our messaging system. Yet in the end, we removed it from the App Store and now only provide it to financial services companies and western governments for carefully selected and compliant internal use, such as secure messaging on the trading floor. We were not able to brush off the fear that an organisation such as Islamic State might use the technology.
In hindsight, we should not have been the ones to make the call. With several billion users on social media and messaging apps that increasingly use encryption, regulation is not easy. But at present it is the creators of these technologies who set the terms and limits of their use. Whether this is optimal, or ethical, should be scrutinised and debated. Private companies should not be responsible alone.