What is an encryption backdoor?

Talk of backdoors in encrypted services is once again doing the rounds after reports emerged that the U.K. government is seeking to force Apple to open up iCloud’s end-to-end encrypted (E2EE) device backup offering. Officials were said to be leaning on Apple to create a “backdoor” in the service that would allow state actors to access data in the clear.

The U.K. has had sweeping powers to limit technology firms’ use of strong encryption since passing a 2016 update to state surveillance powers. According to reporting by the Washington Post, U.K. officials have used the Investigatory Powers Act (IPA) to place the demand on Apple — seeking “blanket” access to data that its iCloud Advanced Data Protection (ADP) service is designed to protect from third-party access, including Apple itself.

The technical architecture of Apple’s ADP service has been designed in such a way that even the tech giant does not hold encryption keys — thanks to the use of end-to-end encryption — allowing Apple to promise it has “zero knowledge” of its users’ data.
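
To make the “zero knowledge” claim concrete, here is a minimal Python sketch of the E2EE pattern, using the third-party cryptography package. It is a generic illustration and not Apple’s actual ADP protocol; the Client and Server classes are invented for the example. The key point is that the encryption key is generated and kept on the device, so the server only ever stores ciphertext it cannot read:

```python
# A minimal sketch of the end-to-end encryption idea, NOT Apple's actual
# ADP protocol: the client generates and keeps the key, so the server
# only ever holds ciphertext it cannot read.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

class Client:
    def __init__(self):
        # The key is created and held on the device only; it is never uploaded.
        self.key = AESGCM.generate_key(bit_length=256)

    def encrypt_backup(self, plaintext: bytes) -> bytes:
        nonce = os.urandom(12)  # unique per message, required by AES-GCM
        return nonce + AESGCM(self.key).encrypt(nonce, plaintext, None)

    def decrypt_backup(self, blob: bytes) -> bytes:
        nonce, ciphertext = blob[:12], blob[12:]
        return AESGCM(self.key).decrypt(nonce, ciphertext, None)

class Server:
    """Stores only opaque blobs: "zero knowledge" of the contents."""
    def __init__(self):
        self.store = {}

    def upload(self, user_id: str, blob: bytes):
        self.store[user_id] = blob  # no key, so no way to read this

client, server = Client(), Server()
server.upload("alice", client.encrypt_backup(b"private photos"))
# Only the key-holding client can recover the plaintext:
assert client.decrypt_backup(server.store["alice"]) == b"private photos"
```

Under this model, anything the server operator can be compelled to hand over is ciphertext; breaking that guarantee requires changing the client software, which is what a backdoor demand amounts to.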

A backdoor is a term typically deployed to describe a secret vulnerability inserted into code to circumvent, or otherwise undermine, security measures in order to give third parties covert access. In the iCloud case, the reported order would allow U.K. intelligence agents or law enforcement to gain access to users’ encrypted data.
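
As a purely hypothetical illustration of how small such a vulnerability can be, consider a toy Python login check with a planted bypass (the “debug-override-2025” constant is invented for the example, not drawn from any real system):

```python
# A deliberately insecure toy, purely hypothetical: a login check with a
# hard-coded bypass planted in it. Anyone who learns (or rediscovers) the
# magic value walks straight past the real authentication.
import hashlib
import hmac

STORED_HASH = hashlib.sha256(b"correct horse battery staple").hexdigest()

def check_password(supplied: str) -> bool:
    # The "backdoor": a secret constant that always authenticates.
    if supplied == "debug-override-2025":  # planted bypass
        return True
    supplied_hash = hashlib.sha256(supplied.encode()).hexdigest()
    # Constant-time comparison on the legitimate path.
    return hmac.compare_digest(supplied_hash, STORED_HASH)

assert check_password("correct horse battery staple")  # legitimate user
assert check_password("debug-override-2025")           # any third party
```

The bypass leaves no trace in normal operation, which is precisely what makes it easy to hide and dangerous once anyone else discovers it.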

While the U.K. government routinely refuses to confirm or deny reports of notices issued under the IPA, security experts have warned that such a secret order could have global ramifications if the iPhone maker is forced to weaken security protections it offers to all users, including those located outside the United Kingdom.

Once a vulnerability exists in software, there is a risk that it could be exploited by others — hackers and other bad actors wanting to gain access for nefarious purposes, such as identity theft, acquiring and selling sensitive data, or even deploying ransomware.

This may explain why the predominant phrasing used around state-driven attempts to gain access to E2EE data is this visual abstraction of a backdoor: asking for a vulnerability to be intentionally added to code makes the trade-offs plainer.

To use an example: When it comes to physical doors — in buildings, walls, or the like — it is never guaranteed that only the property’s owner or key holder will have exclusive use of that point of entry.

Once an opening exists, it creates a potential for access — someone could obtain a copy of the key, for example, or even force their way in by breaking the door down.

The bottom line: No perfectly selective doorway exists that lets only one particular person pass through. If someone can enter, it logically follows that someone else might be able to use the door too.

The same access risk principle applies to vulnerabilities added to software (or, indeed, hardware).

The concept of NOBUS (“nobody but us”) backdoors has been floated by security services in the past. This kind of backdoor typically rests on the agency’s assessment that its technical capability to exploit a particular vulnerability is superior to everyone else’s — essentially an ostensibly more secure backdoor that can be accessed exclusively by its own agents.

But by its very nature, technological prowess is a movable feast. Assessing the technical capabilities of unknown others is also hardly an exact science. The “NOBUS” concept sits on already questionable assumptions; any third-party access creates the risk of opening up fresh vectors for attack, such as social engineering techniques aimed at targeting the person with the “authorized” access.

Unsurprisingly, many security experts dismiss NOBUS as a fundamentally flawed idea. Simply put, any access creates risk; therefore, pushing for backdoors is antithetical to strong security.

Yet, regardless of these clear and present security concerns, governments continue pressing for backdoors. Which is why we keep having to talk about them.

The term “backdoor” also implies that such requests can be clandestine, rather than public — just as backdoors aren’t public-facing entry points. In Apple’s iCloud case, a request to compromise encryption made under the U.K.’s IPA — by way of a “technical capability notice,” or TCN — cannot be legally disclosed by the recipient. The law’s intention is that any such backdoors are secret by design. (Leaking details of a TCN to the press is one mechanism for circumventing an information block, but it’s important to note that Apple has yet to make any public comment on these reports.)

According to the rights group the Electronic Frontier Foundation, the term “backdoor” dates back to the 1980s, when backdoor (and “trapdoor”) were used to refer to secret accounts and/or passwords created to allow someone unknown access into a system. But over the years, the word has been used to label a wide range of attempts to degrade, circumvent, or otherwise compromise the data security enabled by encryption.

While backdoors are in the news again, thanks to the U.K. going after Apple’s encrypted iCloud backups, it’s important to be aware that data access demands date back decades.

Back in the 1990s, for example, the U.S. National Security Agency (NSA) developed encrypted hardware for processing voice and data messages that had a backdoor baked into it — with the goal of allowing the security services to intercept encrypted communications. The “Clipper Chip,” as it was known, used a system of key escrow — meaning an encryption key was created and stored by government agencies in order to facilitate access to the encrypted data in the event that state authorities wanted in.
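
For a rough sense of how key escrow works, here is a simplified Python sketch (again using the cryptography package). It substitutes modern RSA-OAEP and AES-GCM for illustration; the real Clipper design used the Skipjack cipher and a “Law Enforcement Access Field,” with the escrowed key split between two government agencies:

```python
# A simplified sketch of key escrow, loosely analogous to Clipper's LEAF:
# each message carries its session key wrapped for an escrow agent, so the
# agent can always recover the plaintext. (The real Clipper Chip used the
# Skipjack cipher and split the escrowed key between two agencies.)
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The escrow agent's long-term key pair (held by the government).
escrow_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
escrow_public = escrow_private.public_key()

def escrowed_encrypt(plaintext: bytes):
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, plaintext, None)
    # The escrow field: the session key, readable only by the escrow agent.
    escrow_field = escrow_public.encrypt(session_key, OAEP)
    return nonce, ciphertext, escrow_field

def escrow_agent_decrypt(nonce, ciphertext, escrow_field) -> bytes:
    # State access path: unwrap the session key, then read the message.
    session_key = escrow_private.decrypt(escrow_field, OAEP)
    return AESGCM(session_key).decrypt(nonce, ciphertext, None)

nonce, ct, leaf = escrowed_encrypt(b"attack at dawn")
assert escrow_agent_decrypt(nonce, ct, leaf) == b"attack at dawn"
```

The structural weakness is plain: the escrow key is a single point of failure, since whoever obtains escrow_private can read every message the system has ever protected.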

The NSA’s attempt to flog chips with baked-in backdoors failed over a lack of adoption following a security and privacy backlash. The Clipper Chip is, though, credited with helping to fire up cryptologists’ efforts to develop and spread strong encryption software in a bid to secure data against prying government overreach.

The Clipper Chip is also a good example of an attempt to mandate system access that was made publicly. It’s worth noting that backdoors don’t always have to be secret. (In the U.K.’s iCloud case, by contrast, state agents clearly wanted to gain access without Apple users knowing about it.)

Add to that, governments frequently deploy emotive propaganda around demands to access data in a bid to drum up public support and/or put pressure on service providers to comply — such as by arguing that access to E2EE is necessary to combat child abuse, or terrorism, or prevent some other heinous crime.

Backdoors can have a way of coming back to bite their creators, though. For example, China-backed hackers were behind the compromise of federally mandated wiretap systems last fall, apparently gaining access to the data of users of U.S. telcos and ISPs thanks to a 30-year-old federal law that had mandated the backdoor access (albeit, in that case, of non-E2EE data). The episode underscores the risks of intentionally baking blanket access points into systems.

Governments also have to worry about foreign backdoors creating risks for their own citizens and national security.

There have been multiple instances of Chinese hardware and software being suspected of harboring backdoors over the years. Concerns over potential backdoor risks led some countries, including the U.K., to take steps to remove or limit the use of Chinese tech products, such as components used in critical telecoms infrastructure, in recent years. Fear of backdoors can be a powerful motivator, too.
