Why Apple’s walled garden is no match for Pegasus spyware


You will, by now, have heard about Pegasus. It’s the brand name for a family of spyware tools developed by NSO Group, an Israeli outfit of hackers-for-hire that markets and licenses its wares to intelligence agencies, law enforcement and militaries around the world.

An investigation by the Guardian and 16 other media organisations around the world into a massive data leak suggests widespread abuse of NSO Group’s hacking software by government customers. The company insists it is intended for use only against criminals and terrorists but the investigation has revealed that journalists, human rights activists and opposition politicians are also being targeted. Since our phones are increasingly external brains, storing our lives in digital form, a successful deployment of Pegasus can be devastating. Messages, emails, contact details, GPS location, calendar entries and more can be extracted from the device in a matter of minutes.

On Sunday, the Guardian and its media partners began to publish the results of the investigation into the NSO Group, Pegasus, and the people whose numbers appear on the leaked list:

The Guardian and its media partners will be revealing the identities of people whose number appeared on the list in the coming days. They include hundreds of business executives, religious figures, academics, NGO employees, union officials and government officials, including cabinet ministers, presidents and prime ministers.

The list also contains the numbers of close family members of one country’s ruler, suggesting the ruler may have instructed their intelligence agencies to explore the possibility of monitoring their own relatives.

The presence of a number in the data does not reveal whether there was an attempt to infect the phone with spyware such as Pegasus, the company’s signature surveillance tool, or whether any attempt succeeded. The list contains a very small number of landlines and US numbers, which NSO says are “technically impossible” to access with its tools – revealing that some targets were selected by NSO clients even though they could not be infected with Pegasus.

There’s a lot more to read on our site, including the fact that the numbers of almost 200 journalists were identified in the data; links to the killing of Jamal Khashoggi; and the discovery that a political rival of Narendra Modi, the autocratic leader of India, was among those whose number was found in the leaked documents.

But this is a tech newsletter, and I want to focus on the tech side of the story. Chiefly: how the hell did this happen?

The messages are coming from inside the house

Pegasus affects the two largest mobile operating systems, Android and iOS, but I’m going to focus on iOS here for two reasons: one is a technical problem that I’ll get to in a bit, but the other is that, although Android is by far the most widely used mobile OS, iPhones have a disproportionately high market share among many of the demographics targeted by the customers of NSO Group.

That’s partly because they exist predominantly in the upper tiers of the market, with price tags that keep them out of reach for many of the world’s smartphone users but well within reach of the politicians, activists and journalists potentially targeted by governments around the world.

But it’s also because they have a reputation for security. Dating back to the earliest days of the mobile platform, Apple fought to ensure that hacking iOS was hard, that downloading software was easy and safe, and that installing patches to protect against newly discovered vulnerabilities was the norm.

And yet Pegasus has worked, in one way or another, on iOS for at least five years. The latest version of the software is even capable of exploiting a brand-new iPhone 12 running iOS 14.6, the newest version of the operating system available to normal users. More than that: the version of Pegasus that infects those phones is a “zero-click” exploit. There is no dodgy link to click, or malicious attachment to open. Simply receiving the message is enough to become a victim of the malware.

It’s worth pausing to note what is, and isn’t, worth criticising Apple for here. No software on a modern computing platform can ever be bug-free, and as a result no software can ever be fully hacker-proof. Governments will pay big money for working iPhone exploits, and that motivates a lot of unscrupulous security researchers to spend a lot of time trying to work out how to break Apple’s security.

But security experts I’ve spoken to say that there is a deeper malaise at work here. “Apple’s self-assured hubris is just unparalleled,” Patrick Wardle, a former NSA employee and founder of the Mac security developer Objective-See, told me last week. “They basically believe that their way is the best way.”

What that means in practice is that the only thing that can protect iOS users from an attack is Apple – and if Apple fails, there’s no other line of defence.

Security for the 99%

At the heart of the criticism, Wardle accepts, is a solid motivation. Apple’s security model is based on ensuring that, for the 99% – or more – for whom the biggest security threat they will ever face is downloading a malicious app while trying to find an illegal stream of a Hollywood movie, their data is safe. Apps can only be downloaded from the company’s own App Store, where they are supposed to be vetted before publication. When they are installed, they can only access their own data, or data a user explicitly decides to share with them. And no matter what permissions they are given, a whole host of the device’s capabilities are permanently blocked off from them.

But if an app works out how to escape that “sandbox”, then the security model is suddenly inverted. “I have no idea if my iPhone is hacked,” Wardle says. “My Mac computer on the other hand: yes, it’s an easier target. But I can look at a list of running processes; I have a firewall that I can ask to show me what programs are trying to talk to the internet. Once an iOS device is successfully penetrated, unless the attacker is very unlucky, that implant is going to remain undetected.”
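To make Wardle’s point concrete, here is roughly what that visibility looks like on a desktop operating system: a short Python sketch that uses the third-party psutil library to list which running programs currently have connections open to the internet. It is purely illustrative (and on macOS it typically needs to be run as root) – the point is that nothing equivalent is available to an ordinary user inside iOS’s sandbox.

```python
# Illustration of the visibility Wardle describes on a Mac: list running
# processes that currently have a connection to a remote host.
# Requires the third-party psutil package (pip install psutil); on macOS
# this typically needs to be run with sudo.
import psutil


def processes_talking_to_internet() -> dict[int, str]:
    """Map pid -> process name for processes with an open remote connection."""
    names: dict[int, str] = {}
    for conn in psutil.net_connections(kind="inet"):
        if conn.raddr and conn.pid:  # only connections with a remote endpoint
            try:
                names.setdefault(conn.pid, psutil.Process(conn.pid).name())
            except (psutil.NoSuchProcess, psutil.AccessDenied):
                continue
    return names


if __name__ == "__main__":
    for pid, name in sorted(processes_talking_to_internet().items()):
        print(f"{pid:>6}  {name}")
```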

A similar problem exists at the macro scale. An increasingly common way to ensure critical systems are protected is to use the fact that an endless number of highly talented professionals are constantly trying to break them – and to pay them money for the vulnerabilities they find. This model, known as a “bug bounty”, has become widespread in the industry, but Apple has been a laggard. The company does offer bug bounties, but for one of the world’s richest organisations, its rates are pitiful: an exploit of the sort that the NSO Group deployed would command a reward of about $250,000, which would barely cover the cost of the salaries of a team that was able to find it – let alone have a chance of out-bidding the competition, which wants the same vulnerability for darker purposes.

And those security researchers who do decide to try to help fix iPhones are hampered by the very same security model that lets successful attackers hide their tracks. It’s hard to successfully research the weaknesses of a device that you can’t take apart physically or digitally.

In a statement, Apple said:

Apple unequivocally condemns cyberattacks against journalists, human rights activists, and others seeking to make the world a better place. For over a decade, Apple has led the industry in security innovation and, as a result, security researchers agree iPhone is the safest, most secure consumer mobile device on the market. Attacks like the ones described are highly sophisticated, cost millions of dollars to develop, often have a short shelf life, and are used to target specific individuals. While that means they are not a threat to the overwhelming majority of our users, we continue to work tirelessly to defend all our customers, and we are constantly adding new protections for their devices and data.

There are ways round some of these problems. Digital forensics does still work on iPhones – despite, rather than because of, Apple’s stance. In fact, that’s the other reason why I’ve focused on iPhones rather than Android devices here. Because while the NSO Group was good at covering its tracks, it wasn’t perfect. On Android devices, the relative openness of the platform seems to have allowed the company to successfully erase all its traces, meaning that we have very little idea which of the Android users targeted by Pegasus were successfully infected.

But iPhones are, as ever, trickier. There is a file, DataUsage.sqlite, that records what software has run on an iPhone. It’s not accessible to the user of the device, but if you back up the iPhone to a computer and search through the backup, you can find the file. The records of Pegasus had been removed from that file, of course – but only once. What the NSO Group didn’t know, or perhaps didn’t spot, is that every time some software is run, it is listed twice in that file. And so by comparing the two lists and looking for inconsistencies, Amnesty’s researchers were able to spot when the infection landed.
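Amnesty’s published methodology relies on its open-source Mobile Verification Toolkit for this analysis; the snippet below is only a minimal sketch of the underlying idea, and it assumes the two process lists live in tables named ZPROCESS and ZLIVEUSAGE (the names that appear in public forensic write-ups of DataUsage.sqlite), with usage rows pointing back at process rows. A usage record whose matching process entry has vanished is exactly the kind of inconsistency that betrays a clean-up.

```python
# Minimal sketch: flag inconsistencies between the two process lists in a
# copy of DataUsage.sqlite taken from an iPhone backup. Table and column
# names (ZPROCESS, ZLIVEUSAGE, Z_PK, ZHASPROCESS, ZPROCNAME) are assumptions
# drawn from public forensic write-ups, not an official schema.
import sqlite3
import sys


def find_orphaned_usage(db_path: str) -> None:
    con = sqlite3.connect(db_path)
    cur = con.cursor()

    # Every process the OS has recorded, keyed by its primary key.
    processes = dict(cur.execute("SELECT Z_PK, ZPROCNAME FROM ZPROCESS"))

    # Each usage row references a process row; if the process row has been
    # deleted but the usage row survives, the reference dangles.
    for (proc_ref,) in cur.execute("SELECT ZHASPROCESS FROM ZLIVEUSAGE"):
        if proc_ref is not None and proc_ref not in processes:
            print(f"usage record references missing process id {proc_ref} "
                  "(possible trace of a scrubbed entry)")

    con.close()


if __name__ == "__main__":
    find_orphaned_usage(sys.argv[1])
```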

So there you go: the same opacity that makes Apple devices generally safe makes it harder to protect them when that safety is broken. But it also makes it hard for the attackers to clean up after themselves. Perhaps two wrongs do make a right?

A full deck

After almost 20 years of stagnation, the market for video game consoles is seeing a shake-up that threatens to destabilise the triopoly at the top of the sector. Microsoft, Sony and Nintendo have been largely unchallenged in that time, but an announcement from Valve Corporation, the American publisher behind the PC gaming store Steam, could change everything.

Valve’s offering is the Steam Deck, a handheld gaming console with a 7in screen that resembles (and is clearly inspired by) the Nintendo Switch. But unlike the Switch, which is tightly controlled by the notoriously rigid Japanese developer, the Steam Deck will be an open platform: effectively a tiny PC in its own right, with access to the entire library of Steam games from day one.

The Steam Deck, which will launch in December with prices starting at £349, represents a continued shift away from the trend of the last 20 years of games consoles, which was a no-holds-barred push for superior processing power. Arguably, Nintendo had never taken part in that race, but the 2017 release of the Switch confirmed its focus on a different model: deliberately building a less powerful console to instead offer the flexibility to fit in people’s lives the way they want, rather than requiring a big TV, big box, and big electricity bill. The Steam Deck makes a similar bet about PC gaming, hoping that, as much as there’s a market for enormous gaming PCs, there’s an equally big market for a small and simple device which will just play games well enough.

It’s not the only transformation the market is going through. Simultaneously, Microsoft and Sony are betting on a different way to solve the same problem, pushing game streaming as the solution. Rather than shrinking the console, they’re instead proposing to place it in a datacentre miles away, and let you play games on your phone or computer over a fast internet connection instead. It’s an approach already trialled by Google’s Stadia to disastrous effect, but Microsoft in particular seems to be having a better shot at it.

And then, of course, there’s mobile gaming, the thing that shook up the industry in the first place. Candy Crush, Clash of Clans and Pokémon Go might not hit the covers of the enthusiast press, but the mobile games industry’s revenues tell their own story: with $80bn of income in 2020, the sector generated almost exactly half of the entire games industry’s cash. That, in turn, helps explain why companies such as Apple and Google are quite so eager to ensure that money spent on mobile app stores – and in mobile games – must go through their payment processing systems, and pay a hefty cut to the platform holders.

It’s crossovers like this that make me certain you can’t understand the modern technology sector without paying a bit of attention to the games industry. But I’m also aware that a few readers – including, she has informed me, my own mother – can’t stop their eyes from glazing over at the mention of it. So I want to do a quick straw poll: are you interested in gaming at all? If not, are you up for learning about it? Or would you rather I leave gaming to the gamers, and focus here on the other aspects of technology? Remember, you can always hit reply to this email to tell me about this or anything else.

Prove your age

As expected, TechScape readers seem split about the prospects of online age verification, and of ID checks on the internet more generally. Many, like Nic, were worried about the data protection implications:

Once I have given away that data I do not trust where it will be and who has it. And it’s not something I can change.

Martin pointed out that such a system is already widely used in South Korea, where it has had … mixed success:

South Korea … requires everyone have a resident registration number, which used to be collected by a lot of websites at registration to tie them to one user identity. In 2011 there was a big hacking incident and a bunch of fraud … it’s not a reliable system, basically.

But I was surprised by how strongly the supporters of the proposals backed them. For many of you, it’s simply a no-brainer. “I actually think that pseudonyms should be banned,” says Ted, “but if there is an argument for keeping them, then at least the social media companies would know who the people are.”

Jacky, who’s done some volunteer moderation work, complained about the Sisyphean task of “just closing down one fake ID after another. But actually making it work would be a different thing – we really don’t want to end up with any one agency knowing too much.”

Most interesting to me were the various suggestions for other options. Some suggested that more platforms should introduce a “cooling-off period”, restricting accounts for a few days after creation; others proposed a split-tier internet, where you can continue to operate without proving ID, but lose the benefit of the doubt when it comes to auto-moderation and so on.

For what it’s worth, that latter option is close to reality already: Twitter, for instance, lets users automatically hide notifications from others who haven’t verified their ID with a phone number.

The wider techscape

I’m never sure how to view the latest boom in space tourism. On the one hand, it’s very true that these billionaire playboys are funding genuine technical advances. On the other, building mega-yachts requires genuine technical advances as well, but it’s still mostly just an argument for higher taxes. Gizmodo’s Dharna Noor lays out the case against:

Neither Bezos nor Branson has been particularly forthcoming about the environmental impact of their flights. But then that’s precisely the problem. The initial climate impact of an individual space tourist flight may be comparatively small, but they will add up. And each flight signals something more ominous to come.

Technically, much online tracking is “anonymised”: identifying details, such as IP addresses or usernames, are shorn off the dataset before it is sold on or used for ad targeting, and privacy is thus preserved. But in practice, there’s nothing stopping people from going through that data and “deanonymising” it – and, Vice’s Joseph Cox reports, there’s a whole industry for just that:

They do this by linking mobile advertising IDs (MAIDs) collected by apps to a person’s full name, physical address, and other personal identifiable information (PII). Motherboard confirmed this by posing as a potential customer to a company that offers linking MAIDs to PII.

Obviously my colleagues who worked tirelessly on the Pegasus Project should be proud of their work but I’m afraid this is the story of the week for me:

Classified details of the British army’s main battle tank, Challenger 2, have been leaked online after a player in a tank battle video game disputed its accuracy.

The player, who claimed to have been a real-life Challenger 2 tank commander and gunnery instructor, disputed the design of the tank in the popular combat video game War Thunder, arguing it needed changing. He claimed game designers had failed to “model it properly”.

To support his argument the player posted pages from the official Challenger 2 Army Equipment Support Publication – a manual and maintenance guide.


