AR could be a privacy nightmare — or a chance to rethink computing

Augmentation doesn’t just mean adding things to a wearer’s surroundings. It also means letting a computing platform capture and analyze those surroundings without other people’s consent.

Take facial recognition — a looming crisis at the heart of AR. Smartphone apps have used facial recognition for years to tag and sort people, and one of the most intuitive AR glasses applications is simply getting reminded of people’s names (as well as other background information like where you met them). It’s also a potential privacy disaster.

In 2020, the Electronic Frontier Foundation sounded the alarm about AR glasses’ surveillance capabilities. Any company offering facial recognition through glasses, wrote Katitza Rodriguez and Kurt Opsahl, “could have a live audio/visual window into all corners of the world, with the ability to locate and identify anyone at any time.”

So far, AR systems have mostly sidestepped facial recognition. Phone-based platforms like Snapchat, as well as Facebook’s Portal, use face detection, which can pick out facial features to add special effects but can’t match them against a database of specific people. Google refused to approve facial recognition apps for its 2013 Glass Explorer Edition headset, although one unofficial app raised lawmakers’ ire by trying it.

But the EFF’s concern wasn’t premature. Facebook executive Andrew Bosworth reportedly told employees the company is weighing the costs and benefits of facial recognition for its Project Aria glasses, calling it possibly “the thorniest issue” in AR. And outside AR, some people are pushing for a near-total ban on the technology. Researcher Luke Stark has likened facial recognition to “the plutonium of AI,” arguing that any potential upsides are far eclipsed by its social harms. AR is a ready-made testbed for the widespread public use of facial recognition, and by the time any harms become obvious, it might be too late to fix them.

It’s also unclear how AR systems will make money, and what kinds of behavior the resulting business models will encourage. Some companies in the field, like Apple, are traditional hardware sellers. Others, like Facebook and Snap, made their names on ad-supported social networks.

Facebook has claimed it’s not yet looking at business models for its glasses, and Snap says advertising isn’t the only option, promoting the value of experiences like AR-powered retail. But even companies with no advertising background see its power. In one patent filing from 2017, Magic Leap imagined Starbucks detecting when a headset wearer looked at a branded coffee cup, then offering a coupon for the nearest Starbucks store.

Even basic AR applications, like mapping an apartment to place a virtual screen, could potentially gather a huge amount of information. (What’s the size of your living space? Which books are on your shelves? How healthy are the snacks on your kitchen counter?) Without robust privacy protections, it will be incredibly tempting to use that data for ads. And the more companies collect and store, the higher the odds you’ll see it used for something even more invasive — like setting your insurance premiums — or fall prey to a security breach.

Some groups are trying to get the jump on broader AR policy issues. The nonprofit XR Safety Initiative offers policy frameworks for safety and privacy in the industry, drawing on existing laws like Europe’s General Data Protection Regulation. In the corporate space, Facebook Reality Labs announced a set of Responsible Innovation Principles designed to allay fears about trust, privacy, and consent that have dogged the company. It also awarded a series of academic grants to study specific issues in AR, selecting proposals like “Social Tensions with Always-available AR for Accessibility” and “Anticipating Virtual Gossip — What are (Un)Intentional Dark Patterns in a Ubiquitously Augmented Reality?”

Early efforts at consumer AR hardware, though, haven’t navigated these pitfalls particularly well. Google found its 2013 Glass Explorer Edition headset banned from some bars and restaurants because the expensive device was seen as invasive and presumptuous, not futuristic and liberating. That shouldn’t have been surprising: around the same period, University of Washington researchers interviewed people in cafés where someone was wearing a mock AR headset, and the responses were a roughly 50-50 mix of indifference and negativity. (Only a single person had a positive reaction.) But instead of planning around some very predictable anxieties, Google CEO Larry Page called privacy fears “not that big a concern.”


