The EU’s Ambitious AI Regulations: Increasing Trust or Stifling Progress?


European Union (EU) officials have proposed new rules that could restrict and even ban some uses of artificial intelligence (AI) within its borders. That could include some technology developed by U.S. and Chinese-based tech giants. The rules would be the most significant international effort to regulate the use of AI to date.

The Coordinated Plan on Artificial Intelligence 2021 Review, put forth by the 27-nation bloc, could set a new standard for technology regulation.

If passed, the rules could affect how facial recognition, autonomous vehicles, and even the algorithms employed in online advertising are used across the EU. They could also limit the use of AI and machine learning in automated hiring, school applications and credit scoring. AI would be banned outright in situations deemed “risky,” including government “social scoring” systems in which individuals are judged on their behavior.

Fundamental Rights Upheld?

This could be the first-ever legal framework on AI, and the EU has said the new Coordinated Plan with Member States would guarantee the safety and fundamental rights of people and business, while also strengthening AI uptake, investment and innovation across the EU.

“With these landmark rules, the EU is spearheading the development of new global norms to make sure AI can be trusted,” Margrethe Vestager, the European Commission’s executive vice president for the digital age, said in a statement. “By setting the standards, we can pave the way to ethical technology worldwide and ensure that the EU remains competitive along the way.”

The EU has maintained that the new AI regulations would ensure that Europeans can trust what AI has to offer, and would create flexible rules to address the specific risks posed by AI-based systems. AI systems considered a clear threat to the safety, livelihoods and rights of people would be banned.

High-risk uses of AI would include critical infrastructure, including transport, where failures could put the lives and health of citizens at risk; educational and vocational training, such as the scoring of exams; and law enforcement, where AI could interfere with people’s fundamental rights. In those cases, high-risk AI systems would be subject to strict obligations before they could be put on the market, including logging of activity to ensure traceability of results, high-quality datasets, a high level of security and accuracy, and appropriate human oversight measures to minimize risk.

“AI is a means, not an end,” explained EU Commissioner for Internal Market Thierry Breton.

“It has been around for decades but has reached new capacities fueled by computing power,” added Breton. “This offers immense potential in areas as diverse as health, transport, energy, agriculture, tourism or cyber security. It also presents a number of risks. Today’s proposals aim to strengthen Europe’s position as a global hub of excellence in AI from the lab to the market, ensure that AI in Europe respects our values and rules, and harness the potential of AI for industrial use.”

Future of AI

This isn’t the first time the EU has attempted to regulate new technology more aggressively than any other jurisdiction. Its efforts have mainly focused on privacy, including search results and how tech firms can use personal information.

Now the bloc is turning to the developing technologies of AI and machine learning, and the question is whether such a hard line could limit the efforts of the tech giants. Or is this the best course of action to ensure privacy and security, and to maintain a fair and level playing field for all involved?

“First and foremost, allowing technologies to be developed unilaterally without any oversight is an effective vote for market dominating behavior,” technology industry analyst Charles King of Pund-IT, told ClearanceJobs.

“We’ve seen it happen among tech industry behemoths, including Facebook and Google, and in countries, like China and government agencies in the U.S. and elsewhere,” King added.

The EU holds all the cards right now, and the tech firms will have to play by its rules – or ignore the European market. The latter really isn’t an option.

“Along with trying to place some restraints on potentially damaging behavior, the EU is also acting from a position of historic strength,” explained King. “The organization has aggressively pursued businesses that it believes are acting against the interests of consumers and markets, and has passed regulations – including GDPR – that have successfully influenced global businesses and markets.”

Will It Stifle Innovation?

One of the biggest concerns is whether such strict regulations would simply make it too hard for businesses to play by the EU’s rules. AI could be an area where regulation seems stifling at first but ultimately ensures that the technology remains under meaningful control.

“The writer and futurist Arthur C. Clarke famously asserted, ‘Any sufficiently advanced technology is indistinguishable from magic,’” said Jim Purtilo, associate professor of computer science at the University of Maryland. “If that’s the definition, then today’s decision systems based on artificial intelligence technologies surely qualify as magic. They are subtle, tremendously complex methods which defy full explanation as to how a given result was computed.”

Part of the issue is understanding exactly what AI entails. While it is easy to imagine the science-fiction version of a “thinking computer” or android, in reality it is just ever-more complicated algorithms.

“There has always been some aura of mystery to it. In some sense, AI is the area that stops being AI once we understand it,” added Purtilo.

“Many fairly ordinary forms of computing – for example logic systems and computer memory – once fell under a heading of AI research,” he noted. “What differs today is the scale of decisions that people will leave to machines. It used to be that at least programmers had an understanding of how their programs computed a result, but with AI the programmers generally can’t work back to justify how an outcome was reached. A program’s accuracy is not a big deal when all it is doing is tagging photos of your friends in an album on your phone, but it becomes a very big deal for individuals who fall under suspicion of police based on wide deployment of facial recognition technology.”

Is the EU Going Too Far?

Given this understanding of what is, and to some extent what is not, AI, the question then becomes: is the EU taking its regulation of AI too far?

“I’m not particularly afraid of computer programs, but I’m terrified of the bureaucrats who use them thinking they escape responsibility by pretending to be mere servants of science which, by virtue of AI methods, is somehow settled even if not explained,” warned Purtilo.

“I thus see the EU’s move as being less about AI than it is about policy,” Purtilo told ClearanceJobs. “Government practices should offer transparency and accountability, but as AI methods offer neither, these proposed regulations represent a first attempt to push back at opaque technologies that cloak the basis for impactful decisions.”

That could stifle AI’s development, or perhaps simply allow it to be better controlled and managed.

“Whether or not this latest effort will succeed is anyone’s guess,” said Pund-IT’s King. “But the EU is taking action because it believes it should and because it can.”
