While it will take some time before the proposed rules are adopted, they would prohibit the use of live facial recognition in public spaces and restrict AI in areas that could threaten people's safety or fundamental rights. One argument made for these restrictions is that AI decisions can't be explained, but instead of banning AI outright, the regulators might simply require that AI tools used in those high-risk cases be explainable, a capability well within the reach of the AI technology available today.
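To illustrate what "explainable" can mean in practice, here is a minimal sketch using scikit-learn's permutation importance to report which inputs drive a model's predictions. The dataset, model, and feature rankings below are illustrative assumptions for the sketch, not anything referenced in the proposed rules.

# A minimal sketch of one common explainability technique: permutation
# feature importance. The dataset and model here are illustrative only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Rank features by how much shuffling each one degrades held-out accuracy,
# producing a human-readable account of what the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, result.importances_mean),
                          key=lambda pair: pair[1], reverse=True)[:5]:
    print(f"{name}: {score:.3f}")

Techniques like this (along with richer tools such as SHAP or LIME) are readily available today, which is the point: requiring an explanation for a high-risk decision is a far narrower intervention than prohibiting the technology.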
The regulations should also enable individuals to opt out. I want AI working alongside my doctors to review my MRIs and X-rays; AI never gets tired and is increasingly able to detect issues doctors may miss:
“Presented at a news briefing in Brussels, the draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, school enrollment selections and the scoring of exams. It would also cover the use of artificial intelligence by law enforcement and court systems — areas considered “high risk” because they could threaten people’s safety or fundamental rights.
Some uses would be banned altogether, including live facial recognition in public spaces, though there would be some exemptions for national security and other purposes.
The rules have far-reaching implications for major technology companies including Amazon, Google, Facebook and Microsoft that have poured resources into developing artificial intelligence, but also scores of other companies that use the technology in health care, insurance and finance. Governments have used versions of the technology in criminal justice and allocating public services.
Companies that violate the new regulations, which are expected to take several years to debate and implement, could face fines of up to 6 percent of global sales.”
Overview by Tim Sloane, VP, Payments Innovation at Mercator Advisory Group