The Who, Where, and How of Regulating AI
Source: IEEE Spectrum


During the past year, perhaps the only thing that has advanced as quickly as artificial intelligence is worry about artificial intelligence.

In the near term, many fear that chatbots such as OpenAI’s ChatGPT will flood the world with toxic language and disinformation, that automated decision-making systems will discriminate against certain groups, and that the lack of transparency in many AI systems will keep problems hidden. There’s also the looming concern of job displacement as AI systems prove themselves capable of matching or surpassing human performance. And in the long term, some prominent AI researchers fear that the creation of AI systems that are more intelligent than humans could pose an existential risk to our species.

The technology’s rapid advancement has brought new urgency to efforts around the world to regulate AI systems. The European Union got started first, and this week, on 14 June, it took a step forward when one of its institutions, the European Parliament, voted to advance the draft legislation known as the AI Act. But China’s rule-makers have moved quickest to turn proposals into real rules, and countries including Brazil, Canada, and the United States are following close behind.

The E.U. and the U.K. offer a study in contrasts. The former is regulation-forward; the latter is laissez-faire.

Remarkably, some of the calls for regulations are coming from the very companies that are developing the technology and have the most to gain from unbridled commercial deployment. OpenAI’s CEO, Sam Altman, recently told the U.S. Congress in written testimony that “OpenAI believes that regulation of AI is essential.” He further urged lawmakers to consider licensing requirements and safety tests for large AI models. Meanwhile, Sundar Pichai, CEO of Google and its parent company, Alphabet, said recently that there will need to be “global frameworks” governing the use of AI.

But not everyone thinks new rules are needed. The nonprofit Center for Data Innovation has endorsed the hands-off approach taken by the United Kingdom and India; those countries intend to use existing regulations to address the potential problems of AI. Hodan Omaar, a senior policy analyst at the nonprofit, tells IEEE Spectrum that the European Union will soon feel the chilling effects of new regulations. “By making it difficult for European digital entrepreneurs to set up new AI businesses and grow them, the E.U. is also making it harder to create jobs, technological progress, and wealth,” she says.

What does the E.U.’s AI Act do?

The course of events in Europe could help governments around the world learn by example. In April 2021 the E.U.’s European Commission proposed the AI Act, which uses a tiered structure based on risk. AI applications that pose an “unacceptable risk” would be banned; high-risk applications in fields such as finance, the justice system, and medicine would be subject to strict oversight; and limited-risk applications, such as chatbots, would require disclosures.

On Wednesday, 14 June, as noted above, the European Parliament passed its draft of this law—an important step, but only a step, in the process. Parliament and another E.U. institution, the Council of the European Union, have been proposing amendments to the Act since its 2021 inception. Three-way negotiations over the amendments will begin in July, with hopes of reaching an agreement on a final text by the end of 2023. If the legislation follows a typical timeline, the law will take effect two years later.

Connor Dunlop, the European public policy lead at the nonprofit Ada Lovelace Institute, says that one of the most contentious amendments is the European Parliament’s proposed ban on biometric surveillance, which would include the facial-recognition systems currently used by law enforcement.

