OpenAI chief executive Sam Altman at a recent US Senate hearing called for direct regulatory oversight of large language models © Patrick Semansky/AP

For regulators trying to get their heads around the new generation of artificial intelligence systems such as ChatGPT, there are two very different problems to confront. One is when the technology does not work as intended. The other is when it does.

Generative AI, which produces text or images automatically, has become a potential menace because of its awesome power and its uncontrollability — a noxious combination that presents a unique set of problems for anyone hoping to limit its capacity for harm.

Before ChatGPT, a consensus had largely formed around the objectives of AI regulation. All the attention was on trying to control the applications of the technology, with a particular focus on its use in high-risk situations such as healthcare. Now, an altogether different question has presented itself. When a smart, all-purpose chatbot has the ability to cause upheaval across a wide range of human activity, is it time to regulate the AI models themselves?

So-called general purpose technologies such as AI, which can be used for many different things, present a particular problem for regulators. With AI, it is difficult to separate benign uses from the more sinister. And AI makers admit they cannot explain exactly how the technology works, or predict when a particular input in the form of a prompt will lead to a particular output.

The encouraging news is that there is plenty of work going on to get to grips with the unique problems presented by the technology. For governments around the world, the question now is whether to step in and back these efforts with the force of formal regulation.

The so-called large language models that lie behind generative AI services such as ChatGPT fail the most basic test of effectiveness for any piece of technology: that their makers can clearly specify what they are intended to do, and then measure whether they achieve those objectives. There is little that is repeatable in their performance, and evaluations of their output are highly subjective.

In the US, the National Institute of Standards and Technology has been working with experts to try to come up with standards for how these systems should be designed, tested and deployed.

Another push has been for transparency, potentially making it easier to subject the models to a greater degree of outside scrutiny. Understanding what is happening inside a learning system such as an LLM is not as straightforward as exposing the code in traditional software.

But makers of large models such as OpenAI and Google face reputational risk from the fear caused by such a powerful and opaque technology, and are keen to find ways to satisfy the hunger for more openness. After the chief executives of four of the leading AI companies visited Washington earlier this month, the White House announced they would submit their models for outside scrutiny at the annual Defcon cyber security conference this August.

Setting standards for safety processes, increasing transparency about the models’ workings and giving outside experts a chance to kick the tyres are all ways to increase assurances about LLMs. The question is what formal regulation is needed — and what restrictions should be placed on models deemed a threat.

One possibility, suggested by Alexandr Wang, chief executive of Scale AI, would be to treat AI the same way as GPS, the satellite positioning system, limiting the most powerful versions of the technology to restricted uses. But it would be hard to impose limits like this in a competitive technology market. Another approach, recommended this week by OpenAI chief executive Sam Altman, would be to subject LLMs to direct regulatory oversight. A licensing regime could be used to ensure they adhere to certain safety standards and have been appropriately vetted.

Despite its appeal, this approach has obvious drawbacks. It would risk carving out a separate market of large, regulated models controlled by a handful of companies with the resources to operate in a highly regulated environment.

The current breakneck pace of development in AI would also cause problems. By definition, today’s most advanced models quickly become tomorrow’s routine pieces of software. At the same time, some of the capabilities currently available only in large, all-purpose models such as ChatGPT may soon also come from much smaller systems trained to handle narrower tasks.

None of this lends itself to easy answers. But with the technologists themselves calling for oversight of the smart bots, some form of direct regulation seems inevitable.

richard.waters@ft.com

Letter in response to this article:

AI development and use would benefit from more ‘friction’ / From Mikkel Krenchel, New York, NY, US
