AI adaptation
Are the standards and definitions for assessing harms tied to a small group of people?
When it comes to AI governance, beyond relevant legislation, what I want to emphasize is a “forward-looking” AI adaptation mechanism.
Why forward-looking?
For example, a few years ago, the Office of Science and Technology Newspaper and I made a video showing the public how easy it is to create a deepfake of my own image using nothing more than a cheap cell phone and a laptop.
In this way, we can act a step ahead of large-scale damage to prevent harm, while at the same time helping the public understand how to mitigate the risks in advance.
The so-called AI adaptation mechanism is forward-looking in two directions: first, quickly discovering new harms through the participation of everyone; second, quickly informing practitioners once a harm is discovered, setting a boundary ahead of the harm to guide development in a safe direction.
Because AI is a general-purpose technology, it is difficult to assess all possible harms during the laboratory R&D stage. There should therefore be a systematic way to understand its impact on society once it is actually deployed, reviewed at least twice a year.
But does this mean that the standards and definitions for assessing harms are tied to a small group of people?
Not exactly. We can adopt two approaches. First, “voluntary notification”: anyone who discovers a potential hazard has a place to raise their hand and discuss it with others in a similar situation. Second, “random sampling”: stratified telephone interviews conducted after determining the proportions of the population. However, this traditional method of public opinion surveying has a drawback: it usually permits only multiple-choice questions, with no room for open-ended follow-up.
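To make the “random sampling” idea concrete, here is a minimal sketch of proportional stratified sampling. It is purely illustrative, not any official survey methodology; the `stratified_sample` helper, the population, and the region labels are all hypothetical:

```python
import random

def stratified_sample(population, stratum_of, sample_size, seed=0):
    """Draw a stratified random sample.

    `population` is a list of respondents; `stratum_of` maps each respondent
    to a stratum label (e.g. a region or age band). Each stratum is sampled
    in proportion to its share of the population.
    """
    rng = random.Random(seed)
    # Group respondents by stratum.
    groups = {}
    for person in population:
        groups.setdefault(stratum_of(person), []).append(person)
    sample = []
    for members in groups.values():
        # Proportional allocation, rounded to a whole number of respondents.
        k = round(sample_size * len(members) / len(population))
        sample.extend(rng.sample(members, min(k, len(members))))
    return sample

# Hypothetical population: 1,000 people, 60% "north" and 40% "south".
population = [{"id": i, "region": "north" if i < 600 else "south"}
              for i in range(1000)]
panel = stratified_sample(population, lambda p: p["region"], sample_size=100)
print(len(panel))  # 100 respondents: 60 from "north", 40 from "south"
```

Each stratum contributes in proportion to its population share, so the resulting panel is statistically representative along the chosen dimension, unlike a simple random dial of whoever answers the phone.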
To tackle this challenge, we can use “deliberative polling,” which combines deliberative discussion with random sampling: inviting several hundred or even thousands of statistically representative people from across the country to join a video conference in small groups and brainstorm together. This allows both in-depth discussion and broader participation.
I hope that people who are not yet using AI, or who have only just been affected by it, will have a chance to understand it through the deliberation process and place it in their own life contexts: if their friends, family, or even their companies start to use advanced generative AI, how will they adapt? In this way, we will be able to plan ahead and respond to the needs of society promptly.