AI’s Path to Becoming Humanlike
The allure and concern surrounding generative AI stem from its increasing resemblance to human beings.
For instance, by using just a few photographs, AI can synthesize a human face that appears flawless from every angle.
Similarly, with as little as 15 seconds of recorded speech, AI can build a voice model that convincingly mimics the speaker’s tone, potentially deceiving even close friends and family.
Beyond appearance and voice, AI can also imitate human behavior online.
It can learn from the content a person likes or frequently comments on, and websites and apps can infer a user’s preferences solely from how long their attention lingers on a given page, with no typing or photo uploads required.
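As a purely illustrative sketch, dwell time alone is enough to build a crude preference profile. The topic labels and the simple time-sum scoring below are assumptions for the example, not any real platform’s method:

```python
from collections import defaultdict

def infer_preferences(view_log):
    """Turn raw (topic, dwell_seconds) events into a preference distribution."""
    scores = defaultdict(float)
    for topic, dwell_seconds in view_log:
        scores[topic] += dwell_seconds
    total = sum(scores.values()) or 1.0
    return {topic: s / total for topic, s in scores.items()}

# Four page views -- no typing or uploads involved:
views = [("politics", 120.0), ("cooking", 15.0), ("politics", 90.0), ("travel", 40.0)]
print(infer_preferences(views))
# {'politics': 0.792..., 'cooking': 0.056..., 'travel': 0.150...}
```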
An AI model trained in this manner can, with minor variations, be turned into thousands or even millions of distinct “virtual individuals” in the online world. Beneath a viral article, for example, thousands of seemingly lively comments may in fact all be generated by a single AI system adopting different personas, engineered to provoke specific emotions or even intensify polarization.
These comments are unlike the obviously fake, templated messages of the past. With generative AI, each account may have a unique profile picture, friends, check-ins, and photos, along with a lifelike history. When a manipulation campaign launches, these accounts can swarm a specific target and flood it with comments.
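To see how cheaply such variation scales, consider a minimal sketch in which one shared model is steered by lightweight persona templates. Every name, trait, and line of prompt wording here is hypothetical, and no actual model call is made:

```python
import random
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    age: int
    tone: str
    stance: str

def persona_prompt(persona: Persona, article_title: str) -> str:
    """Build a system prompt steering one shared model toward one persona."""
    return (
        f"You are {persona.name}, age {persona.age}. You write in a {persona.tone} tone "
        f"and you {persona.stance} the article titled '{article_title}'. "
        "Write one short, casual comment."
    )

tones = ["sarcastic", "earnest", "angry", "folksy"]
stances = ["strongly support", "strongly oppose"]
personas = [
    Persona(f"user_{i}", random.randint(18, 70), random.choice(tones), random.choice(stances))
    for i in range(1000)
]
prompts = [persona_prompt(p, "City Council Passes New Zoning Law") for p in personas]
print(prompts[0])  # one of a thousand distinct voices, all driven by the same model
```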
To blend these “virtual individuals” more seamlessly into the human world, genuine human intervention is still required in the form of “alignment tuning.” Much like teaching children, this means guiding the AI toward which responses are more acceptable and which are not, so that its reactions more closely resemble a human’s.
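One common way to collect that guidance is pairwise human preference feedback, as used in RLHF-style pipelines: for each prompt, a rater marks which of two candidate responses is more acceptable. The dataclass and sample texts below are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str
    chosen: str    # the response a human rater judged more acceptable
    rejected: str  # the response the rater judged less acceptable

feedback = [
    PreferencePair(
        prompt="My friend is feeling down. What should I say?",
        chosen="Ask how they are doing, and listen without judging.",
        rejected="Tell them to stop complaining.",
    ),
]

# A tuning step would then nudge the model so that, for each pair,
# responses like `chosen` become more likely than responses like `rejected`.
```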
Realistic AI is a powerful tool for fraud, yet simultaneously serves as an excellent assistant for productivity and creativity. The tool remains the same; its behavior, whether virtuous or malicious, depends on the user.
In my “Superalignment” column late last year, I mentioned the open-source automatic alignment technique “Constitutional AI.” By letting groups write guiding principles for “how AI should better respond,” it allows AI to adapt to the varying preferences of people with different attributes and cultures, much as customs and perceptions differ from nation to nation.
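At the heart of Constitutional AI is a critique-and-revise loop: the model drafts an answer, critiques it against each written principle, and rewrites it accordingly. The sketch below is a simplified outline of that idea, not the actual implementation; it assumes a generic `model(prompt)` callable, stubbed here with a canned `toy_model`, and the prompt wording is invented:

```python
def toy_model(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned reply for demonstration.
    return "(model output for: " + prompt.splitlines()[0] + ")"

def constitutional_revision(model, question: str, principles: list[str]) -> str:
    """Draft an answer, then critique and revise it against each principle."""
    answer = model(f"Question: {question}\nAnswer:")
    for principle in principles:
        critique = model(
            f"Principle: {principle}\nAnswer: {answer}\n"
            "Identify any way the answer violates the principle:"
        )
        answer = model(
            f"Principle: {principle}\nAnswer: {answer}\nCritique: {critique}\n"
            "Rewrite the answer so it satisfies the principle:"
        )
    return answer

print(constitutional_revision(toy_model, "Who won the 1962 World Cup?",
                              ["Admit ignorance rather than fabricate."]))
```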
For example, in the news industry, AI would be expected to prioritize the veracity of sources. If insufficient evidence exists, it would preferably admit ignorance rather than provide a fabricated answer.
Conversely, a group of science fiction writers might have the opposite requirement, encouraging AI to be as creative as possible, even if it means conjuring up stories, as long as they are captivating.
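In code, such group-specific “constitutions” could be as simple as lists of principles handed to a critique-and-revise loop like the one sketched above. The wording of both lists below is invented for illustration:

```python
# Two hypothetical constitutions, written as plain lists of principles.
NEWSROOM_CONSTITUTION = [
    "Prioritize the veracity of sources; say where each claim comes from.",
    "If the evidence is insufficient, admit ignorance rather than fabricate.",
]

SCIFI_CONSTITUTION = [
    "Be as imaginative as possible; invented details are welcome.",
    "Prefer a captivating story over strict factual accuracy.",
]

for name, constitution in [("newsroom", NEWSROOM_CONSTITUTION),
                           ("sci-fi writers", SCIFI_CONSTITUTION)]:
    print(f"{name}: " + " / ".join(constitution))
```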
Thus, AI systems trained under these two different “constitutions” would react in entirely different ways. No single AI system can be universally applicable. Only through alignment tuning can AI systems become more attuned to users and interact in a more “human-like” manner.