A few months ago, “Clarkesworld,” the esteemed sci-fi magazine, temporarily halted submissions. The reason? A dramatic increase in machine-generated entries as a byproduct of the popularization of generative AI. The irony of AI-generated sci-fi and an overwhelmed editorial staff is no laughing matter — it reveals deep-seated concerns held by many.

The multifaceted capabilities of AI services extend to translation, art, coding and even inspiration, making them indispensable in numerous sectors. Yet, the “Clarkesworld” response is a mere symptom of the societal impact of AI. Even more concerning is the ability of AI to replicate faces and voices, leading to sophisticated fraud and eroding the bedrock of interpersonal trust. When weaponized by totalitarian regimes and cybercriminals, this poses a severe threat to democratic systems.

On top of these concerns, the most sophisticated generative AIs are each trained by a single organization. This leads to one-size-fits-all models that unintentionally echo the biases of their creators. As a result, the diverse cultural nuances of a global audience are frequently neglected or mishandled.

At London Tech Week, which I attended at the invitation of the organizers in June, a cross-sector consensus emerged from intense discussions on such issues: The future of AI must avoid centralization and instead adopt a diverse democratic model. To maintain a delicate balance between the advancement of AI and safety, we need to move away from placing unwavering trust in a handful of developers. Instead, we must provide channels for citizens across the world to participate and collaboratively build a globally trusted AI assurance framework, hallmarked by shared ethical standards and usage principles.

During these dialogues, I reflected on one of my favorite sci-fi novellas, “The Lifecycle of Software Objects” by the tremendous Ted Chiang. The narrative features a future tech firm merging AI and VR to create virtual life forms. The two protagonists come to understand that AI, like a living organism, requires continual nurturing and education to evolve.

“Experience is algorithmically incompressible,” Chiang writes, reflecting on the process of raising virtual life. “If you want to create the common sense that comes from twenty years of being in the world, you need to devote twenty years to the task.”

This simple but profound insight provides a roadmap for AI democratization: Sustainable consensus can only be achieved through public participation. This is why I joined the Collective Intelligence Project (CIP), alongside industry partners such as OpenAI and Anthropic, in launching Alignment Assemblies. Taiwan is setting a global precedent in aligning AI values through civic deliberation, in what we term an “ideathon.”

“The Lifecycle of Software Objects” ends with the AI entities exhibiting independent thought, emotional intelligence and deep social connections thanks to the protagonists’ steadfast patience. I envision the future of AI, starting with pioneering Alignment Assemblies, mirroring this evolution. Let’s draw upon the rich tapestry of diverse experiences and the “wisdom of crowds” accumulated over thousands of years, and shape AIs into trusted and wise partners dedicated to societal well-being.