Chinese AI Motherboard by Midjourney

Illustration created by Midjourney with the prompt “a photo of a Chinese flag made out of a large red printed circuit card –ar 16:9 –v 5.2”

 

China has laid out a blueprint for regulating generative AI – technology that powers chatbots like OpenAI’s ChatGPT and Google’s Bard. Overseen by the Cyberspace Administration of China, the new regulations set parameters on the public use of AI, though they don’t apply to research and technologies developed for use overseas.

The rules impose mandatory registration of algorithms with the government and require a “security assessment” for services with potential societal influence. Further embedding the government’s ethos in tech innovation, the law instructs adherence to “core socialist values” and prohibits certain illegal uses of generative AI.

Notably, these regulations could serve as a global reference, tackling hot-button issues such as copyright infringement and data protection. They set a precedent by including explicit requirements for generative AI companies to respect intellectual property rights – which is ironic, since China is the epicenter of worldwide copyright infringement.

The policy also clarifies privacy rights for users and outlines responsibilities for AI platforms to protect personal information. In a bold move, the regulations also encourage developers and suppliers to participate in crafting international AI rules (which may have some impact on global AI regulation).

The Alignment Problem

China’s regulatory requirements shine a bright light on one of the most interesting issues facing AI regulators: the alignment problem, which refers to the challenge of ensuring that AI systems (especially those that operate autonomously) act in a manner that aligns with human values, intentions, and objectives. Which raises the question: Whose human values?

A core aspect of the alignment problem is that it’s not always straightforward to specify what we want an AI system to do, and it’s even harder to specify what we want in a way that leaves no room for misinterpretation. If a machine learning system is given a goal without the proper context or constraints, it may find solutions that technically meet the goal but violate the spirit of what was intended.

For example, an autonomous cleaning robot programmed to minimize visible dirt might learn to scatter a dustpile when no one’s looking, so it can appear to be successful.

We May Be In Way Over Our Heads

Part of China’s new regulations require AI systems to “adhere to core socialist values.” Obviously, no system created in the United States (or any other democratic society) is going to require such constraints. However, that doesn’t mean that we can expect any AI systems to be trained equally or in alignment with anyone’s particular world view. We can’t get humans to align with human values (or even clearly define them), so what hope do we have of getting AI systems aligned? Which raises other questions, including, How will bots powered by AI systems trained on opposing ideologies change our world? What will weaponized generative AI cyberwarfare look like? Not warfare with AI-generated cyberweapons – the military will deal with that. Weaponized information that can be generated at low or no cost and relentlessly tested for efficacy.

An Enduring Conflict

We have been in an enduring cyberwar since the advent of cyberweapons. This is an arms race that will continue forever. We have been in various enduring propaganda wars since the dawn of communication. What we have never dealt with is the speed and adaptability of weaponized information at scale. This is very new. China just drafted regulations that will clearly separate both the capabilities and intentions of their AI models from others. In practice, the alignment problem is an incomplete description of a wide range of AI training problems. We are going to need all kinds of new tools to deal with this.

If you want to get a better understanding of how AI systems are trained and how generative AI and autonomous agents work, sign up for our free online course Generative AI for Execs.

Author’s note: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.

Get Briefed Every Day!

Subscribe to my daily newsletter featuring current events and the top stories in technology, media, and marketing.

Subscribe