Here's how China wants to regulate how people and companies use AI in the country.
Artificial intelligence has moved beyond buzzword status and now permeates the tech industry worldwide. The benefits are huge, but so are the risks. Against that backdrop, a report in the South China Morning Post says China plans strict controls over how AI is applied. According to the paper, China aims to balance the hazards and advantages of the technology.
China has released a fresh draft guideline that focuses on at least two areas: how training data is used and secured, and the security of large language models (LLMs), which power generative AI services such as ChatGPT and Baidu's Ernie Bot.
What China wants to do
The Chinese government wants any dataset used for training AI models to be free of copyright violations and not to infringe on individual privacy. The report suggests that training data should be handled by authorized data labellers and reviewers who have passed security checks. In a nutshell, the data ought to be vetted by human eyes before being used to generate anything artificially.
In addition, when building their LLMs, developers should 'be based on foundational models filed with and licensed by authorities', again to ensure that the authorities know what information is being fed into the LLMs used for generative AI.
No ‘illegal content’
The list of content defined as 'illegal' in China is quite broad, covering politically sensitive material; questions about Taiwan, for example, are treated as illegal content. Other kinds of unlawful content include fake news, the promotion of superstition, and pornography. Under the draft, any body of AI training data containing more than 5% illegal content would be blocked under China's cybersecurity law.
The Chinese government has opened the draft for public comment until October 25th.