
Bangkok Post – China’s interface with AI development

A tablet screen displaying the travel path onboard a Baidu Inc driverless robotaxi in Wuhan, China. Bloomberg

With intense competition between the superpowers on the development of, and interface with, artificial intelligence (AI), does China favour commitments which can converge with those of other countries?

Common ground is certainly possible depending on the elements at hand, while uncommon chasms may also protrude portentously.

Since 2023, China has been advocating the Global AI Governance Initiative. Basically, it espouses a cooperative, consensus-based approach to AI development that is people-centred. It underlines national sovereignty against manipulation and disinformation, while promoting mutual respect between nations. It upholds the protection of personal data and related risk assessment and management, backed by research aimed at the transparency and predictability of AI.

The term “ethics” enters its narrative to prevent discrimination, underpinned by ethical reviews of AI development. The initiative also claims space for the voices of multiple stakeholders and the interests of developing countries. As a corollary, it is agreeable to a role for the UN in establishing an international framework to govern AI, linking development, security and governance.

This initiative was bolstered in September by the publication of its AI Safety Governance Framework which delineates the challenges and needed responses more specifically. This framework is a policy instrument which can be reinforced, in parallel, by a law or regulation. The framework categorises various key risks and highlights actions to deal with them, while also targeting the various stakeholders in the AI techno-stream.

It lists various inherent safety risks: risks from models and algorithms; risks from data; and risks from AI systems. These are compounded by risks in AI applications, in particular cyberspace risks, real-world risks, cognitive risks and ethical risks. An example of risks from algorithms (which are basically techno-models or digital formulae aimed at producing various outcomes) is that they are difficult to understand and need to be made more explicable and transparent to the public.

Risks from data include the illegal collection of data and intellectual property (IP) breaches. Risks from AI systems include exploitation, whether direct or indirect. Cyber risks include cyber-attacks, matched by real-world risks such as criminal activities. Cognitive risks are shaped by mono-focal (rather than plural) information, which limits the potential for broad analysis by the user, resulting in the “cocoon” effect, while ethical risks include discrimination and the widening gap in information know-how.

As preferred actions, the framework advocates “explanation for the internal structure, reasoning logic, technical interfaces, and output results of AI systems, accurately reflecting the process by which AI systems produce outcomes”, as well as secure development standards in the research and development of AI. Personal data protection, respect for IP such as through copyright and patents, protection of the user’s legitimate rights to control and store information, and responsible training that excludes sensitive data such as information on nuclear weapons may all help to address safety risks.

As for risks from AI systems, there is a need to foster risk identification and mitigation to prevent negative acts emanating from such systems (such as malicious attacks), while the other risks can be countered by security protection mechanisms and conditions imposed on users’ applications to prevent harm, coupled with filtering and other protections against discrimination. These can be strengthened by traceability measures, “tiered and category-based management”, laws/regulations, self-regulation and capacity-building to deal with the range of risks.

Various stakeholders targeted by such actions include algorithm developers, AI service providers, specific users such as officialdom, and general users. More specifically, in the legal field, China has taken a step-by-step approach. Even before evolving an AI law, the country passed a personal data protection law influenced by the European Union’s General Data Protection Regulation. The law underlined a consent-based approach to personal data, enabling access to and erasure of data used without such consent, as well as corrective action.

Since then, it has adopted a law to compel the registration of algorithms, as well as both registration and oversight of deep synthesis systems which might produce deep fakes or hallucinations, such as deceptive visual, voice or written content. It has also introduced ethical reviews of such systems.

As part of this gradual, legal approach, in 2023 the country opted for interim measures on generative AI, upholding core socialist values and underlining personal data protection and respect for IP. It also introduced mandatory labelling or watermarking of AI-generated content. Now in draft form is a more comprehensive AI law which seeks to provide even broader coverage of users’ rights, privacy- and IP-related protection, as well as the rights of workers and disadvantaged groups.

It will entrench the need for safety risk assessment and impose obligations concerning critical AI systems with an impact on life and freedoms, by ensuring that they are registered and well supervised. Various compliance guidelines are to be introduced, with due regard for safety and security, access requirements, and obligations to identify those involved in the AI application. The thrust of this draft law is human supervision and control rather than a laissez-faire approach to AI’s self-automation.

On scrutiny, the common ground between these developments in China and other parts of the world includes the need to label AI-generated content as part of consumer protection; privacy and IP protection; and human oversight.

However, there are at least two areas of divergence between China and other parts of the world which now have an AI law. China is much less open as a political system than the latter, and this may result in guardrails which, in practice, are more constraining for various rights and freedoms.

Moreover, there is extensive control from the top of the system, possibly leading to surveillance based on claims of national security. This may result in the imposition of social scores which profile persons in terms of their record, based on whether they are seen to be trustworthy. These anomalies need review to ensure compliance with international law. Intrinsically, there is thus the call for AI to be of service to humanity, with a sense of conscience.

Vitit Muntarbhorn

Chulalongkorn University Professor

Vitit Muntarbhorn is a Professor Emeritus at the Faculty of Law, Chulalongkorn University, Bangkok, Thailand. He has helped the UN in a number of pro bono positions, including as the first UN Special Rapporteur on the Sale of Children, Child Prostitution and Child Pornography; the first UN Special Rapporteur on the Situation of Human Rights in the Democratic People’s Republic of Korea; and the first UN Independent Expert on Protection against Violence and Discrimination based on Sexual Orientation and Gender Identity. He chaired the UN Commission of Inquiry (COI) on Cote d’Ivoire (Ivory Coast) and was a member of the UN COI on Syria. He is currently UN Special Rapporteur on the Situation of Human Rights in Cambodia, under the UN Human Rights Council in Geneva (2021- ). He is the recipient of the 2004 UNESCO Human Rights Education Prize and was awarded a Knighthood (KBE) in 2018. His latest book is “Challenges of International Law in the Asian Region”.


