
Thailand is drafting legislation to establish an artificial intelligence (AI) ecosystem and broaden AI adoption. The Electronic Transactions Development Agency (ETDA) concluded an online public hearing on the draft, intending to present it to the Cabinet by the end of July.
The development of Thailand’s AI regulatory framework has been grounded in soft laws and guidelines. Sak Segkhoonthod, senior advisor at ETDA, emphasised the need for an AI law to effectively manage the technology’s impacts.

Since 2022, Thailand has studied global models, particularly the EU's AI Act, and crafted two draft laws on regulating AI-enabled services and promoting AI innovation. These will be merged into a single AI law adopting a risk-based framework that categorises AI systems as prohibited, high-risk, or general-use.
ETDA has spearheaded AI governance by proposing a four-tier system. The first tier encourages collaboration with other countries to enhance Thailand’s global standing in AI governance, aligning with UNESCO’s principles. The second tier involves sectoral regulators overseeing policies, the third focuses on corporate implementation with practical tools and guidelines, and the fourth promotes AI literacy at the individual level.
The AI legislation aims to protect users from potential risks, establish governance rules, and remove legal barriers, facilitating wider AI adoption. For instance, the Transport Ministry's existing regulations do not accommodate autonomous vehicles. The new AI law will support innovation, enabling relevant agencies to develop sector-specific AI rules and allowing tech entrepreneurs to test AI in controlled settings. The draft also permits using previously collected personal data for public-benefit AI systems under strict conditions.
The draft sets out principles for supervising AI risks, granting legal recognition to AI actions unless otherwise specified. AI is treated as a human-controlled tool, with its actions attributable to humans. A responsible party may be exempt from liability for an AI-generated act if it could not have foreseen the AI's behaviour.
Sectoral regulators will define prohibited or high-risk AI applications, and AI service providers must adopt global risk management practices. Overseas providers offering AI services in Thailand will need local legal representatives. Companies must label AI-generated content to inform consumers.
Law enforcement
ETDA’s AI Governance Center (AIGC) will coordinate law enforcement, with sectoral regulators managing high-risk AI rules. Two committees will be formed: one for issuing frameworks and policies, and another for monitoring AI risks.
Feedback from 80 organisations, including Google and Microsoft, indicates support for the draft's balance between prohibiting harmful uses and promoting innovation. However, concerns were raised about whether sectoral regulators are ready to supervise AI effectively, and about AI sovereignty. Common benchmarking guidelines for Thai language models are under consideration.

Ratanaphon Wongnapachant, CEO of SIAM.AI CLOUD, supports the legislation to prevent misuse and enforce responsible AI practices. Pochara Arayakarnkul, CEO of Bluebik Group, cautioned that the definition of AI in the legislation could have significant implications. AI governance must consider multiple risk dimensions, as industries adopt AI differently, he said.

Touchapon Kraisingkorn, head of AI Labs at Amity Group, suggested objective criteria for defining high-risk and prohibited AI, using metrics such as user numbers and potential damages. He further recommended a tiered compliance framework for small and medium-sized enterprises, applied regardless of a company's age, and a certification programme for "AI auditors", reported Bangkok Post.
He also advocated for an "AI incident portal" to build transparency and trust. For labelling AI-generated content, he suggested a phased approach beginning with a voluntary programme, to address deepfakes and misinformation without placing undue burden on the industry.