Singapore to monitor the development and use of responsible AI through new Foundation

Set up by the IMDA, the AI Verify Foundation comprises seven premier members, among them some of the world's largest technology companies.

To monitor the responsible development and use of artificial intelligence (AI), Singapore has set up the AI Verify Foundation, as announced by Josephine Teo, Singapore's Minister for Communications and Information, at the ATxAI conference, part of Asia Tech x Singapore (ATxSG).

The foundation has been set up by Singapore's Infocomm Media Development Authority (IMDA) and aims to harness the collective power and contributions of the global open-source community to develop testing tools for the responsible use of AI. It also plans to strengthen AI testing capabilities and assurance to meet the needs of companies and regulators around the world.

The AI Verify Foundation has seven premier members, namely Aicadium (Temasek's AI Centre of Excellence), Google, IBM, IMDA, Microsoft, Red Hat, and Salesforce. These members will set the strategic direction and map out the development of AI Verify. The Foundation also includes more than 60 general members, comprising local and multinational companies such as Adobe, DBS, Meta, SenseTime, and Singapore Airlines.

AI Verify was launched as a minimum viable product for an international pilot last year, drawing interest from over 50 local and multinational companies including IBM, Dell, Hitachi, and UBS. The product is now available to the open-source community, offering a testing framework and toolkit consistent with internationally recognised AI governance principles, in alignment with those from the EU, the OECD, and Singapore.

The product features an integrated interface that produces testing reports covering different governance principles for an AI system, enabling companies to be more transparent about their AI by sharing these reports with stakeholders.

Current AI governance principles such as transparency, accountability, fairness, explainability, and robustness will continue to apply to generative AI. The Foundation aims to draw on expertise from the open-source community to broaden AI Verify's capability to evaluate generative AI as the science and technology of AI testing develop.

Michaela Browning, VP Government Affairs & Public Policy, APAC, Google said: “The AI Verify Foundation is an important step in ensuring that AI is used for good and that its benefits are shared by everyone. Its work will help to promote transparency and accountability in AI development, and to ensure that AI systems are fair, unbiased, and safe.”



Lead image / Minister Josephine Teo
