Singaporean authorities have revealed a framework and a set of innovative testing tools that assist companies across various sectors in enhancing governance, transparency, and accountability in their artificial intelligence (AI) applications.
This pioneering initiative, launched by the Infocomm Media Development Authority (IMDA) and the Personal Data Protection Commission (PDPC), is named "AI Verify". It comes at a time when tech companies are racing to seize the immense opportunities of AI adoption across various fields, despite the lack of mature global standards for mitigating the risks of developing such applications. These risks are among the challenges the Singaporean initiative targets.
The framework and toolkit introduced by Singapore are currently available as a "Minimum Viable Product (MVP)", meaning they contain enough features for testing and can be revised in response to the feedback that companies provide. They are nonetheless considered advanced in their design, as the IMDA drew on international guidelines and principles for AI ethics in their development. These principles cover transparency, security, safety, fairness, and accountability. Additionally, the IMDA identified six main risks involved in adopting generative AI applications and presented a framework for addressing them, including inherent cultural and demographic biases in application design and intellectual property rights violations. Security testing and data management systems (including data privacy principles) were excluded from this pilot, since they have already received considerable attention globally.
Nevertheless, the product does not aim to establish ethical standards so much as to encourage companies to test their AI systems against the product's verification standards. It checks that these systems and procedures genuinely perform as the companies claim, according to a set of AI governance principles and concepts adopted both internationally and locally in Singapore. In turn, the reports resulting from these tests make company systems more transparent to investors, stakeholders, and customers, and bolster their confidence in private sector companies.
The tested application can be accessed through the cloud, which provides greater protection, security, and ease of access. Currently, participation in the program is voluntary and open to companies wishing to conduct a self-assessment of their artificial intelligence applications and benefit from the published reports. The product consists of two elements: first, a governance testing framework, which defines the test standards and the required process; and second, the software tools used to conduct the technical test, which also record and save the results.
However, the team behind the "AI Verify" pilot faced challenges that typically confront regulatory bodies seeking to set standards for the commercial use of artificial intelligence. These include defining a concept of fairness and setting a unified standard for it across different sectors, cultures, and contexts. Additionally, the rapid evolution of AI across countless fields makes it difficult for the framework and its accompanying verification software to keep pace. This demands continuous updating of the product's elements, which can be partially addressed by designing the product to be as flexible as possible in anticipation of technological advancements.
On the other hand, the product can benefit from crowdsourcing opportunities and collaboration with technology solution providers to identify and fill gaps. In addition, Singapore plans to draw on open-source communities to enhance the toolkit's capabilities and minimize the risks of adopting artificial intelligence.
By inviting global and local companies to participate in the "AI Verify" initiative, Singapore aims to achieve several objectives. The most important of these is drawing on companies' feedback and recommendations to ensure the product meets the needs of different sectors as they continue to harness AI in ways that foster trust with both internal and external stakeholders.
Singapore also aspires to contribute to the development of globally recognized AI standards through the "AI Verify" initiative, especially as the developers of these standards take into account best practices worldwide. It likewise seeks to facilitate interoperability between different AI governance frameworks: it currently works with regulatory bodies and institutions developing global standards to achieve compatibility with existing frameworks, which opens opportunities to market local AI products and services in many foreign markets. Singapore also hopes to build a strong local community for testing AI applications, in collaboration with like-minded industry leaders, policymakers, and civil society, ensuring the initiative aligns with the AI sector's requirements and gains the trust of relevant parties.
A range of companies can benefit from AI Verify, especially owners of artificial intelligence systems who are keen to verify the performance of their systems against globally accepted rules and principles of AI governance. It also benefits technology solution providers, AI application developers, and researchers wishing to submit new testing algorithms and models to the IMDA. The product likewise suits technology service providers wishing to offer AI testing consultancy to their clients, as well as companies looking to integrate the product with their services.
The IMDA hopes that the cumulative efforts associated with the product will eventually facilitate its emergence as a benchmark in the field of AI ethics.
It is worth mentioning that the Singaporean government has been adopting artificial intelligence applications for many years, which led it to work with more than 60 local and international companies and institutions to develop the "Model AI Governance Framework". This framework has been well received across various sectors, especially the financial and health sectors, where Singapore is actively adopting AI applications.
References:
- AI Verify Foundation, https://aiverifyfoundation.sg/
- IMDA press release, "Singapore launches AI Verify Foundation to shape the future of international AI standards through collaboration", https://www.imda.gov.sg/resources/press-releases-factsheets-and-speeches/press-releases/2023/singapore-launches-ai-verify-foundation-to-shape-the-future-of-international-ai-standards-through-collaboration
- AI Verify Foundation, "Generative AI: Implications for Trust and Governance" (discussion paper), https://aiverifyfoundation.sg/downloads/Discussion_Paper.pdf
- World Economic Forum, "How Singapore is demonstrating trustworthy AI", https://www.weforum.org/agenda/2023/01/how-singapore-is-demonstrating-trustworthy-ai-davos2023/
- Smart Nation Singapore, "National AI Strategy", https://www.smartnation.gov.sg/files/publications/national-ai-strategy.pdf
- The Straits Times, "MOH agency, Microsoft to develop AI tool for healthcare workers in S'pore", https://www.straitstimes.com/singapore/health/moh-agency-microsoft-to-develop-ai-tool-for-healthcare-workers-in-s-pore
- The Straits Times, "Forum: Singapore has made moves to harness full potential of AI and ensure responsible use", https://www.straitstimes.com/opinion/forum/forum-singapore-has-made-moves-to-harness-full-potential-of-ai-and-ensure-responsible-use
- ZDNet, "Singapore puts AI on the cloud to boost public sector deployment", https://www.zdnet.com/article/singapore-puts-ai-on-the-cloud-to-boost-public-sector-deployment/