29/05/2024 09:22 AM

By: Lee Wan Sie

Artificial intelligence (AI) is forecast to reach a market volume of US$826.70 billion by 2030 (Statista) [1]. Its transformative power in creating value, boosting economic opportunities, and strengthening competitive advantage has put the emerging technology at the centre of national growth policies, including Malaysia’s.

In early May, Prime Minister Datuk Seri Anwar Ibrahim launched three initiatives that underscore his government’s commitment to nurture Malaysia’s AI ecosystem – the AI Talent Roadmap for Malaysia 2024-2030, the AI Faculty at Universiti Teknologi Malaysia, and the Malaysia AI Consortium.

As the potential of AI grows, so have concerns about the risks of using it, including the technology’s impact on privacy and security. People are wary of how AI technologies use and apply the data they collect.

In the past five years, consumer trust in AI has fallen globally from 61 per cent to 53 per cent (Edelman) [2]. There have also been instances where AI models do not perform as intended. When a model is not trained and tested against representative datasets, for example, its output can be biased against certain populations.

As the technology matures and becomes more ubiquitous, organisations and companies are doing more to ensure that the AI systems they implement are not only making accurate, bias-aware decisions without violating data privacy but are also being used in a responsible manner. In recent years, both the public and private sectors have focused on developing guardrails and principles to ensure that AI is developed and deployed in a safe, trustworthy, and ethical fashion.


Walk the talk – demonstrating responsible AI use

To address the significant concerns many have over the unintended consequences of AI use, organisations and companies need to go beyond committing to responsible AI principles and do more to demonstrate to their stakeholders that they are implementing responsible AI in an objective and verifiable way.

Voluntary self-assessment is a start. AI Verify, the world’s first voluntary AI governance testing framework and toolkit, enables businesses to demonstrate their deployment of responsible AI through technical tests and process checks. Developed by Singapore’s Infocomm Media Development Authority (IMDA), AI Verify has two components.

Firstly, the governance testing framework specifies the testable criteria and the corresponding processes required to carry out the test.

Secondly, the software toolkit conducts technical tests and records the results of process checks.

AI Verify brings together the disparate ecosystem of testing sciences, algorithms and technical tools to enable companies to assess their AI models holistically in a user-friendly way. AI Verify also facilitates the interoperability of AI governance frameworks in multiple markets and contributes to the development of international AI standards. It encompasses a testing framework that is aligned with internationally accepted AI ethics principles such as those from the EU and OECD.


Maintaining a pro-innovation approach to upholding responsible AI use; public-private partnerships critical

While guidelines are key to safeguarding responsible AI use, it is important to ensure that these guidelines do not inadvertently restrict innovation. This light-touch, flexible approach to managing AI risks is reflected in the artificial intelligence governance framework published by the Association of Southeast Asian Nations (ASEAN) in February this year. The voluntary ASEAN AI Guide provides seven guiding principles and recommends best practices for implementing responsible AI in the region.

To truly move the needle on responsible AI governance, close public-private collaboration in discussions and action is vital. Only by working with industry can we harness the collective power of public-private partnerships to advance AI testing tools, promote best practices and standards, and enable responsible AI.

The AI Verify Foundation, which was launched by the IMDA in 2023, brings together AI owners, solution providers, users and policymakers to support the development and use of AI Verify to address AI risks.

Companies including AWS, Google, Meta, Microsoft and Standard Chartered Bank have tested AI Verify and provided IMDA with valuable feedback on the framework. Such industry feedback is consistently channelled into the development of the framework to strengthen AI governance testing and evaluation.


Responsible AI underpins the technology’s future

The journey towards responsible AI has just begun and progress requires commitment from, and collaboration with, stakeholders across the AI ecosystem. At the upcoming ATxSummit, global government and business leaders as well as visionaries and industry experts will gather in Singapore to advance discussions around AI governance and explore partnerships to bridge the gap between AI's expanding capabilities and the necessary safeguards.

Generative AI, for instance, has given businesses the ability to generate content quickly and cheaply, but the technology brings its own set of risks and challenges.

AI is poised to transform the global economy and will profoundly impact the way we work, live and play, bringing economic and social benefits for all. But its immense potential cannot be fully harnessed without addressing concerns about the risks of using AI and solidifying trust in the technology. The path ahead may be challenging to navigate but it is a path we must take.


Lee Wan Sie is Director for Data-Driven Tech at Singapore’s Infocomm Media Development Authority. In the area of AI, her responsibilities include driving Singapore’s approach to AI governance, growing the trustworthy AI ecosystem in Singapore, and collaborating with governments around the world to further the development of responsible AI. She is also responsible for encouraging greater use of emergent data technologies, such as privacy enhancing tech, to enable more trusted data sharing in Singapore.

About Infocomm Media Development Authority (IMDA)

The IMDA leads Singapore’s digital transformation by developing a vibrant digital economy and an inclusive digital society. As Architects of Singapore’s Digital Future, the agency fosters growth in Infocomm Technology and Media sectors in concert with progressive regulations, harnessing frontier technologies, and developing local talent and digital infrastructure ecosystems to establish Singapore as a digital metropolis.

[1] Statista, 2024, Artificial Intelligence - Worldwide

[2] Edelman Trust Institute, 2024, 2024 Edelman Trust Barometer Global Report

(The views expressed in this article are those of the author(s) and AWS and do not reflect the official policy or position of BERNAMA)