Once considered a far-fetched technological concept, artificial intelligence (AI) has entered our daily lexicon and become part of our everyday lives, often in ways we barely notice.
From the news we read online to the routes suggested by our navigation apps, AI is increasingly shaping the choices we make.
What comes next, however, is not just about AI influencing our options. It is about AI weighing in on the critical decisions that shape our society.
The debate has shifted. The question is no longer if AI will be part of decision-making, but how much authority we are prepared to hand over.
The rise of decision-making machines
AI systems are already being deployed in areas where speed, accuracy, and efficiency are paramount.
Algorithms help medical practitioners identify diseases at earlier stages, sometimes outperforming human specialists in detecting anomalies.
In education, AI tools guide educators and publishers in determining the most effective content formats.
These examples show that AI is not replacing human judgment outright. Instead, it is acting as a decision-support system, providing insights that would take humans far longer to calculate.
Yet the more reliable these systems become, the more likely it is that humans will defer to them. This gradual shift raises the question: at what point do we stop leading and start following?
The benefits we cannot ignore
There are valid reasons why AI is becoming so deeply embedded in decision-making. It can process enormous amounts of data in seconds, detect patterns invisible to humans, and propose solutions without fatigue or distraction. Better still, it never complains.
These strengths are equally evident in business and logistics. Retailers use AI to predict consumer demand with remarkable accuracy, while global supply chains rely on algorithms to anticipate disruptions and reroute goods before delays spiral out of control. In such contexts, the promise of AI is not just speed. It is resilience.
The risks that keep us awake
Yet we cannot discuss the benefits without confronting the risks. Algorithms are only as fair as the data they are trained on. When data reflects human biases, AI can amplify them.
This has already been observed in hiring tools that unintentionally disadvantage certain applicants, and in predictive policing systems that may reflect existing social patterns.
Equally worrying is the risk of over-reliance. Humans may become so accustomed to AI-driven choices that they stop questioning outcomes.
The danger is not only in what AI decides, but in what we no longer decide for ourselves.
When nations rely on AI
Beyond individuals and businesses, governments are also adopting AI to improve governance and national planning. From traffic management to economic forecasting, the applications are proliferating.
It is not impossible to imagine a future where national strategies, even foreign policy options, are shaped by AI-generated recommendations.
The growing reliance on AI in national planning raises profound questions about sovereignty, responsibility, and trust.
Who is accountable if an AI-driven policy goes wrong? Can a country claim ownership over a decision that was crafted mainly by an algorithm?
These are not hypothetical issues. These are questions policymakers will have to address sooner rather than later.
Malaysia and the region
For Malaysia and the broader ASEAN region, the opportunities are vast. AI can help strengthen healthcare systems, improve disaster response, and expand digital education.
However, adoption must be guided by clear principles of transparency, accountability, and inclusiveness.
If AI is to play a bigger role in shaping tomorrow's decisions, it should do so in a way that reflects local values and priorities, rather than relying on imported models from larger economies.
Shaping tomorrow together
AI is already with us, making small suggestions that shape the rhythm of our daily lives. But its real impact will be in the weightier decisions of tomorrow, from public health to national policy. The challenge is to embrace its benefits while guarding against its risks.
The future will not be shaped by AI alone, nor by humans alone. It will be defined by how we choose to share the responsibility of decision-making.
The decisions of tomorrow will carry both human and artificial signatures, and what matters most is that we remain conscious of the balance between the two.
-- BERNAMA
Zulkifli Musa is a Principal Assistant Registrar with Universiti Sains Malaysia.