The application of artificial intelligence (AI) now spans various fields and is easily accessible to people from all walks of life, provided they are tech-savvy.
However, AI – a branch of computer science developed to simulate human intelligence in machines that are programmed to think and act like humans – is a double-edged sword in that it can be used for beneficial or nefarious purposes.
In terms of the latter, AI can be applied to orchestrate sophisticated scams, one of which involves the use of AI-generated deepfake technology that enables scammers to create realistic audio and video impersonations of trusted individuals.
While Malaysia has not recorded any cases of crime involving AI, numerous incidents of this nature have been reported in other countries.
In China, a businessman identified as Guo was nearly cheated of 4.3 million yuan (about RM2.8 million) in May last year after he was tricked by a scammer who used AI to impersonate his close friend.
It was reported that the “friend” wanted to borrow some money and persuaded Guo to transfer the sum. Fortunately, the businessman realised he was being scammed after finding out his friend’s identity had been stolen and he had no knowledge of the transaction. Guo alerted the police and the bank involved and immediately recovered 3.4 million yuan, with ongoing efforts directed towards reclaiming the remaining funds.
In another case reported early last year in the United States, a woman was contacted by a scammer who said her daughter had been kidnapped and demanded a ransom for her release. To convince the mother her daughter had been abducted, the “kidnapper” used AI to spoof the girl’s voice.
This case shocked the US authorities as the cloned voice, generated using AI, was highly convincing.
Commenting on the use of AI by fraudsters, Syahrul Nizam Junaini, a research fellow at the Data Science Centre, Universiti Malaysia Sarawak, warned that Malaysia will not be spared such crimes as the use of AI technology in the country increases.
He said the sophistication of this technology will make it possible for cybercriminals to orchestrate financial scams, especially when personal data is stored in the cloud.
“These perpetrators often target individuals based on information gleaned from social media,” he told Bernama.
According to Syahrul Nizam, the AI used in scams typically involves sophisticated software capable of analysing and replicating an individual’s visual and audio characteristics.
“This technology can mimic one’s speech patterns, speaking style, intonation and even facial expressions to the extent that it becomes challenging to distinguish between genuine and fake,” he said, adding that deepfake is AI software capable of generating fake videos of individuals.
He said the sophistication of this technology will pose significant challenges to law enforcement agencies in handling crimes, particularly those involving scams and other financial offences.
“The use of AI technology to perpetrate criminal activities demands specialised expertise from law enforcement (to handle such cases), especially in the field of digital forensics as evidence for such fraud cases is in digital form.
“Therefore, understanding and knowledge of AI are crucial to gather and analyse evidence," he said.
HIRE MORE IT EXPERTS
Syahrul Nizam added that to ensure the country is prepared to face cybercrime threats, authorities, particularly the police force, need to increase the number of information technology (IT) experts within their ranks.
Stressing the importance of enhancing the skills of existing officers to keep pace with technological advancements, he said they should be sent overseas to participate in related programmes and should collaborate with international police forces to gain insights into how they handle AI-linked cases, including those involving deepfake technology.
Bukit Aman Commercial Crime Investigation Department (CCID) director Datuk Seri Ramli Mohamed Yoosuf told Bernama last month the department anticipates a surge in police reports linked to AI due to the widespread adoption of the technology in Malaysia and globally.
He said AI can be misused and solving such cases could pose a great challenge to CCID. “Our investigative technology must be enhanced to keep up with the development of AI,” he added.
Bank Negara Malaysia (BNM) was quoted in a media report as saying that it too views AI technology as one of the 'new tools' that online fraudsters are likely to employ in the future.
Meanwhile, Universiti Tun Hussein Onn Malaysia Department of Information Security and Web Technology senior lecturer Dr Noor Zuraidin Mohd Safar suggested comprehensive collaboration among stakeholders as an early preventive measure against AI-related crimes.
“AI technology will constantly evolve and Malaysia must be prepared for this. Stakeholders including the police, cybersecurity (authorities) and BNM must have expertise in AI technology with their focus on e-commerce and e-banking," he told Bernama.
He also suggested that stakeholders leverage AI technology to prevent crimes.
“AI also has the capability to serve as a preventive tool as it can identify suspicious data,” he said, adding that a system should be developed to empower law enforcement agencies to detect activities perceived as, or at risk of being, fraudulent.
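The idea of flagging suspicious data can be illustrated with a deliberately simple sketch: a rule that marks a transaction as suspicious when its amount deviates sharply from an account's past behaviour. The function name, threshold and sample figures below are illustrative assumptions, not part of any system described in this article; real fraud-detection models are far more sophisticated.

```python
import statistics

def flag_suspicious(history, new_amount, threshold=3.0):
    """Flag a transaction whose amount deviates strongly from the
    account's historical pattern, using a simple z-score rule.

    history: list of past transaction amounts for the account.
    new_amount: the incoming transaction to check.
    threshold: how many standard deviations away counts as suspicious.
    """
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        # No variation in past behaviour: any different amount stands out.
        return new_amount != mean
    z_score = abs(new_amount - mean) / stdev
    return z_score > threshold

# Hypothetical account history of routine transfers.
history = [120.0, 95.0, 150.0, 110.0, 130.0]
print(flag_suspicious(history, 125.0))    # a typical amount
print(flag_suspicious(history, 50000.0))  # far outside the usual pattern
```

In practice, such a check would be one small signal among many (device, location, timing, recipient), but it shows how a system can surface activity "perceived as, or at risk of being, fraudulent" for human review.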
For this, the government must be prepared to invest in developing a secure cybersecurity system, he said.
Noor Zuraidin also proposed that existing laws related to technology and crime be amended to align with current developments.
Observing that current legislation may lack bite in addressing crimes involving AI, he said improvements are necessary, particularly in terms of personal data protection and the misuse of AI technology.
“For me, this is crucial in ensuring society is protected from those who misuse this technology and ensuring justice for victims who have been deceived with AI technology," he said.
He also suggested that stakeholders collaborate with industry experts to curb the leakage of personal data.
Sharing tips to avoid falling victim to AI-related scams, Noor Zuraidin advised the public to ensure the person contacting them is legitimate.
“Verify the caller's identity, inquire about information such as staff number, landline number, address and so on especially if the caller claims to be from a bank. If in doubt, terminate the call.
“Most importantly, cultivate a sceptical attitude when verifying information provided by the caller,” he said.
He also reminded the public to create unique usernames and passwords to make it difficult for criminals to hack their bank accounts.
“In the meantime, authorities must consistently provide awareness about financial crimes involving AI to the public so that they become more vigilant,” he added.
Translated by Rema Nambiar