CRIME & COURTS

Can You Distinguish Between Reality And Deepfake?

20/02/2025 09:24 AM

By Syed Iylia Hariz Al-Qadri Syed Izman

KUALA LUMPUR, Feb 20 (Bernama) -- Imagine receiving a phone call from someone whose voice sounds exactly like a family member's, urgently requesting money in distress. In reality, however, the voice is the product of artificial intelligence (AI) manipulation.

This is one of the growing threats posed by deepfake technology in Malaysia, said CyberSecurity Malaysia (CSM) chief executive officer Datuk Dr Amirudin Abdul Wahab.

He said fraud cases involving voice, facial and digital identity forgery are becoming increasingly alarming as cybercriminals employ more sophisticated tactics.

One method is "voice cloning," which involves using AI to replicate an individual's voice to deceive victims through voice messages.

Another is "facial re-enactment," which involves superimposing a person's face onto videos to facilitate investment scams or spread false information.

“There have been cases of online romance scams and fake social media accounts created using AI-generated faces.

"These accounts are often used for corporate espionage, fraudulent investments and phishing attacks," he told Bernama recently.

Last year, Bukit Aman Commercial Crime Investigation Department director Datuk Seri Ramli Mohamed Yoosuf revealed that 454 cases of fraud involving deepfake technology were reported, with total losses amounting to RM2.272 million.

Deepfake technology works by analysing and replicating facial features, expressions and voices using artificial neural networks, mainly Generative Adversarial Networks (GANs).

With this technology, AI can seamlessly and accurately manipulate a person's face or voice in videos, making it almost impossible to distinguish from actual footage.
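The adversarial principle behind GANs can be illustrated with a toy sketch: a generator learns to produce samples that a discriminator cannot tell apart from real data. The one-dimensional setup below is purely illustrative (the target distribution, model sizes and learning rate are arbitrary choices); real deepfake systems apply the same idea to faces and voices at vastly larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from a Gaussian with mean 4. The generator
# must learn to produce numbers the discriminator accepts as real.
def real_batch(n=128):
    return rng.normal(4.0, 1.0, n)

# Generator g(z) = a*z + b ; Discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.05

for step in range(2000):
    z = rng.normal(0.0, 1.0, 128)
    x_real, x_fake = real_batch(), a * z + b

    # --- Discriminator step: push D(real) toward 1, D(fake) toward 0 ---
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: push D(fake) toward 1 (fool the discriminator) ---
    d_fake = sigmoid(w * (a * z + b) + c)
    gx = -(1 - d_fake) * w          # gradient of the non-saturating GAN loss
    a -= lr * np.mean(gx * z)
    b -= lr * np.mean(gx)

# The learned offset b should drift toward the real mean of 4,
# i.e. the generator's output becomes hard to tell apart from real data.
print(round(float(b), 1))
```

The two alternating updates are the essence of adversarial training: each side improving against the other is precisely what makes the final forgeries, as the article notes, almost impossible to distinguish from genuine footage.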

"Cybercriminals have exploited this technology to impersonate prominent figures like Tan Sri Dr Noor Hisham Abdullah, Datuk Seri Siti Nurhaliza and Datuk Lee Chong Wei to deceive victims," Amirudin said.

He said CSM has implemented proactive measures to counter this threat, including digital content assessments using advanced technology to detect deepfakes.

He added that CSM's Digital Forensics Department utilises AI and machine learning to detect manipulation of digital content.

"We use specialised software for image and video analysis, along with source verification technology and metadata analysis, to ensure content authenticity.

"By integrating these technologies, CSM can effectively address the threat posed by deepfakes," he said.

Currently, Malaysia has no specific laws regulating deepfake technology.

However, Amirudin said action against deepfake content can be taken under the Communications and Multimedia Act 1998, specifically Section 211, which prohibits distributing obscene, false or threatening content intended to harass others.

"The government is considering enacting specific legislation to regulate AI and deepfake technology," he added.

Meanwhile, CSM has implemented various awareness programmes, including the National Anti-Scam Roadshow and the National ICT Security Discourse (NICTSeD), to educate the public on cyber threats, including deepfakes.

"However, combating deepfakes faces significant challenges, including the rapid advancement of technology, the absence of specific laws and limited public awareness.

"Therefore, the public must stay vigilant and take precautions to protect themselves from becoming victims," he said.

-- BERNAMA

© 2025 BERNAMA
https://bernama.com/en/news.php?id=2394394