Mobile data security reboot for increasing AI-driven threats

As AI reshapes mobile data security, it’s critical to stay up to date with best practices for securing data in AI systems. It should come as no surprise that these cover all the usual suspects: data encryption, digital signatures, data provenance tracking, secure storage, and trusted infrastructure.

To help organisations that use AI systems in their operations navigate this new world, the Australian Signals Directorate (ASD) has released a paper providing essential guidance on securing data. It highlights the importance of data security in upholding the accuracy and integrity of AI outcomes, and discusses the risks that data integrity issues pose during AI development and deployment. There’s also a deep dive into three areas of data security risk in AI systems: the data supply chain, maliciously modified (poisoned) data, and data drift. Be sure to download the paper from ASD; its principles provide a robust foundation for securing AI data and ensuring the reliability and accuracy of AI-driven outcomes.

On the flip side, companies face a steady rise in malicious AI-enabled activity that is growing in both sophistication and frequency. For example, in early 2024, a Hong Kong-based employee at UK engineering firm Arup was duped into transferring £20 million (approximately $41 million) to criminals following a video call with senior management. The thing is, the employee hadn’t been talking to Arup managers at all, but to deepfakes created by artificial intelligence. As Rob Greig, Global Chief Information Officer of Arup, said at the time: “This wasn’t a cyberattack in the usual sense, as no systems were compromised. A better term might be ‘technology-enhanced social engineering’.”

Meanwhile, fast forward to July this year, when it was reported that cybercriminals used voice-based phishing (or ‘vishing’) in a targeted attack on a Qantas call centre in Manila, the Philippines. In this deepfake social-engineering scenario, AI-generated voices impersonating employees were used to trick support staff into handing over access credentials.

 

Strengthen your defences against AI-driven threats 

In the face of these sophisticated new threats, read on to find out what organisations should be doing to secure mobile data from the new generation of AI-enabled threats and cybercriminals.

1.    AI-driven security solutions

The best way to beat AI-driven scams is to use AI-powered cybersecurity tools. These tools use machine learning to identify unusual network behaviour, detect anomalies, and predict potential threats.
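To make the idea concrete, here’s a minimal sketch of statistical anomaly detection using only the Python standard library. The function name, threshold, and login data are all illustrative; real AI-powered security tools apply far more sophisticated machine-learning models across many signals at once.

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Flag values whose z-score exceeds the threshold.

    A toy stand-in for the anomaly detection that commercial
    AI security tools perform at far greater scale.
    """
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [c for c in counts if abs(c - mu) / sigma > threshold]

# Hourly login attempts; the spike at 480 suggests credential stuffing.
hourly_logins = [12, 15, 11, 14, 13, 480, 12, 16]
print(flag_anomalies(hourly_logins))  # → [480]
```

The same pattern (learn a baseline, flag deviations) underpins production tools, which simply replace the z-score with a trained model.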

2.    Employee education 

It’s important to include the dangers of AI-generated phishing emails, deepfakes, and social engineering tactics as part of your security training.

3.    Zero-trust security

With zero-trust security, employees are required to verify their identity every time they try to access your network, providing an extra layer of protection to prevent or contain data breaches.
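One way to picture “verify every time” is short-lived signed tokens that are checked on every request rather than once at login. The sketch below is purely illustrative (the secret, function names, and token format are assumptions, not a production design), but it shows the core zero-trust idea: access expires quickly and must be re-proven.

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # hypothetical server-held key

def issue_token(user, ttl=300):
    """Issue a short-lived signed token; zero trust favours short
    lifetimes so access is re-verified often, not granted once."""
    expiry = str(int(time.time()) + ttl)
    payload = f"{user}:{expiry}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_token(token):
    """Check signature and expiry on every request."""
    try:
        user, expiry, sig = token.rsplit(":", 2)
    except ValueError:
        return False
    payload = f"{user}:{expiry}"
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and int(expiry) > time.time()

token = issue_token("alice")
print(verify_token(token))        # valid, unexpired token → True
print(verify_token(token + "x"))  # tampered token → False
```

In a real deployment this role is played by standards such as OAuth 2.0 access tokens or mutual TLS, enforced at every service boundary.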

4.    Multi-factor authentication

Likewise, multi-factor authentication adds an extra layer of security to business accounts, making it harder for attackers to gain unauthorised access.
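The most common second factor is a time-based one-time password (TOTP), the six-digit code from an authenticator app. As a sketch of how that extra layer works under the hood, here is the RFC 6238 algorithm in standard-library Python, verified against the RFC’s published test vector:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """Compute an RFC 6238 time-based one-time password (SHA-1).

    The code changes every `step` seconds, so a stolen password
    alone is not enough to log in.
    """
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret (ASCII "12345678901234567890") at T=59 seconds.
print(totp(b"12345678901234567890", at=59))  # → 287082
```

Because the code depends on a shared secret and the current time, an attacker who phishes a password still needs the victim’s device within the same 30-second window.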

5.    Regular security audits 

Scanning your systems for vulnerabilities can help identify and fix security gaps before attackers move in. AI can assist in this process by scanning networks for unusual activity and potential weaknesses.
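At its simplest, an audit checks live configuration against a security baseline. The toy rule set below is illustrative only (the setting names and thresholds are assumptions, not a real scanner’s output); dedicated tools cover thousands of such checks, with an AI layer helping to prioritise the findings.

```python
# Protocol versions widely considered too weak for production use.
WEAK_TLS = {"SSLv3", "TLSv1", "TLSv1.1"}

def audit_config(config):
    """Return findings for a (hypothetical) service configuration."""
    findings = []
    if config.get("tls_version") in WEAK_TLS:
        findings.append(f"Weak TLS version: {config['tls_version']}")
    if not config.get("mfa_enabled", False):
        findings.append("Multi-factor authentication disabled")
    if config.get("password_min_length", 0) < 12:
        findings.append("Password minimum length below 12 characters")
    return findings

print(audit_config({"tls_version": "TLSv1",
                    "mfa_enabled": True,
                    "password_min_length": 8}))
```

Running checks like these on a schedule, rather than once a year, is what turns an audit into an ongoing defence.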

 

Assess mobile data security

For a quick check of your mobile data security, assess your risk with our free calculator. It only takes five minutes to flag areas of concern. 

For a more in-depth view of your mobile data security posture, talk to us about conducting a security audit to help identify any potential vulnerabilities or weaknesses in your company’s communication channels and systems.  

>Get in touch

 

Topics: Security