
What is the impact of AI on cybercrime and cybersecurity?




AI is becoming increasingly commonplace in businesses and organisations of all sizes, and with its multitude of capabilities, it's little wonder it's so popular. From taking care of those niggling repetitive tasks to opening up a world of learning and creative opportunities, AI can definitely have a positive impact. 

 

But where does AI stand when it comes to cybercrime and cybersecurity? Are those automated tasks you’ve set up opening the doors to cybercriminals? Or can AI actually help you to protect your business from cybercrime? The answer is murkier than you’d expect, as it’s actually a little bit of both. 


How is AI helping cybercriminals?  

AI-driven tools have streamlined cybercrime by automating processes, enabling cybercriminals to conduct more sophisticated attacks than ever before. For example, the use of AI bots in phishing scams is becoming more and more common. These bots scrape publicly available information to craft messages that seem genuine, increasing the likelihood of an individual falling for the scam.  

 

Even if a cybercriminal doesn't use a bot, easy access to tools like ChatGPT means it's simpler than ever for criminals to craft convincing emails without the telltale signs of a phishing attempt, such as spelling mistakes or poor grammar.  

 

Another concern is deepfakes: fake audio, images, and video created by AI. These convincing forgeries can impersonate high-ranking company members or spread false information. Deepfakes erode trust and facilitate social engineering attacks, manipulating perceptions for malicious ends. And it isn't just a concern on a smaller scale either; deepfakes can have extreme impacts. 

 

Vanessa Eyles, Managing Director of the West Midlands Cyber Resilience Centre, says: 

“There is a lot of concern about AI (artificial intelligence) being used for misinformation, which is particularly pressing with elections due here and in the US next year.  

 

We’re already concerned about hallucinations in AI because of the drivel that LLMs (large language models) are fed: imagine the issues if one state looked to use AI to undermine another. There’s a need to show the provenance and trustworthiness of information being circulated. The concept of a watermark across video footage will be increasingly important.” 

 

The risks don’t stop at phishing scams and deepfakes. Cybercriminals are increasingly employing machine learning to analyse vast amounts of data and pinpoint weak spots in software. This insight lets them find areas to exploit, leading to breaches and the theft of valuable or personal data. 


Can AI be used for defending against cyberattacks?

If this information has you running for the hills and preparing to banish AI from your business altogether, don’t panic: there’s a silver lining! AI can actually be used to protect against cybercrime in several different ways: 

 

Accelerated threat detection and response 

AI's prowess in analysing extensive datasets allows for swift identification of anomalies and potential threats, giving incident response a handy boost. Automating tasks like patch management enhances the agility of cybersecurity measures, ensuring timely updates and reducing vulnerabilities. It also means patching doesn’t get pushed to the bottom of a to-do list and then forgotten about completely. 
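To make the idea of anomaly detection a little more concrete, here's a deliberately simple sketch: flagging days with unusual login activity using a z-score threshold. The login figures and the threshold are invented for illustration; real security tools use far richer models than this.

```python
import statistics

def flag_anomalies(daily_logins, threshold=2.0):
    """Return the indices of days whose login count deviates sharply
    from the average, measured as a z-score above `threshold`."""
    mean = statistics.mean(daily_logins)
    stdev = statistics.stdev(daily_logins)
    return [day for day, count in enumerate(daily_logins)
            if abs(count - mean) / stdev > threshold]

# A week of typical activity, then a sudden spike
# (e.g. a credential-stuffing attempt on day 6)
logins = [102, 98, 105, 99, 101, 100, 500]
print(flag_anomalies(logins))  # [6]
```

The point isn't the statistics; it's that a machine can watch every metric, every day, without ever getting bored or distracted, which is exactly where humans tend to slip.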

 

Enhanced accuracy and efficiency 

AI-driven security systems can scan for and identify vulnerabilities more efficiently than traditional methods. They can also spot subtle patterns that human analysis would miss, improving the accuracy of threat detection and strengthening your defences. 

 

Scalability and cost-efficiency 

The scalability of AI-driven solutions optimises resource allocation, economising cybersecurity efforts. For instance, rapid data processing and correlation enable proactive threat mitigation, minimising the impact of attacks. This means you won’t have to shell out a tonne of money to handle the impact of a data breach or deal with client loss from taking a hit to your reputation.  


What are the cons of using AI in cybersecurity? 

Now we’ve taken a look at how AI can benefit your business’s cybersecurity, it’s only fair we keep the view balanced and talk about the risks. These risks include: 


Lack of transparency 

The opacity of AI algorithms presents challenges in comprehending decision-making processes. This lack of transparency hampers improvements and compromises the reliability of security measures, potentially leaving vulnerabilities unaddressed. 

 

Adversarial attacks and manipulation 

Cyber attackers can exploit AI's weaknesses by feeding it manipulated data. This can mislead AI systems, causing them to make incorrect decisions or overlook security threats. Such adversarial attacks can be used to evade detection and compromise systems. 
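As a toy illustration of how manipulated input can mislead a detector, imagine a hypothetical scorer that rates an email by the proportion of suspicious words it contains. An attacker who knows this can dilute the score by padding the same malicious message with harmless filler. Everything here, including the word list and the 0.3 cut-off, is invented for illustration:

```python
def phishing_score(text, suspicious=("urgent", "verify", "password")):
    """Naive detector: the fraction of words that look suspicious."""
    words = text.lower().split()
    hits = sum(word.strip(".,!") in suspicious for word in words)
    return hits / len(words)

malicious = "Urgent! Verify your password now"
padding = "regards team newsletter update schedule meeting notes attached thanks"
evasive = malicious + (" " + padding) * 3  # same scam, plus filler

print(round(phishing_score(malicious), 2))  # 0.6 -- flagged
print(round(phishing_score(evasive), 2))    # 0.09 -- slips under a 0.3 cut-off
```

Real AI systems are far more sophisticated than this, but the principle is the same: if attackers can probe or poison what a model sees, they can steer what it decides.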

 

Dependency risks and overreliance 

Relying heavily on AI in cybersecurity might lead to complacency or a false sense of security. Overreliance on AI could diminish the role of human expertise and oversight, creating a vulnerability if AI systems fail or are bypassed by sophisticated attacks. 

 

Ethical and privacy concerns 

The use of AI in cybersecurity raises ethical concerns, especially surrounding user privacy. AI systems might collect and process vast amounts of personal data, raising questions about data privacy, consent, and the potential for misuse or breaches of sensitive information.


Should AI be used in cybersecurity or not? 

So, here’s the big question: should your business be using AI for cybersecurity or not? We doubt it will come as a surprise when we say it’s all about having a balanced approach! 

 

Taking advantage of AI's capabilities while acknowledging its limitations and potential risks is crucial. Employ it as a supportive tool to assist current work or to support growth; it's currently at its best when complementing human expertise rather than replacing it entirely. Striking a balance between harnessing AI's potential to enhance security and addressing the ethical, transparency, and reliability concerns is key to ensuring you have a solid cybersecurity plan in place. 

 

 

Need some extra help with your organisation’s cyber security? Contact us today to find out how we can help. 

The contents of this website are provided for general information only and are not intended to replace specific professional advice relevant to your situation. The intention of The Cyber Resilience Centre for the West Midlands is to encourage cyber resilience by raising issues and disseminating information on the experiences and initiatives of others.  Articles on the website cannot by their nature be comprehensive and may not reflect most recent legislation, practice, or application to your circumstances. The Cyber Resilience Centre for the West Midlands provides affordable services and Trusted Partners if you need specific support. For specific questions please contact us.

 

The Cyber Resilience Centre for the West Midlands does not accept any responsibility for any loss which may arise from reliance on information or materials published on this document. The Cyber Resilience Centre for the West Midlands is not responsible for the content of external internet sites that link to this site or which are linked from it.
