Artificial intelligence: When machine learning becomes the enemy

Artificial intelligence and machine learning are turning out to be a new, lucrative business field not only for companies, but also for hackers.

Image: © Depositphotos, agsandrew

In the future, technologies based on artificial intelligence (AI) will increasingly be used to combat fraud, evaluate business processes in companies even more effectively, optimize testing procedures and develop new solutions to existing problems.

What sounds like a digital panacea for humanity and its economy is in reality only one side of the coin. The ever-increasing digitization of industry, critical infrastructure and private applications also creates new risks through the use of artificial intelligence.

The dark side

Companies that use machine learning (ML) technologies are undoubtedly an attractive target for attackers. One conceivable scenario would be for attackers to feed a system with fabricated data records, manipulating it so that its algorithms make wrong decisions or, for example, paralyze monitoring systems, thereby causing considerable damage to the company.

This so-called adversarial machine learning, in which ML-based technologies are deceived by means of manipulated inputs, is a hot topic, especially in the field of cybersecurity. Artificial intelligence arouses desire, and cybercriminals also see the potential to abuse ML for their own ends. They can use it to create and spread new malware or phishing campaigns. But they can also target their victims directly, tap valuable data, find and exploit zero-day vulnerabilities, and hijack infrastructures to abuse them as botnets.
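To make the idea concrete, here is a toy sketch of an adversarial input. It uses entirely synthetic data, hypothetical "telemetry" features and a plain scikit-learn logistic regression (not any vendor's real detection model) to show how a small, targeted change to an input can flip a classifier's verdict from "malicious" to "benign".

```python
# Toy sketch only: synthetic data, hypothetical "telemetry" features and a
# plain scikit-learn model. It is not any real product's detection logic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set: label 0 = benign, 1 = malicious.
benign = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
malicious = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))
X = np.vstack([benign, malicious])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

# Take one malicious sample and nudge it against the model's weight vector
# (for a linear model this is the steepest way to lower its "malicious" score).
x = malicious[0].copy()
w = clf.coef_[0]
step = 0.25 * w / np.linalg.norm(w)

print("verdict before:", clf.predict([x])[0])   # 1 = flagged as malicious
while clf.predict([x])[0] == 1:
    x -= step                                   # small, targeted perturbation
print("verdict after: ", clf.predict([x])[0])   # 0 = now slips through
print("total change:  ", np.round(x - malicious[0], 2))
```

The same principle scales up: an attacker who can probe or poison a model's inputs only needs to find perturbations that look harmless to humans but push the model across its decision boundary.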

Spammers and Emotet open Pandora's Box

Current spam campaigns impressively demonstrate that these thought experiments are no longer far removed from reality. ML-based translation services are used to linguistically polish the messages and send them to a wide variety of countries. Even the currently widespread downloader Emotet abuses ML technologies to target users even more precisely.

It even cleverly bypasses honeypots and botnet trackers. Emotet collects data about its potential victims and sends it to the attackers' C&C servers. Based on the evaluated information, the payload is then "calculated" individually for each target. In addition, the malware is able to distinguish between human operators and the virtual machines mainly used by researchers.
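For illustration, the following minimal sketch shows the kind of well-known environment checks malware families use to tell analysis VMs and honeypots apart from ordinary end-user machines. The heuristics and thresholds are hypothetical and deliberately simple; they are not Emotet's actual logic, but they show what researchers have to mask when building realistic sandboxes.

```python
# Illustrative heuristics only, not Emotet's actual code: a few widely known
# signals that a machine is an analysis VM rather than a normal victim PC.
import os
import platform
import uuid

# MAC address prefixes (OUIs) registered to common virtualization vendors
# (VMware, VirtualBox, Hyper-V).
VM_MAC_PREFIXES = ("00:05:69", "00:0c:29", "00:1c:14", "08:00:27", "00:15:5d")

def looks_like_analysis_vm() -> bool:
    mac = "{:012x}".format(uuid.getnode())
    mac = ":".join(mac[i:i + 2] for i in range(0, 12, 2))
    hostname = platform.node().lower()

    suspicious = 0
    if mac.startswith(VM_MAC_PREFIXES):
        suspicious += 1      # network card belongs to a hypervisor vendor
    if (os.cpu_count() or 0) <= 2:
        suspicious += 1      # sandboxes often expose very few CPU cores
    if any(s in hostname for s in ("sandbox", "virus", "malware", "analysis")):
        suspicious += 1      # tell-tale analysis hostnames
    return suspicious >= 2

if __name__ == "__main__":
    print("looks like an analysis environment:", looks_like_analysis_vm())
```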

The "arms race" continues

Artificial intelligence and machine learning are thus proving to be a lucrative field not only for companies but also for hackers, and with AI the race between cyber security and cyber criminals is entering a new round. Without machine learning, the operators of Emotet would not even be able to implement such elaborate self-defense mechanisms in their malware. This makes it all the more clear that machine learning will not only bring benefits and that the risks will grow over time. For this reason, a variety of highly effective detection modules such as DNA detection, advanced memory scanners, network attack protection and UEFI scanners exist alongside ML-based technologies. Only layered protection combined with human expertise can prevent cyber security from running into a dead end.

Author: Thomas Uhlemann, Security Specialist, ESET

 
