Be on guard: AI puts IT security at risk
Artificial intelligence is massively changing the threat landscape. It offers cyber attackers new ways to target identities and even bypass authentication mechanisms.
Artificial intelligence (AI) is influencing modern society at an unprecedented pace. ChatGPT and other generative AI tools offer many benefits, but they can also be exploited by attackers to cause a great deal of damage. CyberArk Labs has now taken a closer look at this evolving threat landscape to better understand what new AI attack vectors mean for identity security programs and to help develop new defense strategies.
Specifically, CyberArk analyzed three new attack scenarios.
AI Scenario 1: Vishing
Employees have become very wary of phishing emails and know what to look out for. With vishing, or voice phishing, on the other hand, this skepticism is often absent, opening up new opportunities for cyber attackers. AI text-to-speech models make it easy for them to leverage publicly available audio, such as media interviews with CEOs, to impersonate corporate executives. By building trust with their targets, they can obtain credentials and other sensitive information. Such vishing attacks can now be carried out at scale, with text-to-speech output generated automatically in real time. AI-based deepfakes of this kind are already commonplace and very difficult to detect, and AI experts predict that AI-generated content will eventually become indistinguishable from human-generated content.
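To see how little effort the automation step takes, consider this minimal Python sketch using the open-source pyttsx3 library. It produces only a generic synthetic voice, not a cloned one, and the pretext script below is purely hypothetical; real attacks substitute far more convincing voice-cloning models.

```python
# Minimal sketch: automated text-to-speech, the building block of vishing
# at scale. pyttsx3 is a simple offline TTS engine; real attacks use far
# more convincing voice-cloning models. The script text is hypothetical.
import pyttsx3

def speak(script: str) -> None:
    engine = pyttsx3.init()          # initialize the local TTS engine
    engine.setProperty("rate", 160)  # words per minute, for a natural pace
    engine.say(script)               # queue the text for synthesis
    engine.runAndWait()              # block until playback finishes

if __name__ == "__main__":
    # A hypothetical pretext; attackers generate these per target.
    speak("Hello, this is the IT helpdesk. We detected unusual activity "
          "on your account and need you to confirm your one-time code.")
```

A call like this can be wrapped in a loop over a target list, which is exactly what makes vishing cheap to run at scale.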
AI Scenario 2: Biometric authentication
Facial recognition is a proven biometric authentication option for accessing devices and infrastructure. However, attackers can dupe it using generative AI to compromise identities and gain access to an enterprise environment. Generative AI models have been around for years, so why is there so much fuss now? In a word: scale. Today's models can be trained at an incredible scale. GPT-3, for example, has 175 billion parameters, more than a hundred times as many as GPT-2. This exponential growth in parameter counts is what makes realistic fakes possible, including fakes that target face recognition.
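As a quick back-of-the-envelope check of that comparison (assuming GPT-2's largest variant at roughly 1.5 billion parameters):

```python
# Sanity check of the parameter-count comparison in the text.
gpt2_params = 1.5e9   # GPT-2 (largest variant): ~1.5 billion parameters
gpt3_params = 175e9   # GPT-3: 175 billion parameters

ratio = gpt3_params / gpt2_params
print(f"GPT-3 is roughly {ratio:.0f}x larger than GPT-2")  # ~117x
```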
AI Scenario 3: Polymorphic malware
In principle, generative AI can be used to write any type of code, including malware, and in particular polymorphic malware that can bypass security solutions. Polymorphic malware changes its implementation while retaining its original functionality. An attacker could, for example, use ChatGPT to generate an infostealer and continuously modify its code. If the attacker uses the malware to infect an endpoint device and retrieve locally stored session cookies, they could impersonate the device's user, bypass security defenses, and access target systems undetected.
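To make the idea concrete without reproducing anything harmful, here is a benign Python sketch of the polymorphism principle: a harmless doubling function whose source is re-emitted under new identifiers each generation, so every variant hashes differently while behaving identically, which is precisely what defeats signature-based detection.

```python
# Benign illustration of polymorphism: the SOURCE changes each generation,
# the BEHAVIOR does not. Real polymorphic malware applies the same idea to
# malicious payloads to evade signature-based scanners.
import hashlib
import random
import string

TEMPLATE = "def {fn}({arg}):\n    return {arg} * 2\n"

def new_variant() -> str:
    """Emit the same logic under freshly randomized identifiers."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    arg = "".join(random.choices(string.ascii_lowercase, k=8))
    return TEMPLATE.format(fn=name, arg=arg)

if __name__ == "__main__":
    for _ in range(3):
        src = new_variant()
        # Every variant hashes differently, breaking signature matching ...
        print(hashlib.sha256(src.encode()).hexdigest()[:16])
        # ... yet each behaves identically when executed.
        scope = {}
        exec(src, scope)  # define the freshly generated function
        fn = next(v for k, v in scope.items()
                  if not k.startswith("__") and callable(v))
        assert fn(21) == 42  # functionality is unchanged
```

Because each generation's hash is unique, defenses that key on what the code does (behavioral, malware-agnostic controls) fare better than defenses that key on what the code looks like.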
Conclusion
The three AI-based cybersecurity threats show that identities are attackers' primary target, as they provide the most effective way to access confidential systems and data. Consequently, an identity security solution is essential for threat mitigation: it securely authenticates identities and authorizes them with the right permissions, granting structured access to critical resources. Malware-agnostic defense techniques are equally important, meaning companies should also take preventive measures such as enforcing the principle of least privilege and applying conditional access policies to local resources (such as cookie storage) and network resources (such as web applications).
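As one concrete illustration of such a preventive measure, the following minimal sketch tightens a local cookie store's POSIX permissions to owner-only, in the spirit of least privilege. The path is hypothetical (real cookie locations vary by browser), and in practice this would be enforced through endpoint policy rather than an ad-hoc script.

```python
# Sketch: least-privilege hardening of a local cookie store (POSIX only).
# The path below is hypothetical; real cookie locations vary by browser,
# and production environments would enforce this via endpoint policy.
import os
import stat

COOKIE_DB = os.path.expanduser("~/.config/examplebrowser/cookies.sqlite")

def restrict_to_owner(path: str) -> None:
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):         # group/other access?
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)  # owner read/write only
        print(f"Tightened {path} from {oct(mode)} to 0o600")

if __name__ == "__main__":
    if os.path.exists(COOKIE_DB):
        restrict_to_owner(COOKIE_DB)
```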
"AI-based attacks do pose a threat to IT security, but at the same time, AI is also a powerful tool for threat detection and mitigation," emphasized Lavi Lazarovitz, vice president of cyber research at CyberArk Labs. "AI will be an important component in the future to address changes in the threat landscape, improve agility and help organizations stay one step ahead of attackers."