GenAI presents a dual picture of opportunities and threats, and as a global survey has shown, the vast majority of cybersecurity leaders around the world are concerned about its negative effects. A staggering 93% of respondents to CyberArk’s survey said they expected AI to negatively impact them, with AI-powered malware and phishing topping the list. CyberArk’s 2024 Identity Security Threat Landscape Report, which surveyed 2,400 cybersecurity leaders in 18 countries, found that “cyber debt continues to grow with GenAI, the rise of machine identities, and third- and fourth-party risks.”
The report shows that 99 percent of organizations are using artificial intelligence in their cybersecurity initiatives.
However, it also predicted “an increase in the number and sophistication of identity-related attacks as skilled and unskilled criminals also sharpen their capabilities, including AI-powered malware and phishing.”
A total of 93 percent of respondents predict that AI-powered tools will create cyber risk for their organization in FY25.
“Over the last 12 months, 9 out of 10 organizations have been breached by a phishing or vishing attack. These attacks will become harder to detect as artificial intelligence automates and personalizes the attack process,” it said.
Overall, 93 percent of Indian organizations experienced two or more identity-related breaches in 2023.
“As we look to the year ahead, organizations can expect data leaks from compromised AI models, AI-powered malware and phishing,” a CyberArk executive added.
The report also raised concerns about the growing spread of disinformation, particularly in the run-up to upcoming elections.
“Prepare for a pivotal and disturbing year in 2024, as over 4 billion voters prepare to elect leaders in more than 60 countries and AI-powered fake news campaigns become the weapon of choice for anyone seeking to influence election outcomes,” the report stated.
The report also cited machine identities as a major source of risk, “ripe for exploitation by bad actors with the ability to perform large-scale AI-based activities.”