

Artificial intelligence powers 80% of ransomware attacks, according to a recent study. With the help of AI, attackers can instantly generate deepfakes, launch automated phishing campaigns, and crack passwords.
Unlike traditional malware, which follows static attack patterns, AI-powered malware can adjust its tactics to evade defenses by analyzing security measures and adapting to different environments. Because these AI-driven threats refine their attack strategies in real time, they are harder to detect and more dangerous to networks.
A new era in cybersecurity may have begun with the publication in August of a threat intelligence report by the US artificial intelligence (AI) company Anthropic. According to the report, hackers have been actively using Anthropic's AI tools to carry out espionage and fraud and to create advanced malware.
Anthropic, maker of the chatbot Claude, says hackers exploited its tools “to commit large-scale theft and extortion of personal data.”
According to experts, cybercriminals are using large language models (LLMs) such as GPT-4 and LLaMA 2, along with other open-source tools, to analyze stolen data in a matter of minutes. Tasks that once took days now happen almost instantly.
In an AI-driven social engineering attack, an algorithm first identifies an ideal target. The attacker then creates an online identity and persona to communicate with that target, constructs an attention-grabbing, realistic scenario, and engages the victim with customized messages, recorded audio, or generated video. When such an attack takes place, a deepfake is typically part of the social engineering campaign.
AI-driven phishing attacks use generative AI to produce realistic, highly tailored emails, SMS messages, phone conversations, and social media posts.



