

Artificial intelligence has made stalking faster and harder to detect. Through the misuse of AI techniques such as deepfakes, automated tracking, voice cloning, and AI chatbots, cyberstalking has evolved in 2025 and become more challenging to identify.
Artificial intelligence-powered chatbot applications known as “AI companions” are designed to imitate human speech and replicate interpersonal interaction. Conversations may take place by voice or via text. Some AI companion apps, especially those requiring paid subscriptions, enable sexually explicit conversations.
To degrade, harass, manipulate, and threaten their victims, cyberstalkers use tactics such as:
- making harsh, provocative, or abusive comments online
- sending the victim obscene, threatening, or insulting emails or messages
- exposing the victim’s private details on the internet
- using tracking tools to monitor the victim’s online activity
- threatening or blackmailing the victim through digital technology
The technology known as “generative AI” uses algorithms to generate original text, music, video, and image content. By learning patterns and styles from vast volumes of data, it produces new content consistent with what it has learned. Cyberstalkers frequently exploit generative AI.
Generative AI makes it possible for abusive or threatening posts, comments, or messages to be created automatically and shared rapidly across many platforms. Convincing deepfakes can be generated and used for defamation, harassment, or revenge porn. To build emotional bonds with victims, generative AI can create lifelike profiles and imitate natural conversation. Generative AI and deepfakes can also facilitate online blackmail, or sextortion. Generative AI is likewise used in doxing: collecting and publicly sharing personal data about individuals without their consent.



