According to researchers, hijacked AI chatbots, manipulated with custom jailbreak techniques to bypass content restrictions, can quickly descend into dark, troubling scenarios, including child sexual exploitation and depictions of rape.
Experts at security firm Permiso Security have noted a sharp rise in attacks on generative artificial intelligence (AI) infrastructure, specifically targeting platforms such as Amazon Web Services (AWS) Bedrock.
These attacks have surged in the past six months, typically following accidental leaks of cloud credentials, often found in publicly accessible code repositories such as GitHub.
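Leaked AWS credentials are easy prey because access key IDs follow a well-known format, so both secret scanners and attackers can trawl public repositories for them with a simple pattern match. A minimal sketch of that idea (real scanners check many more credential formats and also validate candidate keys):

```python
import re

# AWS access key IDs have a recognisable shape: a four-letter prefix
# such as "AKIA" (long-term keys) or "ASIA" (temporary keys) followed
# by 16 uppercase alphanumeric characters. This is a simplified
# pattern for illustration only.
AWS_ACCESS_KEY_RE = re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b")

def find_leaked_keys(text: str) -> list[str]:
    """Return substrings of `text` that look like AWS access key IDs."""
    return AWS_ACCESS_KEY_RE.findall(text)

# Example using AWS's own documented placeholder key:
snippet = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_leaked_keys(snippet))  # ['AKIAIOSFODNN7EXAMPLE']
```

This is essentially what repository-scanning bots automate at scale, which is why a key pushed to a public GitHub repository can be abused within minutes.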
Permiso's investigation into compromised AWS accounts revealed that attackers used stolen credentials to interact with the large language models (LLMs) hosted on Bedrock.
Alarmingly, none of the affected organisations had enabled logging—a feature that is disabled by default—leaving them blind to the activities carried out using their compromised systems.
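Because Bedrock's model-invocation logging is off by default, it has to be switched on deliberately. As a rough sketch, the configuration payload for Bedrock's logging API might look like the following; the field names reflect my understanding of the `PutModelInvocationLoggingConfiguration` API and should be verified against the current AWS documentation, and the bucket name is a hypothetical placeholder:

```python
import json

# Sketch of a Bedrock model-invocation logging configuration.
# Field names are an assumption based on the AWS Bedrock API;
# "my-bedrock-logs" is a hypothetical S3 bucket name.
logging_config = {
    "loggingConfig": {
        "s3Config": {
            "bucketName": "my-bedrock-logs",
            "keyPrefix": "bedrock/invocations",
        },
        # Capture the prompt/response text itself, which is what let
        # Permiso see how their decoy key was being abused.
        "textDataDeliveryEnabled": True,
        "imageDataDeliveryEnabled": True,
        "embeddingDataDeliveryEnabled": True,
    }
}

# With boto3 this would be applied roughly as:
#   boto3.client("bedrock").put_model_invocation_logging_configuration(
#       **logging_config)
print(json.dumps(logging_config, indent=2))
```

Without something like this in place, an organisation whose credentials are stolen has no record of what prompts were sent through its account.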
Permiso researchers intentionally leaked a test AWS key on GitHub, this time with logging enabled, allowing them to monitor how attackers would misuse the access.
Within minutes, their decoy key was being used to power a service offering AI-powered sexual chat.
According to their report: "After reviewing the prompts and responses, it became clear that the attacker was hosting an AI roleplaying service that leverages common jailbreak techniques to get the models to accept and respond with content that would normally be blocked."
"Almost all of the roleplaying was sexual, with some of the content straying into darker topics such as child sexual abuse," the report continued. In just two days, Permiso recorded more than 75,000 successful AI model interactions, most involving explicit content.
This revelation highlights a growing and disturbing trend in the misuse of AI technology, raising urgent questions about the security practices around cloud credentials and the safeguards needed to prevent such exploitation.