The subject line was urgent: “Immediate Action Required: Unauthorized Data Access Detected.” It looked real: the logo, the sender’s address and the panicked tone all screamed, “click this now.” An analyst at a public safety agency, highly trained to spot the usual red flags, hesitated for only a second before acting. Why? Because the phishing message wasn’t just convincing; it was flawless. It spoke his language, referenced a system his department used and lacked the clumsy grammar of typical phishing attempts. He clicked the malicious link, and just like that, cyberattackers had his username and password, and with them, access to the agency’s systems.
This isn’t a hypothetical scenario. It’s the new reality laid out in the Public Safety Threat Alliance (PSTA) 2025 Artificial Intelligence Threat Landscape report.
Phishing gets an AI upgrade
Artificial intelligence (AI) has significantly lowered the technical barrier to becoming a malicious cyber actor while expanding the scale of malicious activity.
Nearly 44 percent of all AI misuse observed on the dark web involved phishing, making it the most significant threat to public safety agencies. AI tools let cybercriminals produce content that is believable, scalable and adaptive, so it is easier than ever to fool victims. In 2025, phishing overtook every other technique to become the top way cybercriminals gain initial access to public safety networks.
Generative AI has fundamentally reshaped the threat landscape by accelerating the attack tempo, lowering the skill threshold for cybercriminals and amplifying deception. These impacts are now extending into the public safety sector.
Public safety networks are the lifeblood of first responders. They support secure radios for tactical voice communications, phone calls and mission-critical data systems such as Computer-Aided Dispatch (CAD) and Public Safety Answering Points (PSAPs). A successful AI-driven phishing breach can quickly grant adversaries access to these highly sensitive systems. This threatens the ability of police, fire and EMS to coordinate and respond effectively to emergencies. The consequences extend far beyond data loss, potentially impacting emergency response and community well-being.
The commercialization of crime
The dark web has turned AI misuse into a profitable and scalable underground industry. Threat actors are no longer just experimenting; they’re monetizing these capabilities through subscription-based offerings that sell jailbroken AI prompts and self-hosted large language models (LLMs) designed explicitly for generating malware and phishing content.
What defenders must do
The PSTA assesses with high confidence that AI misuse will continue to expand through 2026. Traditional detection methods will struggle to keep pace with social engineering and well-crafted, personalized AI-generated phishing emails that can bypass standard defense mechanisms.
To address these evolving threats, detection methods must adapt. Defenders should emphasize behavioral and contextual detection that flags anomalies in email content and sender behavior, enabling real-time response to active phishing campaigns.
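To make the idea of behavioral and contextual detection concrete, here is a minimal, purely illustrative sketch of the kinds of signals such a system might combine: a first-time sender domain, display-name spoofing of a trusted organization, urgency language and off-domain links. The rule names, weights and thresholds are assumptions for illustration, not the PSTA report’s method or any product’s actual implementation.

```python
import re

# Urgency phrases typical of credential-phishing lures (illustrative list).
URGENT_PHRASES = (
    "immediate action required",
    "unauthorized data access",
    "verify your credentials",
    "account suspended",
)

def contextual_risk_score(sender_domain, display_name, subject, body,
                          known_domains):
    """Return a crude risk score for one email: higher = more suspicious.

    This is a hypothetical heuristic; the weights are arbitrary.
    """
    score = 0
    # Behavioral signal: first-time sender domain for this organization.
    if sender_domain not in known_domains:
        score += 2
    # Contextual signal: display name impersonates a known domain the
    # mail did not actually come from (display-name spoofing).
    name = display_name.lower()
    if (any(d.split(".")[0] in name for d in known_domains)
            and sender_domain not in known_domains):
        score += 3
    # Content signal: urgency language in the subject or body.
    text = (subject + " " + body).lower()
    score += sum(1 for phrase in URGENT_PHRASES if phrase in text)
    # Content signal: embedded links pointing outside trusted domains.
    for url_domain in re.findall(r"https?://([\w.-]+)", body):
        if url_domain not in known_domains:
            score += 1
    return score
```

In practice, an agency would feed scores like this into a policy (quarantine above a threshold, alert analysts below it) and tune the weights against its own mail flow; the value of the behavioral signals is that they do not depend on the clumsy grammar that AI-generated phishing no longer exhibits.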
To mitigate human factors, the report emphasizes that continuous phishing training and situational awareness effectively reduce human error. Training must evolve to keep pace with the sophistication of AI-generated phishing attacks.
Join the fight: access the full report
The information shared here is only the tip of the iceberg. To fully prepare your organization for the coming wave of AI-enhanced threats, you need the complete picture. The Public Safety Threat Alliance’s full analysis and threat intelligence are essential.
The Public Safety Threat Alliance (PSTA) is an Information Sharing and Analysis Organization (ISAO) that serves as a hub for sharing and collaborating on cyber threat intelligence within the global public safety community. Its team actively monitors and evaluates threats to public safety.
