AI Third-Party Risk: How OpenAI Tools Are Changing Cybersecurity (2025 Insight)

AI third-party risk in modern software supply chains

AI third-party risk is becoming the new frontier in cybersecurity. As developers rush to adopt AI-assisted coding tools and open-source components, new vulnerabilities are emerging — often faster than defense teams can react. The shift from open-source convenience to AI-generated code is transforming software development, but it is also introducing unseen risks across the modern supply chain.


The Rapid Rise of AI Third-Party Risk

The digital mantra to “move fast and break things” has evolved into “build fast with AI.” Yet, speed introduces exposure. AI-enabled coding assistants now contribute to codebases worldwide — but they also hallucinate incorrect or non-existent dependencies. These phantom libraries are being exploited in a new type of threat called slopsquatting.

“AI hallucinations are no longer harmless quirks; they’re attack vectors,” warns Dr. Lena Kwon, cybersecurity researcher at Stanford University. “Malicious actors are leveraging AI systems’ mistakes to introduce poisoned software packages into trusted environments.”

A recent study by the University of Illinois Urbana-Champaign (2024) found that 19% of software packages recommended by AI models don’t exist, illustrating how deep this risk runs (Source).


Case Studies: From SolarWinds to Slopsquatting

Third-party risk has long challenged cybersecurity teams. The SolarWinds breach (2020) showed how a single corrupted update could provide a nation-state group with access to 18,000 organizations, including U.S. federal agencies. Similarly, the Log4Shell vulnerability (2021) demonstrated the catastrophic downstream effects of open-source dependencies (CISA Report).

“Every link in the supply chain can be a point of failure,” says Michael Torres, lead analyst at CyberEdge Group. “AI has simply multiplied those links by orders of magnitude.”

Now, attackers exploit the hallucination behavior of AI tools. When an AI assistant invents a package name that doesn’t exist, an opportunist can publish malware under that name. Once downloaded — often thousands of times — it quietly infiltrates production systems, as seen with the ccxt-mexc-futures package discovered on PyPI (Armis Labs Report).
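The defense against this attack pattern can be sketched in a few lines: before a dependency is installed, its name is checked against a verified registry snapshot, and anything unknown is flagged for review. This is a minimal illustration, not a production tool — the local `KNOWN_PACKAGES` set stands in for a real index lookup (e.g. against the PyPI JSON API), and the package names are examples drawn from this article.

```python
# Minimal sketch: flag dependency names that are absent from a verified
# registry snapshot. A real implementation would query the package index
# itself; here a local set stands in so the example is self-contained.

KNOWN_PACKAGES = {"requests", "numpy", "ccxt"}  # illustrative snapshot

def flag_suspect_deps(requirements: list[str]) -> list[str]:
    """Return dependency names not present in the verified snapshot."""
    return [name for name in requirements if name.lower() not in KNOWN_PACKAGES]

# An AI assistant might emit a plausible-sounding but non-existent name:
deps = ["requests", "ccxt-mexc-futures"]
print(flag_suspect_deps(deps))  # → ['ccxt-mexc-futures']
```

A check like this catches the slopsquatting window before it opens: a hallucinated name that no legitimate maintainer has published is exactly the name an attacker will register.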


The Shift Left Imperative

To counter AI third-party risk, both DevOps and cybersecurity must act “left of boom.” Proactive risk detection must begin before deployment, not after compromise.

“Visibility is the new firewall,” argues Rina Patel, CTO of SecureStack Innovations. “We cannot defend what we cannot see — every dependency, every AI-generated line of code must be traceable.”

New methodologies like Software Bills of Materials (SBOMs), mandated by U.S. government guidance in 2023 (NTIA Framework), have become essential. They catalog every component of an application, enabling rapid response when a vulnerability surfaces.
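The "rapid response" an SBOM enables is essentially a lookup: when an advisory names a component, you search the inventory for it. The sketch below, under the assumption of a CycloneDX-style JSON SBOM with a fabricated two-component inventory, shows that lookup; real SBOMs are generated by build tooling, not written by hand.

```python
import json

# Illustrative CycloneDX-style SBOM snippet (not from a real build).
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"name": "log4j-core", "version": "2.14.1"},
    {"name": "requests", "version": "2.31.0"}
  ]
}
"""

def find_component(sbom: dict, name: str) -> list[dict]:
    """Return all SBOM components matching the given name."""
    return [c for c in sbom.get("components", []) if c["name"] == name]

# When an advisory lands (e.g. for log4j-core), the SBOM answers
# "are we exposed?" in one pass instead of a manual code audit.
sbom = json.loads(sbom_json)
hits = find_component(sbom, "log4j-core")
for c in hits:
    print(f"{c['name']} {c['version']}: compare against advisory versions")
```

This is why SBOMs mattered so much during Log4Shell: organizations with a component inventory could answer the exposure question in minutes; those without one spent weeks grepping builds.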

Static Application Security Testing (SAST), Dynamic Application Security Testing (DAST), and AI-enabled anomaly detection are now crucial layers of defense. Organizations are also incorporating AI hallucination filters — systems that cross-verify AI outputs against verified package registries before inclusion in code.


Human-in-the-Loop Security

While AI can accelerate development, human oversight remains essential. A software engineer cannot blindly trust an AI-generated dependency tree. Instead, developers must integrate multi-factor authentication, input sanitization, and manual review directly into their workflows.
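One concrete way to put a human in the loop is a review gate on dependency changes: diff the manifest in a proposed change against the base branch, and require sign-off for any addition before it merges. The sketch below is a hypothetical illustration of that gate; the manifests and package names are made up, and real pipelines would wire this into CI.

```python
# Minimal sketch of a dependency review gate: any package added in a
# proposed change must be surfaced for human sign-off. The manifests
# below are illustrative assumptions, not a real project's files.

def new_dependencies(base: list[str], proposed: list[str]) -> set[str]:
    """Dependencies present in the proposed manifest but not the base."""
    return set(proposed) - set(base)

base = ["requests==2.31.0", "numpy==1.26.4"]
proposed = ["requests==2.31.0", "numpy==1.26.4", "ccxt-mexc-futures==0.1.0"]

added = new_dependencies(base, proposed)
if added:
    print(f"New dependencies require manual review: {sorted(added)}")
```

The point is not the set arithmetic but the workflow: an AI assistant can propose a dependency, but a person decides whether it enters the build.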

As Dr. Andrea Koenig, Director of the MIT AI Risk Lab, notes: “AI doesn’t yet understand intent. Security must still come from the human developer who questions what’s being built — and why.”

By merging human review with automated visibility, teams can secure their codebases without sacrificing innovation.


Key Takeaways

  • AI third-party risk is redefining supply chain security in 2025.

  • Slopsquatting exploits AI-generated hallucinations of non-existent libraries.

  • Visibility via SBOMs, SAST, and DAST tools is essential.

  • Human oversight and secure coding practices remain irreplaceable.

  • The solution lies in AI-augmented, human-driven cybersecurity.


References

  1. CISA Log4J Vulnerability Guidance

  2. Armis Labs Q3 2025 Report

  3. University of Illinois AI Hallucination Research (2024)

  4. NTIA Software Bill of Materials Framework

  5. GitHub AI Coding Trends Survey (2025)

  6. SecurityWeek: Shai-Hulud Supply Chain Attack Report
