Cybersecurity and AI assistants: when code writes code… with vulnerabilities

Ai coding cyber

AI assistants pose new cybersecurity challenges. Their use in code development can expose companies to critical risks.

AI-powered coding assistants are transforming software development. But behind the productivity promises lie major security risks, sometimes underestimated by technical teams. In September 2024, ANSSI (France) and BSI (Germany) published a joint report that serves as a wake-up call for developers, CIOs, CISOs, and legal tech professionals.

In a world where AI writes code instead of humans, who checks the code?


🤖 Real opportunities for software development

AI Coding Assistants (Copilot, CodeWhisperer, Tabnine, etc.) already offer:

  • Automatic generation of code blocks,

  • Assistance with documentation, testing, and cross-language translation,

  • Improved developer comfort, especially for repetitive tasks.

But these assistants, based on LLM models, guarantee neither the quality nor the security of the generated code. So, are AI assistants and cybersecurity a risky combination?


🧨 Vulnerabilities often invisible… until exploited

The ANSSI-BSI report identifies several specific threats:

1. Security vulnerabilities in generated code

Even when assistants produce functional code, critical vulnerabilities are often present: SQL injections, authentication errors, poor user input handling…

🧠 « Code generated by AI should be considered untrusted by default », the agencies remind us.

2. Supply chain attacks

LLMs can « hallucinate » non-existent libraries or packages that developers will copy without verification. This opens the door to malicious package attacks (e.g., « typosquatting »).

3. Sensitive data leaks

Prompts may contain API keys, internal configurations, or personal data. This data could be used to retrain the model or be intercepted.

🎯 It is recommended to prohibit the use of personal accounts for these AI assistants in the context of sensitive or industrial projects.

4. Indirect prompt injection

A contaminated source file can inject commands into an AI assistant if it reads the code. This allows an attacker to hijack the AI without its knowledge.


✅ Key recommendations from the ANSSI-BSI report

Here are the best practices identified by the authorities:

Measure Objective
🔍 Systematic human review No generated code should be integrated without review.
🛡️ Strengthen security testing (DevSecOps) Productivity gains must be offset by enhanced audits.
☁️ Strict control of cloud tools Use enterprise accounts with clear contractual policies.
🧼 Prompt hygiene Never include credentials or confidential information.

⚖️ What are the legal implications?

Companies using these tools must integrate their use into their risk analysis (a GDPR and NIS2 obligation), especially when personal or sensitive data is involved.

🔒 The use of an AI assistant in a software project becomes a governance decision to be documented in the processing register or security policy.


📌 Key takeaways

AI assistants are useful… but not harmless. Used without safeguards, they can introduce security vulnerabilities, facilitate attacks, or generate compliance violations.

Companies must treat them like any other sensitive engineering tool: with procedures, documentation, testing, and human review.

Source: https://cyber.gouv.fr/en/actualites/anssi-and-bsi-publish-their-security-recommendations-regarding-ai-programming-assistants

Learn more: https://lawgitech.eu/category/intelligence-artificielle/

En savoir plus sur Lawgitech

Abonnez-vous pour poursuivre la lecture et avoir accès à l’ensemble des archives.

Poursuivre la lecture