AI Writes Code, Humans Must Secure It

2026-03-15
admin

When you ask an AI to generate code, build an app, or create a dApp, the response is driven primarily by the details of your prompt (how you define your instructions). If security isn't explicitly mentioned or emphasized, the AI may prioritize functionality, efficiency, or other specified aspects, potentially overlooking or under-emphasizing security vulnerabilities. This isn't due to malice: AIs are trained to follow the user's instructions closely without assuming unstated requirements.

AI doesn’t intend to be insecure — it follows your instructions and the patterns it’s seen in training data. If your prompt focuses on functionality without specifying security requirements, the AI simply won’t prioritize protections against critical attack surfaces. The result? Code that works on the surface but is vulnerable under real‑world conditions.

A simple prompt like:

“Write a minimal login system for my app.”

…might produce something that compiles and launches, but it can easily skip or mishandle fundamental safeguards such as strong password hashing, robust authorization, input validation, brute‑force protections, or guards against the most dangerous live attacks organizations see today.

That’s not because the model is malicious — it’s because security is not implicitly assumed. A model generates what you ask for, not what you didn’t ask for, and the usual “security defaults” many developers take for granted aren’t in its decision tree unless you force them into the prompt.
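To make this concrete, here is a minimal Python sketch of what those safeguards look like when they are requested explicitly. It is illustrative only: the stores are in‑memory dictionaries, and the lockout policy (five failures in a five‑minute window) and PBKDF2 iteration count are assumed values, not recommendations from any particular standard.

```python
import hashlib
import hmac
import os
import time

# In-memory stores for illustration only; a real app would use a database
# and a shared rate limiter.
_users: dict[str, tuple[bytes, bytes]] = {}   # username -> (salt, password hash)
_failed: dict[str, list[float]] = {}          # username -> recent failure times

MAX_ATTEMPTS = 5       # assumed lockout threshold
WINDOW_SECONDS = 300   # assumed sliding window for counting failures

def _hash_password(password: str, salt: bytes) -> bytes:
    # Salted PBKDF2-HMAC-SHA256 with a high iteration count, instead of
    # storing plaintext or a fast unsalted hash.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

def register(username: str, password: str) -> None:
    salt = os.urandom(16)  # unique salt per user
    _users[username] = (salt, _hash_password(password, salt))

def login(username: str, password: str) -> bool:
    now = time.monotonic()
    recent = [t for t in _failed.get(username, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= MAX_ATTEMPTS:
        return False  # brute-force lockout

    record = _users.get(username)
    if record is None:
        _failed[username] = recent + [now]
        return False  # do not reveal whether the user exists

    salt, stored = record
    # hmac.compare_digest avoids leaking information through timing.
    if hmac.compare_digest(_hash_password(password, salt), stored):
        _failed.pop(username, None)
        return True
    _failed[username] = recent + [now]
    return False
```

None of this is exotic, but every piece of it has to be asked for — a prompt that only says “minimal login system” invites the model to leave all of it out.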

To understand why this matters, let’s look at actual attack categories that modern systems face — and then connect back to how AI‑generated code can miss them if security is not explicitly required.

Web Apps Today: Real‑World Top Risks You Must Protect Against

The OWASP Top 10 — a globally recognized industry standard — has been updated for 2025 to reflect the most frequently exploited and high‑impact vulnerabilities observed in live web applications. These are not academic concepts but categories tied to major breaches:

2025 OWASP Web Application Top Risks

  1. A01: Broken Access Control — unauthorized users accessing functions or data they shouldn’t.
  2. A02: Security Misconfiguration — open servers, default credentials, exposed APIs.
  3. A03: Software Supply Chain Failures — compromised dependencies or build systems.
  4. A04: Cryptographic Failures — improper use of encryption or authentication.
  5. A05: Injection — even in 2025 this remains a major real attack vector.
  6. A06: Insecure Design — lack of threat modelling and secure architecture.
  7. A07: Authentication Failures — flawed login/session mechanics.
  8. A08: Software/Data Integrity Failures — code or content tampering.
  9. A09: Logging & Alerting Failures — gaps that help attackers stay hidden.
  10. A10: Mishandling Exceptional Conditions — poor error handling leading to state leaks or crashes.

Source: https://owasp.org/Top10/2025/

These categories align with actual breach patterns — for example, misconfigured cloud storage leaving customer data exposed, or supply‑chain compromise via malicious NPM packages.

If you don’t prompt your AI to consider access control, dependency validation, cryptography, logging, and misconfiguration checks, the generated code may simply not include them.
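Injection (A05) is a good example of how small the difference between vulnerable and safe code can be. A short, illustrative `sqlite3` sketch — the table and payload are made up for the demo — contrasts string interpolation with parameter binding:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, role TEXT)")
conn.execute("INSERT INTO users (name, role) VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # String interpolation: input like "' OR '1'='1" rewrites the query itself.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name: str):
    # Parameter binding keeps the input as data, never as SQL syntax.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
rows_unsafe = find_user_unsafe(payload)  # injection: returns every row
rows_safe = find_user_safe(payload)      # bound parameter matches nothing
```

A model asked only to “query users by name” can plausibly produce either version; a prompt that requires parameterized queries rules the first one out.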

AI‑Powered Systems & LLMs: A New Attack Surface

Large language models themselves introduce an entirely different class of risks. OWASP’s Top 10 for LLM Applications (2025) reflects threats we simply didn’t consider five years ago:

OWASP Top 10 for LLM Applications

  1. Prompt Injection — crafted inputs manipulate AI behaviour.
  2. Sensitive Data Disclosure — models inadvertently reveal secrets.
  3. Supply Chain Vulnerabilities — compromised models or libraries.
  4. Data/Model Poisoning — malicious training data introduces backdoors.
  5. Improper Output Handling — trusting AI output without validation.
  6. Excessive Agency — granting too much autonomy or privileges to the model.
  7. System Prompt Leakage — internal instructions exposed.
  8. Vector and Embedding Weaknesses — technical gaps in retrieval components.
  9. Misinformation — confidently wrong content with real impact.
  10. Unbounded Consumption — resource exhaustion and denial of service.

Source: https://genai.owasp.org/llm-top-10/

For example, if your app relies on an LLM to generate SQL or JSON without validating that output, you’re essentially introducing unfiltered user input into your backend — a recipe for real exploits. Prompt injection alone can force models to ignore constraints or output unsafe instructions.
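A minimal sketch of Improper Output Handling done right — the two‑field schema here is hypothetical, chosen just for the demo — is to parse and validate model output against an explicit allowlist before the backend ever touches it:

```python
import json

# Allowlist of fields the application expects from the model (assumed schema).
EXPECTED_FIELDS = {"title": str, "priority": int}

def parse_llm_json(raw: str) -> dict:
    """Treat model output as untrusted input: parse, validate, reject extras."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        raise ValueError(f"model returned invalid JSON: {exc}") from None
    if not isinstance(data, dict):
        raise ValueError("expected a JSON object")
    for field, expected_type in EXPECTED_FIELDS.items():
        if not isinstance(data.get(field), expected_type):
            raise ValueError(f"missing or mistyped field: {field!r}")
    extras = set(data) - set(EXPECTED_FIELDS)
    if extras:
        raise ValueError(f"unexpected fields: {extras}")  # block smuggled keys
    return data
```

The same principle applies to model‑generated SQL, shell commands, or URLs: validate against what you expect, never execute what you received.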

Web3 & Smart Contracts: Where Bugs Cost Real Money

In blockchain environments, mistakes are dramatic and permanent. The OWASP Smart Contract Top 10 (2026) highlights categories that have repeatedly led to major exploitation and financial loss:

OWASP Smart Contract Top 10

  • Access Control Vulnerabilities — improper privilege checks.
  • Business Logic Vulnerabilities — flawed economic logic.
  • Price Oracle Manipulation — skewed external data feeds.
  • Flash Loan–Facilitated Attacks — complex abuse of lending logic.
  • Lack of Input Validation — unsafe parameters corrupt state.
  • Unchecked External Calls — reentrancy and state inconsistency.
  • Arithmetic Errors — rounding and logic issues.
  • Reentrancy Attacks — classic exploit vector.
  • Integer Overflow/Underflow — math exploits.
  • Proxy/Upgradeability Vulnerabilities — governance or initialization bugs.

Source: https://scs.owasp.org/sctop10/#2026-ranking

Millions have been drained from protocols with simple oversight bugs, and AI‑generated Solidity without explicit safety prompts can easily replicate these patterns.
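Solidity is the language actually at risk here, but the idea behind the classic reentrancy defense — the checks‑effects‑interactions pattern — can be modeled in a few lines of Python (a toy illustration, not a contract; the “external call” is just a callback). The unsafe withdrawal pays out before zeroing the balance, so a reentrant call gets paid twice:

```python
class Vault:
    """Toy model of a contract holding per-user balances."""

    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def deposit(self, who: str, amount: int) -> None:
        self.balances[who] = self.balances.get(who, 0) + amount

    def withdraw_unsafe(self, who: str, send) -> None:
        if self.balances.get(who, 0) > 0:
            send(self.balances[who])  # interaction BEFORE effect: a reentrant
            self.balances[who] = 0    # callback is paid again from stale state

    def withdraw_safe(self, who: str, send) -> None:
        amount = self.balances.get(who, 0)
        if amount > 0:
            self.balances[who] = 0    # effect first (checks-effects-interactions)
            send(amount)              # then the external interaction
```

In the unsafe version, an attacker whose `send` callback re-enters `withdraw_unsafe` is paid twice from a single balance; the safe version zeroes the balance first, so the reentrant call finds nothing to withdraw. The Solidity equivalent is the same reordering, usually combined with a reentrancy guard.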

Final thought

No matter how carefully you craft your AI prompts or how secure the generated code appears, every developer and team must treat AI-generated applications as potentially vulnerable. Before going into production, it is essential to commission a reliable security audit, including full penetration testing and a comprehensive review of code, architecture, and dependencies. This isn’t optional — modern attackers exploit even small oversights, and AI-generated code can inadvertently introduce subtle vulnerabilities. A proper audit ensures that your application is resilient, compliant, and safe for users, bridging the gap between functional AI code and truly secure, production-ready software.

Join our team

If you're interested in joining our team to assist in researching modern threats across web3, please don't hesitate to reach out to us.
