When you ask an AI to generate code, build an app, or create a dApp, the response is driven primarily by the details of your prompt: how you define the instructions. If security isn't explicitly mentioned or emphasized, the AI will prioritize functionality, efficiency, or whatever else you specified, potentially overlooking or under-emphasizing security vulnerabilities.
This isn't malice. A model follows your instructions and the patterns it has seen in training data, without assuming unstated requirements. If your prompt focuses on functionality and never mentions security, the AI simply won't prioritize protections against critical attack surfaces. The result? Code that works on the surface but is vulnerable under real-world conditions.
A simple prompt like:
“Write a minimal login system for my app.”
…might produce something that compiles and launches, yet easily skips or mishandles fundamental safeguards: salted password hashing, robust authorization, input validation, brute-force and rate-limiting protections, and guards against the most dangerous live attacks organizations see today.
That’s not because the model is malicious — it’s because security is not implicitly assumed. A model generates what you ask for, not what you didn’t ask for, and the usual “security defaults” many developers take for granted aren’t in its decision tree unless you force them into the prompt.
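To make that concrete, here is a minimal sketch, in Python with only the standard library, of two safeguards a bare "write me a login system" prompt rarely yields unasked: salted key-stretching for stored passwords and constant-time verification. All names and the storage layout are illustrative assumptions, not a production design.

```python
import hashlib
import hmac
import os

def hash_password(password: str, *, iterations: int = 600_000) -> bytes:
    """Derive a salted PBKDF2-HMAC-SHA256 key; store salt alongside it."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt + digest  # 16-byte salt + 32-byte derived key

def verify_password(password: str, stored: bytes, *, iterations: int = 600_000) -> bool:
    salt, expected = stored[:16], stored[16:]
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # compare_digest avoids leaking a timing side channel
    return hmac.compare_digest(digest, expected)

record = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", record))  # True
print(verify_password("guess", record))                          # False
```

A naive generation will often store plaintext or a single unsalted hash and compare with `==`; the difference above is exactly the kind of requirement that must appear in the prompt to appear in the output.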
To understand why this matters, let’s look at actual attack categories that modern systems face — and then connect back to how AI‑generated code can miss them if security is not explicitly required.
The OWASP Top 10, a globally recognized industry standard, has been updated for 2025 to reflect the most frequently exploited and high-impact vulnerabilities observed in live web applications. These are not academic concepts but categories tied to major breaches:
2025 OWASP Web Application Top Risks
Source: https://owasp.org/Top10/2025/
These categories align with actual breach patterns — for example, misconfigured cloud storage leaving customer data exposed, or supply‑chain compromise via malicious NPM packages.
If you don’t prompt your AI to consider access control, dependency validation, cryptography, logging, and misconfiguration checks, the generated code may simply not include them.
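Broken access control, the perennial number-one web category, is a good example of a check that is trivial to write but absent unless requested. A deny-by-default permission lookup can be sketched like this (the roles, actions, and resource names are invented for illustration):

```python
# Deny-by-default authorization: anything not explicitly granted is refused.
ROLE_PERMISSIONS = {
    "admin":  {"read", "write", "delete"},
    "editor": {"read", "write"},
    "viewer": {"read"},
}

def is_allowed(role: str, action: str) -> bool:
    # Unknown roles or actions fall through to an empty set and return
    # False, rather than raising an error or silently permitting.
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("editor", "write")
assert not is_allowed("viewer", "delete")
assert not is_allowed("intern", "read")  # unknown role is denied
```

The design choice worth prompting for is the default: generated code frequently checks the happy path ("is this the admin?") and forgets that every unlisted combination must fail closed.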
Large language models themselves introduce an entirely different class of risks. OWASP's Top 10 for LLM Applications reflects threats we simply didn't consider five years ago:
OWASP Top 10 for LLM Applications
Source: https://genai.owasp.org/llm-top-10/
For example, if your app relies on an LLM to generate SQL or JSON without validating that output, you’re essentially introducing unfiltered user input into your backend — a recipe for real exploits. Prompt injection alone can force models to ignore constraints or output unsafe instructions.
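One defensive pattern, sketched here in Python against an assumed `users` table: never execute SQL the model wrote. Instead, have it emit a constrained JSON query description, validate that against an allow-list, and bind values as parameters. The schema, field names, and JSON shape below are assumptions for the example.

```python
import json
import sqlite3

ALLOWED_FIELDS = {"name", "email"}  # allow-list, never a blocklist

def run_llm_query(db: sqlite3.Connection, llm_output: str):
    """Treat LLM output as untrusted input: parse, validate, parameterize."""
    spec = json.loads(llm_output)          # raises on malformed JSON
    field = spec["field"]
    if field not in ALLOWED_FIELDS:
        raise ValueError(f"field not permitted: {field!r}")
    # The value is bound as a parameter, never spliced into the SQL string.
    sql = f"SELECT id FROM users WHERE {field} = ?"
    return db.execute(sql, (spec["value"],)).fetchall()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")
db.execute("INSERT INTO users VALUES (1, 'ada', 'ada@example.com')")

print(run_llm_query(db, '{"field": "name", "value": "ada"}'))  # [(1,)]
try:
    run_llm_query(db, '{"field": "name = name --", "value": "x"}')
except ValueError as e:
    print("rejected:", e)
```

The injection attempt in the last call never reaches the database, because only allow-listed column names can appear in the query text and every value travels through a placeholder.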
In blockchain environments, mistakes are dramatic and permanent. The OWASP Smart Contract Top 10 (2026) highlights categories that have repeatedly led to major exploitation and financial loss:
OWASP Smart Contract Top 10
Source: https://scs.owasp.org/sctop10/#2026-ranking
Millions have been drained from protocols through simple oversight bugs, and AI-generated Solidity produced without explicit safety prompts can easily replicate these patterns.
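Solidity specifics aside, the best-known oversight, reentrancy, can be simulated in any language. In this Python analogy (the `Vault` class, callback, and amounts are invented for illustration), an external call made before the balance update lets a re-entering caller withdraw more than the vault holds, while the checks-effects-interactions ordering caps the loss:

```python
class Vault:
    def __init__(self, balance: int):
        self.balance = balance

    def withdraw_unsafe(self, amount: int, send) -> None:
        if self.balance >= amount:
            send(amount)                # interaction BEFORE state update
            self.balance -= amount      # attacker has already re-entered

    def withdraw_safe(self, amount: int, send) -> None:
        if self.balance >= amount:
            self.balance -= amount      # effect first (checks-effects-interactions)
            send(amount)                # re-entry now sees the updated balance

def drain(vault: Vault, withdraw) -> int:
    """Attacker whose 'fallback' re-enters withdraw up to three calls deep."""
    stolen = []
    def send(amount):
        stolen.append(amount)
        if len(stolen) < 3:
            withdraw(50, send)          # re-enter before the caller finishes
    withdraw(50, send)
    return sum(stolen)

v1 = Vault(100)
print(drain(v1, v1.withdraw_unsafe), v1.balance)  # 150 -50  (over-drained)
v2 = Vault(100)
print(drain(v2, v2.withdraw_safe), v2.balance)    # 100 0    (loss capped)
```

On-chain, the negative balance corresponds to funds leaving the contract; the ordering of one line is the entire difference, which is precisely why "follow checks-effects-interactions and add reentrancy guards" has to be stated in the prompt.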
Final thought:
No matter how carefully you craft your AI prompts or how secure the generated code appears, every developer and team must treat AI-generated applications as potentially vulnerable. Before going into production, it is essential to commission a reliable security audit, including full penetration testing and a comprehensive review of code, architecture, and dependencies. This isn’t optional — modern attackers exploit even small oversights, and AI-generated code can inadvertently introduce subtle vulnerabilities. A proper audit ensures that your application is resilient, compliant, and safe for users, bridging the gap between functional AI code and truly secure, production-ready software.