Welcome to the New Era of AI-Driven Development

Snyk Team
May 29, 2025
Artificial intelligence is no longer a future consideration. It’s here — and it’s changing how software is built. Fast.
Enterprise teams are moving beyond AI pilots and proof-of-concepts. They’re rolling out real-world, high-value use cases and doing it at scale. According to IDC forecasting, AI spend will more than double by 2028. At the center of that surge is AI-assisted software development.
AI coding tools are pervasive, yet standardized practices haven’t been established. By 2028, it’s expected that 75% of enterprise software engineers will use them. And by 2030, 95% of code could be AI-generated.
AI tools have accelerated development. But the same speed that boosts productivity also magnifies risk. Traditional “shift left” guardrails weren’t built for AI-assisted coding, and they’re starting to crack. As developers take on more responsibility, security must evolve with them.
Faster coding means faster risk. Old guardrails weren’t built for this pace. To move forward safely, we need a new approach to development and security.
Models trained on flawed code. Many GenAI models train on open source code and publicly available data. These code examples often contain issues, passing vulnerabilities into AI-generated code. A recent study found that at least 48% of code snippets created by AI models included vulnerabilities.
Reliance on open source. Open source isn’t new — but AI makes reuse faster and easier. A report from Anaconda showed that 32% of developers face security issues stemming from open-source AI tools.
Package hallucination opens the door for attacks. AI tools can ‘hallucinate’ fake packages. Attackers can preemptively upload malicious ones under those names. Over 5% of commercial AI-generated code — and 22% from open source models — contained package hallucinations.
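One simple defense is to vet AI-suggested dependencies before installing them. The sketch below is illustrative, not a Snyk product feature: it checks requested package names against an internal allowlist (the names and list here are hypothetical — a real team would source the allowlist from a reviewed lockfile or private registry).

```python
# Minimal sketch of a pre-install guard against hallucinated package names.
# KNOWN_GOOD is a placeholder; in practice it would come from a reviewed
# lockfile or an internal package registry.

KNOWN_GOOD = {"requests", "numpy", "flask"}

def vet_dependencies(requested):
    """Split requested package names into approved and suspect sets.

    Names outside the allowlist aren't necessarily malicious, but they
    warrant human review before installation — a hallucinated name may
    already be squatted on the public registry by an attacker.
    """
    names = {name.strip().lower() for name in requested}
    approved = names & KNOWN_GOOD
    suspect = names - KNOWN_GOOD
    return approved, suspect

# "flask-helpers-pro" is a made-up name standing in for a hallucinated package.
approved, suspect = vet_dependencies(["requests", "flask-helpers-pro"])
print(sorted(approved))
print(sorted(suspect))
```

An allowlist flips the default: instead of trusting any name an AI assistant emits, nothing installs until a human (or policy) has approved it.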
This list isn’t exhaustive, but it shows why security must evolve with AI-driven development.
The evolution is already underway. Developers are shifting from writing line-by-line code to guiding AI agents through high-level, goal-based instructions that generate entire features. A recent survey by Langbase found that software development is the top use case for AI agents.
Waiting to secure your AI development workflows is like waiting to set rules until your teenager moves out. It’s tempting to delay, but once autonomy and independent thought kick in, it’s too late. It’s imperative to create guardrails now, before those platforms evolve into agent-based solutions that move faster, reach further, and offer less visibility.
That’s why we are building AI Trust.
The Snyk AI Trust Platform is developer-first and purpose-built to address AI security requirements across the SDLC — from code generation in the IDE to AI-native application deployment and runtime operations.
Let your devs move fast — without breaking things. Learn more about how to build safely at AI speeds in our eBook, Building AI Trust: Securing Code in the Age of Autonomous Development.
AI is redefining the developer experience
Build AI Trust and empower your team to develop fast and stay aligned with security goals.