{"id":5995,"date":"2025-06-13T19:43:11","date_gmt":"2025-06-13T17:43:11","guid":{"rendered":"https:\/\/aitrends.center\/ai-generated-code-is-here-to-stay-are-we-less-safe-as-a-result\/"},"modified":"2025-07-24T13:31:19","modified_gmt":"2025-07-24T11:31:19","slug":"ki-generierter-code-wird-sich-durchsetzen-sind-wir-deshalb-weniger-sicher","status":"publish","type":"post","link":"https:\/\/aitrendscenter.eu\/de\/ai-generated-code-is-here-to-stay-are-we-less-safe-as-a-result\/","title":{"rendered":"AI-Generated Code Is Here to Stay. Are We Less Safe as a Result?"},"content":{"rendered":"<p>As 2025 rolls out, the way we create software is evolving fast. Coding is no longer just endless lines and perfect semicolons. Developers now work side by side with AI-powered assistants\u2014think GitHub Copilot, Amazon CodeWhisperer, ChatGPT\u2014to build apps quicker than ever. This shift in workflow, sometimes called \u201cvibe coding,\u201d has raised developer productivity, but it\u2019s also opened fresh doors for security headaches.<\/p>\n<p>Take Sanket Saurav, founder of DeepSource. He\u2019s worried that developers are leaning on AI-generated code without really checking what\u2019s under the hood. His concern? Flaws might slip in undetected, much like the major SolarWinds breach of 2020. Thorough code review and static analysis, he argues, are now more vital than ever to catch risky coding patterns before they become big problems.<\/p>\n<p>AI code generators can miss crucial details. They might misuse code libraries or even hallucinate\u2014dreaming up libraries that don\u2019t exist or suggesting old, dangerous ones. There\u2019s also \u201cslopsquatting,\u201d where attackers publish malicious packages under names similar to those of popular libraries. If an AI pulls in the wrong dependency, it could open the gates to hackers.<\/p>\n<p>Professor Rafael Khoury from Universit\u00e9 du Qu\u00e9bec en Outaouais has been digging into these issues. 
His research shows that, if you don\u2019t give AI enough context, it\u2019s quick to generate insecure code. But he sees hope in blending methods like FLAG (Finding Line Anomalies with Generative AI) with security checks. Essentially, FLAG spots suspect lines in AI-generated code, which teams can then double-check or ask the AI to refine. It\u2019s not a silver bullet, but it makes a dent in the vulnerability problem.<\/p>\n<p>Still, humans can\u2019t just sit back. Kevin Hou, who leads product engineering at Windsurf, believes projects should be broken into smaller, digestible chunks. That makes it easier to review what the AI produces, test more thoroughly, and catch anything odd before it spreads.<\/p>\n<p>Windsurf\u2019s team has seen over 5 billion AI-generated lines and knows it\u2019s easy for quality to slip. To keep their developers informed and engaged, they focus hard on user experience\u2014making tools clear, surfacing context at the right time, and ensuring every code change is easy to understand and review.<\/p>\n<p>In this age of \u201cvibe coding,\u201d speed and convenience are tempting. But it\u2019s critical not to trade away safety. Hallucinated libraries, slopsquatting, and old-fashioned mistakes are real risks, but so are the defenses: smart tools, vigilant review, and a healthy skepticism of what AI suggests.<\/p>\n<p>The bottom line? Treat AI as a team member, not a replacement. Review what it writes. Test it. Trust, but always verify. That\u2019s the best way forward as coding grows smarter\u2014and riskier\u2014every year.<\/p>\n<p>Read the full story at: https:\/\/www.techtarget.com\/searchsecurity\/tip\/Security-risks-of-AI-generated-code-and-how-to-manage-them<\/p>","protected":false},"excerpt":{"rendered":"<p>As 2025 rolls out, the way we create software is evolving fast. Coding is no longer just endless lines and perfect semicolons. 
Developers now work side by side with AI-powered assistants\u2014think GitHub Copilot, Amazon CodeWhisperer, ChatGPT\u2014to build apps quicker than ever. This shift in workflow, sometimes called \u201cvibe coding,\u201d has raised developer productivity, but it\u2019s also opened fresh doors for security headaches. Take Sanket Saurav, founder of DeepSource. He\u2019s worried that developers are leaning on AI-generated code without really checking what\u2019s under the hood. His concern? Flaws might slip in undetected, much like the major SolarWinds breach of 2020. Thorough [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":5996,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[43,47],"tags":[],"class_list":["post-5995","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-agents","category-ai-news","post--single"],"_links":{"self":[{"href":"https:\/\/aitrendscenter.eu\/de\/wp-json\/wp\/v2\/posts\/5995","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aitrendscenter.eu\/de\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/aitrendscenter.eu\/de\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/aitrendscenter.eu\/de\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/aitrendscenter.eu\/de\/wp-json\/wp\/v2\/comments?post=5995"}],"version-history":[{"count":1,"href":"https:\/\/aitrendscenter.eu\/de\/wp-json\/wp\/v2\/posts\/5995\/revisions"}],"predecessor-version":[{"id":6579,"href":"https:\/\/aitrendscenter.eu\/de\/wp-json\/wp\/v2\/posts\/5995\/revisions\/6579"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/aitrendscenter.eu\/de\/wp-json\/wp\/v2\/media\/5996"}],"wp:attachment":[{"href":"https:\/\/aitrendscenter.eu\/de\/wp-json\/wp\/v2\/media?parent=5995"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/aitrendscenter.e
u\/de\/wp-json\/wp\/v2\/categories?post=5995"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/aitrendscenter.eu\/de\/wp-json\/wp\/v2\/tags?post=5995"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}