Adventures in Augmentation

Google CEO says over 25% of new Google code is generated by AI

We've always used tools to build new tools, and developers are using AI to continue that tradition.

Benj Edwards

On Tuesday, Google CEO Sundar Pichai revealed that AI systems now generate more than a quarter of new code for the company's products, with human programmers overseeing the computer-generated contributions. The statement, made during Google's Q3 2024 earnings call, shows how AI tools are already having a sizable impact on software development.

"We're also using AI internally to improve our coding processes, which is boosting productivity and efficiency," Pichai said during the call. "Today, more than a quarter of all new code at Google is generated by AI, then reviewed and accepted by engineers. This helps our engineers do more and move faster."

Google developers aren't the only programmers using AI to assist with coding tasks. It's difficult to get hard numbers, but according to Stack Overflow's 2024 Developer Survey, over 76 percent of all respondents "are using or are planning to use AI tools in their development process this year," with 62 percent actively using them. A 2023 GitHub survey found that 92 percent of US-based software developers are "already using AI coding tools both in and outside of work."

AI-assisted coding first emerged in a big way with GitHub Copilot in 2021, and the feature saw a wide release in June 2022. It was powered by a specialized OpenAI coding model called Codex, trained both to suggest continuations of existing code and to create new code from scratch from English instructions. Since then, AI-based coding has expanded rapidly, with ever-improving offerings from Anthropic, Meta, Google, OpenAI, and Replit.
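As a rough sketch of how that interaction works (a hypothetical illustration, not an actual Copilot transcript), a developer might type only a comment and a function signature, and the model suggests the body:

    # Hypothetical Copilot-style completion (Python). The developer typed
    # only the comment and the "def" line; the body below is the sort of
    # continuation an AI coding model might suggest.

    # Check whether a number is prime.
    def is_prime(n: int) -> bool:
        if n < 2:
            return False
        for i in range(2, int(n**0.5) + 1):
            if n % i == 0:
                return False
        return True

    print([x for x in range(20) if is_prime(x)])  # [2, 3, 5, 7, 11, 13, 17, 19]

Real suggestions vary by model and surrounding context, but the pattern is the same: natural-language intent goes in, candidate code comes out, and the human accepts, edits, or rejects it.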

GitHub Copilot has expanded in capability as well. Just yesterday, GitHub, the Microsoft-owned subsidiary behind the tool, announced that developers will be able to use non-OpenAI models, such as Anthropic's Claude 3.5 Sonnet and Google's Gemini 1.5 Pro, to generate code within the application for the first time.

While some tout the benefits of AI use in coding, the practice has also attracted criticism from those who worry that future software generated partially or largely by AI could become riddled with difficult-to-detect bugs and errors.

According to a 2023 study from Stanford University, developers using AI coding assistants tended to introduce more bugs while paradoxically believing that their code was more secure. This finding was highlighted by Talia Ringer, a professor at the University of Illinois at Urbana-Champaign, who told Wired that "there are probably both benefits and risks involved" with AI-assisted coding, emphasizing that "more code isn't better code."

The only constant is change

While introducing bugs is certainly a risky side effect of AI coding, the history of software development includes other controversial transitions. The move from assembly language to higher-level languages faced resistance from programmers who worried about losing control and efficiency, and the adoption of object-oriented programming in the 1990s drew criticism over code complexity and performance overhead. AI augmentation may simply be the latest shift to meet resistance from the old guard.

"Whether you think coding with AI works today or not doesn’t really matter," posted former Microsoft VP Steven Sinofsky in September. Sinofsky has a personal history of coding going back to the 1970s. "But if you think functional AI helping to code will make humans dumber or isn’t real programming just consider that’s been the argument against every generation of programming tools going back to Fortran."

Strong preferences about "proper" coding practices have circulated widely among developers over the decades, and some of the more extreme positions may seem silly today, especially those concerning quality-of-life improvements that many programmers now take for granted. Daring Fireball's John Gruber replied to Sinofsky's tweet by saying, "I know youngster[s] won’t believe me, but I remember when some programmers argued that syntax coloring in text editors would make people dumber."

Ultimately, all tools augment or enhance human capability. We use tools to build things faster, and we have always used tools to build newer, more complex tools. It's the story of technology itself. Draftsmen laid out the first silicon computer chips on paper, and later engineers designed successive chips on computers that used integrated circuits. Today, electronic design automation (EDA) software assists in the design and simulation of semiconductor chips, and companies like Nvidia are now using AI algorithms to design them.

Does that mean current AI models are capable of generating flawless, high-quality code that developers can just insert and forget? Likely not. For now, skilled humans with experience still need to be in the loop to ensure everything works properly, which seems to be the practice Google's CEO was touting in the earnings call. Like any tool, AI assistance in skilled hands may significantly accelerate a task—and yet a hammer alone cannot build a house.

Benj Edwards, Senior AI Reporter
Benj Edwards is Ars Technica's Senior AI Reporter and founded the site's dedicated AI beat in 2022. He's also a tech historian with almost two decades of experience. In his free time, he writes and records music, collects vintage computers, and enjoys nature. He lives in Raleigh, NC.