Senate Majority Leader Chuck Schumer, D-N.Y. – flanked by Sens. Mike Rounds, R-S.D., left, and Todd Young, R-Ind. – speaks to reporters after a bipartisan Senate forum on artificial intelligence Wednesday in Washington. Alex Brandon/Associated Press

After months of high-level meetings and discussions, government officials and Big Tech leaders have agreed on one thing about artificial intelligence: The potentially world-changing technology needs some ground rules.

But many in Silicon Valley are skeptical.

A growing group of tech heavyweights – including influential venture capitalists, the CEOs of midsize software companies, and proponents of open-source technology – are pushing back, claiming that laws for AI could snuff out competition in a vital new field.

To these dissenters, the willingness of the biggest players in AI – such as Google, Microsoft, and ChatGPT-maker OpenAI – to embrace regulation is simply a cynical ploy by those firms to lock in their advantages as the current leaders, essentially pulling up the ladder behind them. The dissenters’ concerns ballooned last week when President Biden signed an executive order laying out a plan for the government to develop testing and approval guidelines for AI models – the underlying algorithms that drive “generative” AI tools such as chatbots and image-makers.

“We are still in the very early days of generative AI, and governments mustn’t preemptively anoint winners and shut down competition through the adoption of onerous regulations only the largest firms can satisfy,” said Garry Tan, the head of Y Combinator, the San Francisco startup incubator that helped nurture companies including Airbnb and DoorDash in their earliest days. Tan said the current debate hasn’t sufficiently incorporated the voices of smaller companies, which he believes are key to fostering competition and to engineering the safest ways to harness AI.

Influential AI startups such as Anthropic and OpenAI, meanwhile, are closely tied to Big Tech, having taken huge investments from those larger companies.

“They do not speak for the vast majority of people who have contributed to this industry,” said Martin Casado, a general partner at venture capital firm Andreessen Horowitz, which made early investments in Facebook, Slack, and Lyft. Most AI engineers and entrepreneurs have been watching the regulatory discussions from afar, focusing on their companies instead of trying to lobby politicians, he said.

“Many people want to build, they’re innovators, they’re the silent majority,” Casado said. The executive order showed those people that regulation could come sooner than expected, he said.

Casado’s venture capital firm sent a letter to Biden laying out its concerns. It was signed by prominent AI startup leaders including Replit CEO Amjad Masad and Mistral’s Arthur Mensch, as well as more established tech leaders such as e-commerce company Shopify’s CEO Tobi Lütke, who tweeted “AI regulation is a terrible idea” after the executive order was announced.

Requiring AI companies to report to the government would probably make it more difficult and expensive to develop new tech, Casado said. The order could also affect the open-source community, said Casado and Andrew Ng, an AI research pioneer who helped found Google’s AI lab.

As companies have scrambled to release new AI tools and monetize them since OpenAI released ChatGPT nearly a year ago, governments have wrestled with how to respond. Numerous congressional hearings have tackled the topic, and bills have been proposed in federal and state legislatures. The European Union is revamping AI regulation that has been in the works for several years, and Britain is trying to style itself as an AI-friendly island of innovation, recently hosting a major gathering of government and business leaders to discuss the tech.

Throughout the discussions, representatives from the most powerful AI companies have said openly that the tech presents serious risks, and that they’re eager for regulation. Enacting good regulation could ward off bad outcomes, encourage more investment in AI, and make citizens more comfortable with the quickly advancing tech, the companies have said. At the same time, being a part of the regulatory conversation gives the business leaders influence over what kinds of rules are developed.

“If this technology goes wrong, it can go quite wrong,” OpenAI CEO Sam Altman said at a congressional hearing in May. Lawmakers including Senate Majority Leader Chuck Schumer, D-N.Y., have said they want to regulate AI early, rather than taking a more laid-back approach like the government did with social media.

Days after Biden’s executive order, government representatives attending the U.K.-hosted AI Safety Summit signed a statement supporting the idea of giving governments a role in testing AI models.

“Until now the only people testing the safety of new AI models have been the very companies developing it. We shouldn’t rely on them to mark their homework, as many of them agree,” British Prime Minister Rishi Sunak said in a statement.

Demis Hassabis, CEO of Google’s DeepMind AI division, and Dario Amodei, CEO of Anthropic, both added their support to the statement. Spokespeople for Google and Anthropic did not comment. A spokesperson for Microsoft declined to comment but pointed to congressional testimony in which the company’s vice chair and president, Brad Smith, supported the idea of AI licensing by an independent government body.

A spokesperson for OpenAI declined to comment but referred to a tweet in which Altman said that while he supported regulation for more established companies working on powerful AI models, governments should be careful not to damage competition.

Many of the big breakthroughs in tech over the past few decades have happened because developers made their code freely available as open source. Now, companies are building their own AI tools on top of open-source AI models, without having to pay Google, OpenAI, or Anthropic for access to theirs.

With Big Tech lobbyists working hard in Washington, those companies might be able to influence regulation in their favor – to the detriment of smaller companies, Ng said.

Critics also say the emerging regulatory frameworks are based on exaggerated fears about the risks of AI. Influential AI leaders, including executives from OpenAI, Microsoft, Google, and Anthropic, have warned that AI could pose a risk to human societies on par with pandemics or nuclear weapons. Many prominent AI researchers and businesspeople say the tech is advancing so quickly that it could soon outstrip human intelligence and begin making its own decisions.

Those concerns, which featured prominently at the U.K. AI summit, give governments cover to pass regulations, said Ng, who now regrets not pushing back against “existential risk” fears more strongly. “I just have a hard time seeing how humanity could go extinct,” he said.
