12 Comments
Feb 9 · Liked by Dean W. Ball

I invest in European AI startups, and the recent EU guidelines have European AI companies all contemplating (and most likely making) a move to the US before they have to comply with them. It is a lead-pipe cinch that a regulation like this will push companies someplace without such regulations (most likely Dubai or Singapore). So such regulations are beyond stupid: even if the goal is to (attempt to) provide for safer AI development, they will ensure the opposite.

This may be a bad bill, but that doesn't mean there aren't concerns about increasingly powerful AI models.

What's your positive vision for regulation here? How would you propose society take steps to preclude dangerous AI from being developed and/or being misused?

Author · May 7

Great question! While articulating a positive vision was not the point of this piece, it is the broader goal of my Substack. I think working with industry and scientists to develop standards and protocols (for media validation/authentication, for DNA synthesis equipment, for AI models themselves, for agent-to-agent communication) is first on the list. Cybersecurity defenses are also essential for key government services; there is a great deal of public-private collaboration that should happen (and is happening) in this area. Finally, governments should integrate AI into their own operations so that they can better understand the technology, improve government services, and be ready to scale law enforcement responses if certain malicious AI-enabled behavior proliferates.

That’s just the start, but there is definitely much more.

Author

Ultimately, though, there is probably no such thing as "precluding dangerous AI from being developed." We don't control the world. Instead, in my mind, policy should be focused on making society more resilient. For biorisk, for example, an AI model that can "make bioweapons" is not a coherent concept: making a bioweapon ultimately requires manufacturing, and that is highly nontrivial. There are many bottlenecks in that manufacturing process that we can police more aggressively (and the federal government is starting to do so).

So I think it’s about having a very precise and grounded risk model and then countering those risks in the most realistic way possible. Because of the serious implications associated with policing the distribution of software on the Internet (global surveillance of digital communication being just one), the AI model itself is rarely the most productive or efficient thing to target.

Unless we regulate it, AI is going to get to a point where a user can say "AI, tell me how to produce anthrax, give me a detailed plan to buy the machines and materials I need", and the user will get back an actionable plan. This capability is almost certainly less than 3 years away.

This is concerning, and I want us to take regulatory steps to curtail these capabilities. I agree with you that we should more aggressively regulate bio labs and bio-manufacturing services, which seem like a big source of potential harm, but IMO the best regulatory approach needs to be multi-faceted.

This isn't just about bioweapons, either. I would really rather not have an open-source GPT-5 that's happy to tell users how to get away with murder.

I work in the AI industry in California, for a smaller company that wants to compete with OpenAI and Anthropic, so I'm very sensitive to concerns that regulation will hamper smaller companies while entrenching larger ones. But it costs hundreds of millions of dollars today to train models that are covered by this bill, and any company that can do that can afford to hire a compliance team.

Author

I hear that completely. A major problem is that we all know it won't cost hundreds of millions of dollars in the not-so-distant future to train models of this kind, absent an exogenous shock such as a war in Taiwan or a widespread rise in energy costs. Today's dynamics won't persist forever, and ultimately models of the kind you describe are going to exist in the world.

My guess is someone with the wherewithal and resources to buy all the equipment needed to make anthrax probably does not need GPT-n to tell them how to do it. I live in Washington DC; people get away with murder here on a weekly basis. They don't need GPT-5 to tell them how to do that, either.

I do hear your broader point. I think the world you describe, where AI models can help people do dangerous things, is an inevitable one. I also think that knowing how to do a dangerous thing is only one small part of actually achieving it; this is something autodidactic, reasonably intelligent, terminally online people (almost everyone who reads or writes AI-focused Substacks, me included) tend to underrate. But in general, it's true that various kinds of bad behavior will become more achievable for people who are so inclined. The goal we should all have is to keep up: to ensure that AI can be used for defense more than it can be used for offense. I do not see how 1047 helps in that regard.

The good news is when all the AI developers flee California, the grid will stay up a little longer before it collapses.

Like you, I admire Scott Wiener's leadership on pro-housing policy. Unfortunately, he's obviously listening to the wrong people on this one and, like many on the left, is instinctively pro-regulation despite what regulation has done to the housing market.

I will also note that in long-standing Sacramento lingo, a legislator is not said to "sponsor" a bill but to "carry" it, the implication always being that he or she is doing the work of interest groups.

The bill was introduced, not approved?

Author

Yes, that’s right. It’s being considered by the California legislature.

The Beach Boys haven't had a hit in years, and neither has California.

OpenAI lobbyists did a terrific job drafting this bill.
