In early December, the EU reached a finalised agreement on the EU AI Act — its landmark AI regulation. But behind the shiny consensus was a multi-week delay and a 36-hour negotiating frenzy to get the deal over the line. The trilogue negotiations for the EU AI Act broke the record for the longest negotiation between the Council, the Commission, and the Parliament. The reason? Politics.

Many policymakers warn against politicising AI regulation. I worry that by ignoring the political side of the AI sphere, we risk creating a future for AI that is unsafe and unstable.

The politics of AI safety

AI regulation has always been and will always be political. Any proposal with trade-offs will inevitably upset some people more than others. However, ignoring the politics involved means ignoring the trade-offs that come with your proposals. Ignoring politics means justifying any downside as “a means to an end.” An unwillingness to honestly acknowledge the downsides means an inability to find compromise and mitigate concerns. Ignoring politics pushes things further towards an “us vs them” mentality and opens the door to serious damage being done. So let’s not ignore politics but instead embrace it.

To play politics, you must understand the current political landscape. I like to view actors as lying along a spectrum, with concern for openness and growth at one end and concern for risk at the other.

On the risk end, you have those concerned with the more extreme risks from AI, like powerful AI models facilitating bioterrorism, cyberterrorism, or human disempowerment. Crucially, current AI models aren’t yet powerful enough to do this, but there is consensus that the next generation of foundation models may be.

Then, you have those who are more concerned about some of the more moderate, but immediate, risks posed by AI. These include AI models being biased, being used to spread misinformation, or posing novel privacy threats.

On the other end of the spectrum, you have the pro-openness and growth crowd. The moderates here are people who oppose excessive regulation, arguing that the trade-offs for growth and innovation simply aren’t worth it. Then, at the extreme end, you have the techno-optimists. They believe that technology is always net beneficial to society. For them, pursuing technological progress isn’t a means to an end — it’s a goal in and of itself.

This side of the spectrum also broadly believes that open-sourcing powerful AI models will actually make them safer because of increased accountability and oversight, a belief strongly contested by the risk crowd.

Emmett Shear, the former interim CEO of OpenAI, tweeted a political compass a few days before stepping into that role, which offers another way of looking at these groups.

You can see them as split along two axes: whether future foundation AI models will be “equivalent to the internet” or “1,000,000 times as powerful”, and whether that means we should accelerate or decelerate AI progress.

This framing is useful for understanding these groups’ motivations, but I think a single spectrum better shows where prospective political coalitions may form. Imagine instead a simple one-dimensional spectrum with ‘most concerned about risk’ at one end and ‘most concerned about openness and growth’ at the other.

Up until now, most AI safety progress has been a big tent project, with support from both extreme and moderate risk advocates and even some support from the pro-innovation crowd. This is because even though there was disagreement about how much should be done, everyone broadly agreed that something should be done.

It turns out that most of the policy solutions warranted by extreme risks, like increased transparency and interpretability of models, also solve a lot of the moderate risks, so there is ample room for political agreement there.

Governments were also surprisingly eager to get involved. They saw an opportunity to reclaim political control from AI labs, who currently wield outsized political power because they hold all the cards when it comes to both knowing and shaping the future of AI development. This made it easy to convince the pro-innovation crowd, with most governments framing their proposals as “pro-innovation approaches to AI safety.”

In a sense, all progress up to this point has been picking the “low-hanging fruit” of AI safety. This is why it has given the impression of not being political, despite having been political from the start. The problem is that this political coalition is fracturing.

The trade-offs between safety and innovation are shifting, and the pro-growth crowd is getting cold feet. Countries like France and Germany believe this could be their opportunity to catch up to the US on AI. If safety advocates want to keep strengthening AI safety, stabilising their political coalition is crucial. They have to offer something to the key players: the open-source community, tech startups, and big AI labs.

Give and take: what’s on the table?

One solution lies in the Biden administration’s recent executive order on AI: compute thresholds. Remember that although foundation models may pose extreme risks in the future, they aren’t there yet. We only need scrutiny of sufficiently capable foundation models. The problem is that evaluating capabilities is quite difficult. The trick is to look at computing power instead. In general, the more computing operations you spend training a model, the more capable it will be. So the Biden administration picked a threshold of 10²⁶ operations, above which models have to comply with strict reporting and regulatory requirements.
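To make the arithmetic concrete, here is a minimal sketch of how such a threshold check might look. It assumes the common rule of thumb that training compute is roughly 6 times the parameter count times the number of training tokens; that heuristic, and the model sizes below, are illustrative assumptions, not figures from the executive order.

```python
# Minimal sketch of a compute-threshold check.
# Assumption: training compute is approximated with the common
# "6 * parameters * training tokens" rule of thumb; the model sizes
# and token counts below are hypothetical illustrations.

EO_THRESHOLD_FLOPS = 1e26  # the 10^26-operation threshold in the executive order


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 operations per parameter per token."""
    return 6 * n_parameters * n_training_tokens


def must_report(n_parameters: float, n_training_tokens: float) -> bool:
    """True if estimated training compute crosses the reporting threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= EO_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical examples: a startup-scale model vs. a frontier-scale model.
    for name, params, tokens in [
        ("startup-scale model", 7e9, 2e12),    # ~8.4e22 operations, well under the threshold
        ("frontier-scale model", 2e12, 1e13),  # ~1.2e26 operations, over the threshold
    ]:
        flops = estimated_training_flops(params, tokens)
        print(f"{name}: ~{flops:.1e} operations -> report? {must_report(params, tokens)}")
```

The point of the sketch is that, under these assumptions, where you set the threshold determines exactly which class of developer ever has to comply with it.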

Safety advocates can use compute thresholds as a political lever to hold their coalition together. You can pick thresholds such that only the big AI labs (which train the largest models) actually have to comply, leaving room for European startups to catch up. You can also be deliberately liberal and set the thresholds high enough that they won’t kick in for at least a few years, leaving room for growth and innovation now while still providing protection later. Either way, compute thresholds are an important lever that safety advocates can use to balance the competing demands within their coalition.

Another axis to look at is privacy law. Privacy laws like GDPR are an existing regulatory headache for all AI players. If safety advocates want to find support for more safety regulations, they could offer to weaken some privacy laws in return. An example of this could be laws around how long companies or individuals are allowed to hold onto users' data. There’s already plenty of precedent for this — governments have been weakening privacy for years in the name of “safety” from terrorists, so there’s no reason the same argument shouldn’t apply to AI safety.

From a safety perspective, it’s useful to be able to audit companies and ask what data went into training their models. Additionally, the way AI companies use your data differs from the traditional data-privacy model. OpenAI isn’t retraining its model with every chat you have with ChatGPT; that would be woefully inefficient. Rather, it folds new data in through periodic batch updates. The same applies when it needs to remove personal data from a model under GDPR: retraining the entire model every time someone requests deletion simply isn’t feasible, so removals must also be handled in batches. These are areas where privacy laws could be amended or loosened, both to adapt to the realities of AI and to allow for better safety and monitoring.

Privacy law is another policy axis along which to rebuild a political coalition: it can be used as a carrot to appease the big labs. You can go a step further and offer even looser rules for smaller startups and individuals.

One last policy area to look at is antitrust law. Current antitrust law makes it hard for AI labs to coordinate on AI safety. This is another bone you could throw to the AI labs, and one that wouldn’t just be safety-neutral but safety-positive.

With hindsight, we can see that safety advocates were unprepared to play politics when it came to the EU AI Act. That needs to change. There needs to be a wider conversation about the real trade-offs that come with any safety proposal so that we can then work to mitigate those trade-offs while also securing a safer future. Data privacy law, antitrust law, and compute thresholds are just a few of the many levers that can be used to rebuild the AI safety coalition. Safe AI is possible. We just need to weigh up the trade-offs in realistic terms and see the world as it is, rather than as we wish it to be.

This article was written by Alex Petropoulos. Alex is a fellow with Young Voices Europe based between the UK and Greece. He writes about AI and technology policy.