The EU and AI

Dragos Tudorache: I think there’s an inevitable Brussels effect to it 


As of this writing, the EU AI Act is all but approved by the European Parliament, but the Act is both a complex and a controversial file that has caused great debate and criticism in the member states, while also being welcomed by industry in Europe. The road to the finished Act has been long, and the final Act should not be viewed as a finished document but as the starting point of a continuous process.

The AI-Portal has spoken to Dragos Tudorache (Renew Europe) of the LIBE Committee (Committee on Civil Liberties, Justice and Home Affairs), who was the rapporteur on the AI Act. In his opinion, the Act represents a balance between consumer and civil protection on the one hand and a focus on innovation and growth in the EU's AI ecosystem on the other.

Dragos Tudorache is a vice-president of the Renew Group in the European Parliament and the Chair of the Special Committee on Artificial Intelligence in the Digital Age (AIDA). He started his career as a judge in his native Romania and, between 2000 and 2005, headed the legal departments of the OSCE (the Organisation for Security and Co-operation in Europe) and UN missions in Kosovo. He held a number of positions in the European Commission before being elected to the European Parliament in 2019.

In his opinion, it is not the AI Act or any other part of the EU's digital agenda that will prevent European companies in the ecosystem from becoming successful or being able to compete in the international market. He also thinks that the Act provides a whole new framework for helping startups and small businesses in the AI ecosystem in particular, and that the EU's broader digital agenda opens up entirely new business opportunities.

The interview has been edited for length and clarity.

Do you know where this risk-based approach came from?

I think it came from previous work that the commission had done, because prior to 2019, the commission had this high-level expert group on AI, and they were trying, just like many other international organizations or multilateral fora, to put forward principles for trustworthy AI. There were other gatherings that were looking at AI before: the OECD, UNESCO, GPAI, there were many fora like this that were starting to look into AI. And the commission had done the same thing through this high-level expert group.

When they realized that principles would no longer do, and that there was a need to go past principles, past voluntary commitment and voluntary compliance, and go to hard law, then they asked the question: what is the best way to approach it? How do you regulate technology? As I always say, it's a very sophisticated piece of technology, but it's still a tool. It's not necessarily the technology itself that needs to be surrounded by rules; it's the use of that technology. How do you develop it? How do you deploy it? That's where you can have risks. And the other element in the original thinking was that we didn't want to over-regulate either. We wanted to make sure that we only intervene if and where necessary, understanding that there is a broad base of AI development out there which has no intersection with people's rights or societal interests and which therefore should remain unregulated.

It should remain free of regulation, because you want creativity, you want growth there. So out of all of this thinking, they realized that the only way to look at uses is to look at the uses that create potential harms, or risks. So that's where the risk-based approach comes from: which contexts of AI are risk-prone?

So from that, they developed the whole doctrine of the risk-based approach and the regulation itself. And I still consider to this day that it's, well, maybe I would not go as far as to say it's the only way, because there may be other jurisdictions that come at it from a different angle. But from the interactions that I've had over the last four years with, not all, but quite a number of countries in the world that are now considering themselves how to govern AI, after analysis everyone arrives at the same conclusion: the only way to attempt to put some rules in place is through risk, because that's what you want to address, the risks of the technology in the way it interacts with society.

But would you say that was the U.S. approach as well? Because the way I read the legislation that has come forth so far, it's more in the sense of principles and standards, really, within the community, and not so much a focus on the risks.

I think there is no distinction between how the U.S. regards the risks of AI and how we regard them. In fact, I've been present at quite a number of the debates in Congress on it. And I was quite amazed that I could close my eyes and literally be in a debate here in the European Parliament. I was there when they heard Sam Altman for the first time last year. I was there in the spring and at a few other moments like this. So, in fact, in terms of understanding both the benefits but also the risks of AI, I don't think there's a major political divergence between us and the Americans. This was evident also in the way we aligned in the TTC discussions on this and so on and so forth. Where there is a distinction is in what you do as a next step. So, okay, if you admit that this is a technology that can bring about risks, and the risks are the same, discrimination, bias, this, that, in more or less the same sectors of activity or society, then the question is, okay, what do you do about it? And how do you address it? And this is where we differ.

But we differ, number one, because we still have very different systems. We have a civil law system where you do need laws that are very specific, that are very prescriptive. For a judge in a European country, without a law that says there is a right to do this or an obligation to do that, there's no room for judicial activism of the kind that you have in the UK or in the US, which are common law systems. So that's one element.

The second element is also a matter of mentality and historic approach. The US has been much more relaxed in terms of intervening in the market. If they see things coming up, which is actually happening right now, if you see the copyright cases that are popping up everywhere linked to AI, coming from authors or content creators, this is going to spark action in Congress. If you ask me what is going to be the first piece of AI regulation coming out of the US Congress, I think it's going to be on the copyright issue. Why? Because society is moving, the market is moving, and courts are starting to intervene and supplement the lack of legislation through their rulings. And then Congress might react. So that's also a difference in the way we approach it. But otherwise, if you unpack the executive order, for example, issued by the Biden administration, and you understand the philosophy in it, many of the things they say are not very different from what we say in the AI Act. Again, it's the how that is different. We chose to go via hard rules. They still chose to go via voluntary commitments. We believe that voluntary commitment has a shortcoming: if we look at the history of social media and misinformation, relying only on companies doing the right thing falls short.

Some would say that the risk-based approach, or rather the definitions of the different levels of risk, could be seen as too loose, and thus not encompassing enough for what the Act wants to achieve. What is your opinion?

There are many others who say they are too tight. As always, beauty is in the eye of the beholder, and there will be views in all directions, which I think is also what makes our work so difficult sometimes, so complex, because you have to find a middle ground between such diverging views. I actually think that we found the right balance in identifying the categories of risk, because one thing that we've tried from the very beginning with this Act is to balance between the interests, rights and values in society that we want to protect from the intersection and interaction with technology, and at the same time not stifling innovation, but on the contrary putting the right framework and the right enablers in place to actually still encourage innovation in Europe, because you don't want, as a result of regulation, to have the effect of everyone wanting to migrate and innovate somewhere else. Through those lenses, if you look at the final product, I think the balance in how we define the risk is meant to achieve exactly that. So I'm saying which are the contexts in which AI is a risk, a potential risk. I'm not unnecessarily labeling everybody as high risk. There's a test to be passed to then indeed be considered high risk and go through compliance. We've done that through self-assessment, again, in order to lower the cost of compliance. It's been constantly a tango, a juggling act of trying to put the right protection in place but also leave room for innovation. And again, I think that balance is there. And last but not least, all of the areas of high risk that I identified are adaptable and flexible, because we realize that we may not have captured everything right now; not may, certainly we have not captured everything that AI will bring to us.

We realize that this needs to remain a very open list, and we gave a mandate in the text for the future regulator, the AI Office together with the Commission, to constantly look at the reality on the ground and adapt. There will be new areas of risk that come up that will have to be put on the list. Others will maybe prove not to be calibrated correctly, and we may need to get tighter or looser in the way we define them. All of that is open to the reality check that will come from how the market and the technology evolve.

But how did you try to differentiate between big actors and small actors? There is a discussion right now about whether the AI Act will actually hinder the smaller actors, because the demands on control and so on are too strict.

Well, again, the first step we've made is to recognize that if, which is what we had in the initial mandate of the parliament, we put all foundation models into one bucket, then we're going to hit the smaller models unnecessarily with too much burden. So that's why we first differentiated. But the question was still, should we not have at least a bare minimum of responsibility for these models? And ultimately, the essence is why. And this is where size doesn't matter. It doesn't matter whether you're a startup or Google. If the model that you produce can actually achieve those effects, maybe you didn't mean them, but they actually produce the risks that you want to avoid, well, then the responsibility is still the same.

And also, because these models will be taken on and applications will be built upon them in myriad directions. So, if something is wrong in the model, if the model starts hallucinating at some point, then the multiplying effect of that hallucination is enormous, because these models are at the heart of many other systems and applications that are being built downstream. So that's why we said there has to be a minimum set of responsibilities for the ones that are creating these models, whether they are big or small. We kept this set, in our view, as much to the minimum as possible: obligations towards the downstream system developers and application developers that you work with. You have to pass on certain data to them for them to then comply later on, but also some basic transparency obligations and documentation obligations when you release these models on the market. Are they too heavy? In my view, they are not. I can't say that they are insignificant. Of course they are significant. And if you're a startup of three people, that's extra work that you have to do that you didn't have on your radar until now.

Well, the talk of lawyers, from my point of view, is not an entirely false one. If you want to be extra careful, you can talk to a lawyer. But it's not like with the GDPR. With the GDPR, first of all, you had no standards. You had no technical standards for the GDPR. If you wanted to understand the GDPR, you had to talk to a lawyer.

What we've done with the AI Act is, number one, we gave a mandate for standards. By the time this law produces effects in two years' time, the technical standards will be there. Which means what? Which means that the two or three engineers sitting in a garage, calling themselves a startup and developing a model, will have their own language to read in the technical standards, to know what they can and cannot do. They don't actually need to go and knock on the door of a lawyer to understand what the text requires, because they have the technical standards. Which, again, they didn't have with the GDPR.

That’s number one. Number two, we work on the basis of self-assessment, not third-party assessment. So again, you can actually do it on your own. Even if you’re two, three people, you can just self-assess. You don’t need to pay money for someone to do the compliance for you. 

And three, you can always go to the door of a sandbox. That's why we've made the sandboxes mandatory in all member states. And we actually said they should go down as far as possible, as granular as possible, even to the municipal level. Why? Because we see that also as an enabler for small companies who want to be extra careful before going on the market. Okay, the technical standards, okay, I understood them, but I'm not convinced. I'm doing my self-assessment, I'm still doubtful. Am I high risk? Am I not? Do I understand correctly? I don't. Okay, then, and I don't want to pay a lawyer, I don't have money. Fine. You go to the sandbox. And the regulator with whom you interact there will have an obligation to take you by the hand, and there you test your assumptions. You look at the data sets you used, you can even run supplementary tests there, and then at the end you come out knowing that you have pitched yourself correctly to go on the market. So, again, it's not totally effort-free, but at least the enablers are there to do it with as little cost as possible.

The way I read the text, it was also a matter of control, a way for regulators to check how dangerous these models are, and also to… in a sense, put a stop to it if it’s too dangerous.

I'm not saying that the effect of that interaction cannot at times be that the regulator would say, listen, the way you're thinking of this model of yours, or of this application of yours, beware. I mean, you bear the responsibility when you go on the market, but my advice as a regulator is that your assumptions there are incorrect, or that the way you're looking at the impact of this is incorrect, because, in fact, you may produce these sorts of effects. And, in fact, be careful, you're going into the high-risk category, and therefore you'll have to do the following. Of course, afterwards, it's your responsibility if you choose to listen or not listen, but there's no active responsibility or competence at the sandbox to actually be playing the stick. In fact, it's more to play the carrot.

The sandboxes, in our imagination, at least in the way we meant them as legislators, are really places for safe trial and error, if you want, for safe innovation. Again, I'm uncertain of things. I want to double-check my thinking. I can do that together with the regulator. There are no stupid questions to ask. I can actually ask stupid questions, and the regulator will have to answer those stupid questions in order to get me as close as possible to a presumption of compliance before going on the market.

But, as you say, it can't be excluded that in that process, things would be revealed about what that model is that might tell the regulator, listen, beware, you're producing a little monster here. But, again, afterwards, it's still the responsibility of each company to decide what to do next. If they choose to go on the market and then the regulator, not in the context of the sandbox but acting as the market surveillance authority, comes in and finds them on the market with something that does not conform with the legislation, then they will apply the sanctions as necessary.

You already talked a bit about listening to the debates in the States, where European legislation sometimes seems to serve as a guide. Anu Bradford calls it "the Brussels Effect". So how do you see the AI Act, the effects it will have on countries outside of the EU?

I think there's an inevitable Brussels effect to it. Not because we would be consciously searching for it or trying to impose it on the world, but simply because I don't think there are many ways of regulating AI in the first place. The first question is whether you want to regulate. If you decide that you want to regulate and you get to the question of how, exactly for the reasons that I mentioned earlier about the risk-based approach and all that, I don't think there are a million ways in which you can do it. Transparency is transparency everywhere. Explainability is the same everywhere. Data accuracy is the same everywhere. Data governance is the same everywhere.

Therefore, because I think we have been quite thorough in the way we've addressed this legislation, I think inevitably, and practically I can tell you that I've been talking to parliaments in countries from Latin America to Asia to North America and in all directions of the world, everyone is right now considering something, and they are all looking at what we've done. There's a certain element of inevitability to the so-called Brussels effect, even if you're not actively trying to promote it. And there's also a de facto consequence, which comes from the standards that I mentioned earlier. The standards will be industry-led, so they'll be produced in standard-setting bodies with the industry sitting at the table. It's the same companies. If you look at who sits in the European standard-setting bodies, it's the same companies developing AI around the world. If they find a consensus, because they work by consensus, on technical standards under the umbrella of the AI Act in the next two years, because that's the mandate they have, they're not going to go and work with different standards in the US and different standards in Australia and different standards in South Korea. If they agree and settle on a standard, they will replicate that standard everywhere else.

Do you think that this legislation, the AI Act, will set Europe back in terms of competition, of development?

Not at all. On the contrary. And in fact, even those, and there were a few skeptics at the beginning who were saying that Europe was going to fall even further behind because of this regulation, even those are now changing their opinion. For two reasons. First of all, the whole scare that companies would run away from Europe or not bring products onto the market has proven to be false. Everyone is bringing their models onto the market, even with the regulation on the table. Number two, because in fact, the existence of standards creates predictability.

I've heard dozens if not hundreds of companies telling me over the course of these four years, listen, we need the standard to work with, because otherwise there's going to be a race to the bottom. And we want to make sure that we know what is in fact considered to be trustworthy AI so that we know how to work with it. We need that standard. And number three, and this is often disregarded, there's a brand-new, huge opportunity in trustworthy AI tools and measurement. AI cannot be evaluated by anything else but AI. I was in London last week talking to the newly established Safety Institute, and they were saying that they are working with AI tools to evaluate the big frontier models. What I'm trying to say is that there is a brand-new world being created as a result of the regulation: AI auditing tools, evaluation tools. Basically, a new world of AI and the ecosystem that will develop around it. I've heard estimates that by 2030 this is going to be something like a 2 trillion market in itself. And I think that creates a lot of opportunity for businesses in Europe, also elsewhere, but why not in Europe, to actually start developing these tools and adapting to this world, which creates a lot of opportunity, in fact much more, in my view, than it creates burdens.

What do you see as Europe’s strengths in this market?

I think where we have traditionally had our strength is in research, in creativity, in developing ideas, with the weakness being that we were very bad at scaling things up. In fact, there are still a lot of ideas and creativity bubbling up in Europe, but we struggle to scale them, for a lot of other reasons that have nothing to do with the regulation of AI or the DSA [Digital Services Act] or anything else. They have to do with other types of regulation and with deeper-rooted cultural things specific to us. Why is it that we have a harder time starting up? Or a harder time scaling up? Or even a harder time getting to the summit of the mountain, in terms of developing tech giants? That has to do with our approach to risk. We are not the kind of risk takers that others in other parts of the world are. In fact, we have a higher aversion to risk, culturally, humanly, I don't know how to put it, than others. Also, the legislation and the approach to bankruptcy, for example, is something that holds us back. Here in Europe, if you go bankrupt it's a sin, you're dead, you're kaput, no one talks to you anymore. Go to the US, go to Silicon Valley, you have to go bankrupt at least 10 times for people to even start talking to you seriously. It's seen as a rite of passage. A completely different mindset about risk and about what it means to fail in order to succeed.

Then there is the availability of easy money, what I call easy money, in the sense of money for which you don't have to jump through a million hoops and fill in a ton of paperwork, but where you can actually have a coffee at the Starbucks at the corner of the street and find someone who's willing to give you 10 million. It's that sort of access to money: great idea, easy money, boom, you have a startup. That we don't have. We don't have the venture capitalists in Europe that you have in the US. Or the state-driven innovation packages that they have in China.

These are things that have held us back, and then, even if you make it to a certain level, and that goes to the whole market concentration that is historic by now, then no matter how big you are, someone comes and buys you up. And who is that someone that buys you up? Well, those that have the power to buy you up. And those are not in Europe. And I've talked to companies that have started things up in Europe. They knocked on the door of big European companies, who said, we don't have the money to buy you up. We don't have that kind of money.

These are the underlying reasons why Europe is still behind compared to the other big players on the globe. It's not the regulation that came in now. And if we want to change things, I think we need to work on those factors. On funding, on bankruptcy legislation, on pension funds, because these are, in fact, funding most of the innovation in the US. I'm still convinced that Europe can be, and continue to be, a force. And now, in the current context, we are going to be forced more and more to stand on our own two feet and to assert ourselves as a union, as a continent. I think that will also play an important role in realizing that we need to get more serious about how we approach our dependence on raw materials, our ability to produce chips, and an ecosystem in which AI innovation can thrive, because without those, we will not only be falling behind in the world, we will not be in the race with other friendly competitors, as was the case until now; it may become a matter of survival in a world that is becoming much more aggressive and isolationist. And God forbid, if someone wins the elections on the other side of the pond at the end of this year, I think we are going to get an even bigger shock that will push us, and I see it also as an opportunity, into asserting ourselves in that way.
