Medicine, industry, research, science, commerce, education, services, culture, sport, leisure… artificial intelligence (AI) is poised to profoundly change almost every aspect of our lives. Faced with this immense technological challenge of the century, it is up to politicians to provide the community with a secure legislative framework that reassures citizens and encourages entrepreneurs.
Europe did not wait for AI to become a buzzword to take hold of the subject. We have been working on it for five years with the academic, scientific and entrepreneurial worlds, with NGOs, Member States, European parliamentarians… For years, we have trained the best engineers. We have recruited scientific talent to develop new AI languages. A number of our entrepreneurs and innovative start-ups have entered the global race.
We also now benefit from the greatest computing power in the world – through the joint undertaking EuroHPC – made available to academia and start-ups, a prime asset for the emergence of a European AI ecosystem. Last but not least, we now have pioneering legislation, the AI Act, which aims to define and establish the framework for proportionate risk management.
Thanks to this brand new and unprecedented regulation, Europe is the first continent to adopt a legal and technical corpus that reconciles the best of both worlds. Let’s put an end to a sort of urban legend: no, the AI Act in no way impedes the ongoing artificial intelligence revolution. On the contrary, it creates an environment conducive to its development. At the same time, it guarantees the essential security of our fellow citizens in the face of the advent of AI. This was the wish of the Member States and the European Parliament, following the final trilogue, which ended on December 8.
Social scoring prohibited
By adopting a proportionate approach based on the nature of the risks, the regulation defines the trust framework necessary for the use of AI in Europe. For the vast majority of applications, the measures can be summed up in one word: transparency.
Applications involving risks to our fundamental rights, for example in employment or education, will be required to obtain European certification before being placed on the market. The objective is to ensure the reliability of AI systems, which must meet requirements for security, human oversight and data governance. Other applications, such as social scoring, will simply be prohibited, because they are contrary to our values.