The EU AI Act and the Challenges of Governing AI

The European Union (EU) is poised to make history with its groundbreaking Artificial Intelligence Act, the first comprehensive legal framework for AI in the world. This act aims to ensure the responsible development and use of AI, safeguarding fundamental rights and fostering innovation within the European bloc.

Why an AI Act?

AI presents immense potential across many fields. This could be as simple as helping me write a new function to unwrap a token, making graphics for a website, writing a deck to help raise capital to hire more people, or even writing a blog post. This is computer-aided work, and really just the next iteration of automation.

The term "automation" emerged gradually within the automotive industry around 1946. While the concept and practices related to automating processes existed centuries before, the specific term gained traction during that period. D.S. Harder, an engineering manager at Ford Motor Company, is often credited with popularizing it around that time. He would later warn that when people dress pseudo-science up as science, they make predictions meant to win themselves publicity and fame rather than to push the needs of the world forward.

OK, I summarized that based on my own worldview, but close enough. The point is that the term "automation" seems to have emerged collectively within a specific industry as a way to describe the increasing use of automatic control systems. Automatic control systems had been used to direct anti-aircraft fire, which became increasingly computerized, and so others brought the term into computing. This happened alongside Norbert Wiener's work on cybernetics, and during the Cold War. Many scientists, including Wiener, took it hard when the US bombed Japan with nuclear weapons, and increasingly saw weapons of mass destruction and automated weaponry as an evil twisting of their work. In other words, they were complicit in a vile new form of warfare.

Wiener seems to have been the first to raise concerns about automation and AI as a potential risk. Killing indiscriminately, rather than on an actual battlefield, was akin to what the Nazis did during the bombing of London in World War II. But over the following generations of scientists and science fiction writers alike, AI also raised concerns about discrimination, bias, and the impact on privacy. Sometimes AI was meant to be a symbol of some other cultural phenomenon, like any literary device, and the capabilities of AI and robotics were frequently exaggerated. Ideas like the singularity, machines that can love, and cyborgs like RoboCop weren't possible in the 40s and 50s when such themes became popular, and still aren't. Yet we have crossed a Rubicon, as Caesar did in 49 BCE, and the world will never be the same.

The Framework of the AI Act

This is where the Artificial Intelligence Act comes into play. The EU seeks to address these concerns by establishing clear rules for AI development, deployment, and use. Here are some of the general features of the AI Act:

  • Risk-based approach: The act categorizes AI systems based on their potential risks. High-risk systems, such as facial recognition, will face stricter regulations, including human oversight, data-minimization, and transparency requirements.

  • Ban on specific AI uses: Certain practices, like social scoring and real-time biometric identification in public spaces, are outright banned due to their high risk of abuse.

  • Focus on transparency and explainability: Developers must demonstrate how their AI systems reach decisions, ensuring fairness and avoiding discrimination.

  • Strong data protection: The act emphasizes compliance with existing data protection regulations like GDPR, ensuring responsible data collection and use.

  • Oversight and enforcement: Member states will establish independent oversight bodies to enforce the act and impose sanctions for non-compliance.
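To make the risk-based approach concrete, here is a minimal sketch in Python of how the Act's four-tier model (unacceptable, high, limited, minimal risk) sorts use cases. The specific use-case labels and their category assignments below are simplified illustrations of my own, not legal guidance or an exhaustive reading of the Act.

```python
# Toy illustration of the AI Act's four risk tiers. The category
# assignments are simplified examples, not legal guidance.

BANNED_PRACTICES = {"social scoring", "real-time public biometric identification"}
HIGH_RISK_USES = {"facial recognition", "credit scoring", "hiring screening"}
LIMITED_RISK_USES = {"chatbot", "deepfake generation"}

def classify_risk(use_case: str) -> str:
    """Return a (simplified) AI Act risk tier for a use-case label."""
    use_case = use_case.strip().lower()
    if use_case in BANNED_PRACTICES:
        return "unacceptable"  # outright banned under the Act
    if use_case in HIGH_RISK_USES:
        return "high"          # human oversight, data governance, transparency
    if use_case in LIMITED_RISK_USES:
        return "limited"       # transparency duties, e.g. disclosing AI use
    return "minimal"           # largely unregulated

print(classify_risk("Social Scoring"))  # -> unacceptable
print(classify_risk("chatbot"))         # -> limited
```

The point of the sketch is that the Act regulates by use case, not by underlying technology: the same model could land in different tiers depending on how it is deployed.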

The EU is looking for a few impacts. One is to increase trust in AI: by addressing risks and setting clear standards, the act aims to build trust in AI among citizens and businesses, fostering responsible innovation. Another is to level the playing field. The act applies to all actors operating in the European market, creating a level playing field for both domestic and international companies. Because so many organizations operate in the EU, this ends up having a global impact, often regardless of where a company that offers a service is based. Finally, the EU is again seeking to leverage its influence to provide global leadership on the development of AI. The EU's approach, at least this is their hope, could serve as a model for other countries and regions looking to regulate AI.

There are challenges to such a model. One is implementation: translating the act into clear national regulations and ensuring effective enforcement. Another is striking the right balance between innovation and risk mitigation, so that safeguarding rights doesn't stifle innovation. Finally, there is the alignment mentioned earlier. The EU's approach needs to be compatible with ongoing international efforts to harmonize AI regulations. AI is one aspect of international regulatory efforts; climate action, warfare, and immigration are others.

That’s the real challenge, though. Here’s the problem with a single nation (or collection of nations) deciding to legislate technological advancements: the technology will advance elsewhere. Which direction it flows is somewhat unknown, since evolutions in technology aren’t always predictable. Further, the act doesn’t address things like keeping children safe or retraining those displaced by AI, and then there’s the issue that litigation in arrears is the only way to really regulate models run in walled gardens. Those will open issues in the courts that could take years or decades to work out. But the EU's AI Act is still a significant step towards fostering responsible AI development and ensuring its ethical use. While challenges remain, the initiative's focus on risk management, transparency, and fundamental rights protection lays a solid foundation for the future of AI in Europe and potentially beyond. It reads better than most attempts by politicians to limit technology - even if not everything it covers is technically possible today.
