Trump unveils AI plan that aims to clamp down on regulations and 'bias'

Former President Donald Trump has announced a new artificial intelligence project that focuses heavily on reducing federal oversight and tackling what he terms political partiality within AI systems. As artificial intelligence quickly grows in numerous fields—such as healthcare, national defense, and consumer tech—Trump’s approach marks a shift from wider bipartisan and global endeavors to enforce stricter scrutiny over this advancing technology.

Trump’s newest proposal, integral to his 2024 electoral strategy, portrays AI as a dual-faceted entity: a catalyst for American innovation and a possible danger to free expression. At the core of his plan is the notion that governmental participation in AI development should be limited, emphasizing the need to cut down regulations that, according to him, could obstruct innovation or allow ideological domination by federal bodies or influential technology firms.

While other political leaders and regulatory bodies around the world are developing frameworks aimed at ensuring the safety, transparency, and ethical use of artificial intelligence (AI), Trump is presenting his strategy as a corrective measure against what he considers growing political interference in the development and use of these technologies.

At the heart of Trump’s plan for AI is a broad initiative aimed at decreasing what he perceives as excessive bureaucracy. He suggests limiting federal agencies’ ability to utilize AI in manners that may sway public perspectives, political discussions, or policy enforcement towards partisan ends. He contends that AI technologies, notably those employed in fields such as content moderation and monitoring, can be exploited to stifle opinions, particularly those linked to conservative perspectives.

Trump’s proposal suggests that any use of AI by the federal government should undergo scrutiny to ensure neutrality and that no system is permitted to make decisions with potential political implications without direct human oversight. This perspective aligns with his long-standing criticisms of federal agencies and large tech firms, which he has frequently accused of favoring left-leaning ideologies.

His strategy also involves establishing a team to oversee the deployment of AI in government operations and recommend measures to prevent what he describes as “algorithmic censorship.” The plan suggests that systems employed for identifying false information, hate speech, or unsuitable material could be misused against people or groups, and should therefore be subject to strict controls, not on whether they are used, but on whether they remain impartial.

Trump’s artificial intelligence platform also focuses on the supposed biases integrated into algorithms. He argues that numerous AI systems, especially those created by large technology companies, possess built-in political tendencies influenced by the data they are trained with and the objectives of the organizations that develop them.

Although experts within the AI sector recognize the dangers of bias present in large language models and recommendation algorithms, Trump’s perspective emphasizes the possibility that these biases might be introduced deliberately rather than accidentally. He proposes auditing and disclosure requirements for these systems, advocating for openness concerning their training processes, the data they utilize, and how their outputs may vary along political or ideological lines.

His proposal does not outline particular technical methods for identifying or reducing bias; however, he suggests creating an independent body to evaluate AI tools used in sectors such as law enforcement, immigration, and digital communication. He emphasizes that the aim is to guarantee that these tools remain “unaffected by political influence.”

Beyond concerns over bias and regulation, Trump’s plan seeks to secure American dominance in the AI race. He criticizes current strategies that, in his view, burden developers with “excessive red tape” while foreign rivals—particularly China—accelerate their advancements in AI technologies with state support.

In response to this situation, he suggests offering tax incentives and loosening regulations for businesses focusing on AI development in the United States. Additionally, he advocates for increased financial support for collaborations between the public sector and private companies. These strategies aim to strengthen innovation at home and lessen dependence on overseas technology networks.

On national security, Trump’s proposal is short on detail, though it acknowledges the dual-use nature of AI technologies. It calls for tighter controls on the export of critical AI tools and intellectual property, particularly to nations viewed as strategic competitors. However, it does not explain how such restrictions would be enforced without hampering global research collaborations or trade.

Interestingly, Trump’s AI strategy hardly addresses data privacy, a subject that has become crucial in numerous other plans both inside and outside the U.S. Although he recognizes the need to safeguard Americans’ private data, the focus is mainly on controlling what he considers ideological manipulation, rather than on the wider effects of AI-driven surveillance or improper handling of data.

Privacy advocates have criticized this omission, arguing that AI technologies, especially when used in advertising, law enforcement, and public services, could pose significant dangers if deployed without sufficient data security measures. Trump’s opponents contend that his strategy focuses more on political grievances than on comprehensive governance of a groundbreaking technology.

Trump’s AI agenda stands in sharp contrast to emerging legislation in Europe, where the EU AI Act aims to classify systems based on risk and enforce strict compliance for high-impact applications. In the U.S., bipartisan efforts are also underway to introduce laws that ensure transparency, limit discriminatory impacts, and prevent harmful autonomous decision-making, particularly in sectors like employment and criminal justice.

By advocating a hands-off approach, Trump is betting on a deregulatory strategy that appeals to developers, entrepreneurs, and those skeptical of government intervention. However, experts warn that without safeguards, AI systems could exacerbate inequalities, propagate misinformation, and undermine democratic institutions.

The timing of Trump’s AI announcement seems strategically linked to his 2024 electoral campaign. His narrative—focusing on freedom of expression, equitable technology, and safeguarding against ideological domination—strikes a chord with his political supporters. By portraying AI as a field for American principles, Trump aims to set his agenda apart from other candidates advocating for stricter regulations or a more careful embrace of new technologies.

The suggestion further bolsters Trump’s wider narrative of battling what he characterizes as a deeply rooted political and tech establishment. In this situation, AI transforms into not only a technological matter but also a cultural and ideological concern.

Whether Trump’s AI plan gains traction will depend largely on the outcome of the 2024 election and the makeup of Congress. Even if passed in part, the initiative would likely face challenges from civil rights groups, privacy advocates, and technology experts who caution against an unregulated AI landscape.

As artificial intelligence continues to evolve and reshape industries, governments around the world are grappling with how best to balance innovation with accountability. Trump’s proposal represents a clear, if controversial, vision—one rooted in deregulation, distrust of institutional oversight, and a deep concern over perceived political manipulation through digital systems.

What remains unclear is whether this approach can deliver both the freedom and the safeguards needed to steer AI development down a path that benefits society as a whole.

By Mattie B. Jiménez