When the Algorithm Obeys the State
On July 23, 2025, the Trump Administration unveiled a sweeping set of executive orders and policy measures aimed at reshaping how Artificial Intelligence (AI) is regulated, deployed, and funded in the United States. At first glance, this might look like a typical push for technological leadership—talk of innovation, cutting-edge infrastructure, and a patriotic rally to “win the AI race.” But if you look closer, what you find is a coordinated effort to seize ideological control over the very infrastructure of knowledge.
This isn’t a policy tweak. It’s a power play.
What the AI Directive Actually Does
Touted as the “AI Action Plan,” the initiative lays out a coordinated push to rewire federal policy around artificial intelligence. It begins with three new executive orders that strip away environmental safeguards and diversity requirements for the construction of AI data centers—essentially clearing the way for rapid infrastructure growth with fewer checks. At the same time, the administration has imposed a new procurement rule that blocks federal agencies from contracting with any AI vendor deemed “woke” or ideologically biased, a vague standard that effectively rewards companies aligned with the regime’s values. The plan also threatens to withhold federal AI funding from any state that enforces stronger regulations than those approved by the White House, putting pressure on governors and legislatures to fall in line or risk losing resources. And finally, it includes provisions to fast-track the export of U.S.-developed AI systems abroad, with a particular emphasis on expanding military applications—marking a clear intention to extend ideological control not just domestically, but globally.
“Non-Woke” Means Non-Neutral
The administration frames this as a return to ideological neutrality, painting AI tools as compromised by “woke Marxist lunacy.” But in practice, “non-woke AI” is a dog whistle for an ideological purge: federal contracts go only to vendors willing to pledge allegiance to a specific political vision.
Here's what that looks like in action. Under the new directive, AI systems that include language about racial equity, LGBTQ+ inclusion, climate change, or reproductive rights risk being disqualified from federal use altogether. Educational tools designed to help students understand systemic racism, or customer service platforms built by small businesses that use inclusive language, may no longer be eligible for government support. The message is clear: any tool that challenges conservative orthodoxy, or even acknowledges the existence of marginalized experiences, will be shut out of publicly funded development. Dissent, in this new ecosystem, is defunded.
In short, only ideologically vetted systems get taxpayer dollars.
Why This Is Authoritarian in Form and Function
It may not look like tanks in the street, but this is a classic authoritarian tactic—soft coercion through infrastructure.
The new AI policy consolidates ideological control by design, leaving no room for resistance. States that attempt to set their own standards—whether to protect against bias, promote transparency, or reflect local values—now risk losing critical federal funding. This is about starving opposition. At the same time, the administration is using federal procurement as a tool for market capture. By withholding contracts from companies that don't align politically, they create a de facto ban that shapes both the message and the medium. Regulatory preemptions go even further, stripping states of their ability to safeguard their residents from harmful or discriminatory AI. And all of it feeds into a deeper effort to shape public narrative through technology itself. The systems that power education, law enforcement, and public services are being transformed into delivery vehicles for ideology—either reinforcing political conformity or excluding entire perspectives from view.
Why Kentuckians Should Care
Kentucky isn’t Silicon Valley—but that's exactly why this matters here.
Across Kentucky, we could see a rise in data centers, especially in areas hungry for economic investment. But those facilities would likely come with fewer environmental protections than ever before. That means more strain on local resources like air, water, and electricity, with communities bearing the cost. At the same time, local tech startups and government contractors may find themselves at a crossroads: uphold inclusive design principles and risk losing federal support, or comply with the administration’s ideological demands to stay financially afloat. Schools and nonprofits in places like Louisville, Lexington, and throughout the state could see their access to AI tools shrink, especially those designed to reflect real-world diversity and equity. And once “woke” becomes a disqualifying label, anything from academic curricula to mental health chatbots to telehealth applications could be dismissed, not because they’re ineffective, but because they acknowledge lived experience the administration wants erased.
If it’s inside your computer, it can be inside your mind, and that’s the danger.
What This Means for Everyday People
The effects of this policy won’t just play out in headlines—they’ll show up in everyday places where people live and work. An employee visiting a local job center might never know that the AI resume-screening tool used to match them with opportunities was stripped down because the more inclusive version was deemed “too woke” for federal approval. A small business owner trying to build a chatbot that speaks respectfully to all customers could lose out on vital grant funding simply because their approach doesn’t align with the administration’s ideological filter. In public schools, AI-powered tutoring programs designed to help students think critically about bias or history might be quietly replaced with “neutral” tools that avoid hard truths. And in disadvantaged communities, where access to technology is already limited, the resources that do arrive may come pre-sanitized, offering services stripped of relevance, voice, and equity because the systems that could have met people where they are were disqualified before they ever got through the door.
Taking a Stand in Kentucky
To push back against this growing consolidation of power, Kentuckians have real options—and we should act now.
Start with state-level resistance. Kentucky’s legislature and executive branch can take a stand by affirming our right to develop and support technology that prioritizes equity, environmental justice, and human rights. Even if federal policy leans toward ideological control, we can write our own laws to protect inclusive AI and shield public institutions from coercive funding threats.
Local governments can lead too. Procurement offices in cities like Louisville and Lexington should move quickly to establish clear, enforceable standards for AI tools used in public services. By writing fairness and transparency into local contracts, we send a message: federal ideology won’t dictate how we serve our communities.
Community education is critical. Civic leaders, journalists, educators, and librarians across the state can play a frontline role in demystifying AI. We need public conversations about how algorithms work, where bias creeps in, and why inclusive design isn’t just ethical—it’s essential. Workshops, teach-ins, editorials, even public school curriculum can help Kentuckians become literate in the systems shaping their lives.
Finally, we can build alternatives. Tech doesn’t have to come from the top down. We can support open-source AI projects—tools created by and for communities, grounded in transparency and accountability, not political loyalty tests. Whether it’s local universities, nonprofit tech labs, or regional collaboratives, we can fund models that serve people—not power.
Resisting this moment means doing more than opposing bad policy. It means choosing to build something better—right here, from the ground up.
Final Word
On July 23, 2025, the Trump Administration didn’t just roll out tech policy. It rolled out ideological infrastructure. That matters everywhere, but especially in Kentucky, where the stakes are uneven and the costs are often hidden.
If we let this pass, our daily tools become ideological filters, shaped not by what best serves our communities but by what passes a political test. That is not innovation; it is engineered obedience.
Kentucky deserves AI that reflects its diversity, not its division.
Want more?
Read AP’s explainer on the policy’s Silicon Valley roots — AP News
Learn how federal AI procurement will exclude “woke” vendors — Financial Times
See the backlash from state leaders and advocacy groups — Business Insider