by Paul Rempfer
Within the next five years, every U.S. agency deploying artificial intelligence will face a defining question: Who governs the ethical boundaries of AI systems shaping federal decisions?
After decades working across defense and intelligence, from the FBI and CIA to global critical infrastructure protection, I’ve seen how technology always moves faster than governance. We’ve watched this cycle before: new capability, rapid adoption, and only later the hard questions about trust, accountability, and control. AI is following the same trajectory at machine speed.
Recent Office of Management and Budget (OMB) memoranda, including M-25-21 and M-25-22, encourage agencies to accelerate AI adoption, expand infrastructure, and streamline compliance efforts. That acceleration is a necessary step toward modernization, but it also means the guardrails must evolve in parallel. As timelines compress, ethics, assurance, and transparency must remain mission enablers, not afterthoughts.
The issue is not whether agencies should use AI, but who defines its boundaries, how those boundaries are tested, and what safeguards exist before algorithmic outputs begin influencing real-world policy. The key is understanding how agencies can close that gap, and why ethical readiness is now as critical as technical readiness.
Why the Question of “Who Governs AI” Can’t Wait
Today, there are more than 12,000 commercially available AI tools—a 700% increase since 2020, according to Stanford’s 2025 AI Index. Yet few undergo consistent ethical or security vetting. That fact exposes a larger dilemma: Who governs AI itself?
A 2025 GAO report identified nearly 100 separate AI-related directives across federal agencies spanning data, risk, and workforce, but without a unified governance framework. In contrast, the European Union’s AI Act classifies systems by risk level and requires proof of compliance before deployment.
Here in the U.S., we depend largely on voluntary frameworks like the NIST AI Risk Management Framework to interpret ethical and safety obligations. This decentralized approach means each agency must interpret and operationalize ethical guardrails on its own, defining standards before mandates exist. Waiting for federal AI regulation to catch up could take years, and by then, technology will have already outpaced the policy.
As federal policy remains fragmented, the ethical boundaries of AI are increasingly being set by the private sector, raising new questions about accountability and national values.
The Private–Public Divide: Who Sets the Rules?
In the absence of formal regulation, private companies have become the de facto arbiters of AI ethics in federal contracting.
- OpenAI’s Preparedness Framework sets thresholds for catastrophic model risk, but no one outside OpenAI can see its rules.
- Anthropic’s Constitutional AI uses self-defined moral rulesets to guide model alignment, raising transparency concerns about how outcomes are shaped.
- Google’s AI Principles commit to transparency, fairness, and harm avoidance, but they are neither externally auditable nor modifiable by end users.
Together, these frameworks reveal a troubling pattern: corporations set the ethical terms, while government agencies operate under opaque, proprietary boundaries they did not define. These initiatives are important, but they are not enforceable. They are promises, not policies. Each reflects the company’s internal philosophy and values, not a shared public standard.
As I often tell agency leaders, “The government cannot outsource its conscience or its rules.”
Building the infrastructure for ethical governance before a crisis forces the issue requires a proactive approach. That means vetting AI vendors against clear licensing criteria, creating acquisition roadmaps that balance compliance with federal laws and policies against mission agility, and translating broad principles into measurable performance metrics and auditable outcomes.
This approach aligns with findings from organizations like Brookings, which emphasize that trustworthy AI begins with vendor accountability—not technical sophistication.
Lessons from History: Oversight Always Lags Innovation
The tension between innovation and oversight isn’t new. When digital health and wearables entered the mainstream, regulation again lagged behind technology. The AHIMA Foundation warned in 2022 that many consumer health apps collecting sensitive data operate outside HIPAA’s protections, exposing users to identity theft and data misuse.
AI now sits in a similar unregulated gray zone, collecting, generating, and learning from sensitive data faster than policy can adapt. Without early guardrails, the cost isn’t just operational; it’s the erosion of public trust.
The Pew Research Center’s 2025 report found that most experts see AI as transformative, but a majority of Americans remain skeptical of its reliability and integrity. That gap is a reminder that technical progress alone doesn’t equal public confidence, and once trust is lost, it takes decades to rebuild.
What Agencies Can Do Now
While regulation evolves, agencies can take four immediate steps to align AI ethics with mission execution:
- Map the ethics supply chain. Know where the data originates and who trained the model. Agencies should require vendor transparency into large language model (LLM) rules and training data so they can audit how embedded values and bias mitigation mechanisms influence outputs.
- License the vendor, not just the software. Vetting must extend beyond the code. CIOs and CISOs should establish AI vendor licensing standards (much like facility clearances for defense contractors) to ensure trusted sources across the supply chain.
- Use AI to verify AI. For example, my firm partners with Seekr.ai, whose verification engine cross-checks AI outputs for factual accuracy, bias, and source authenticity before those results inform policy or public communications. A minimal illustrative sketch of this kind of verification gate appears after this list.
- Promote literacy and transparency. Ethical literacy is the foundation of readiness because understanding how AI thinks is the first step to ensuring it serves public values. Literacy frameworks should be grounded in the same philosophy that shaped health information literacy: better knowledge leads to better outcomes. Education, not enforcement, is the first line of defense.
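The internals of any commercial verification engine are proprietary, but the underlying pattern is simple enough to sketch. The Python below is a minimal, hypothetical illustration of a pre-release verification gate, not Seekr’s or any vendor’s actual API: every name (`CheckResult`, `verify_output`, the placeholder checks) is an assumption, and the checks themselves are stand-ins for whatever factual, bias, and source-authenticity tests an agency or vendor would actually run.

```python
from dataclasses import dataclass, field
from typing import Callable, List

# Hypothetical structures: stand-ins for what a real verification
# engine (vendor-supplied or built in-house) would provide.

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str = ""

@dataclass
class VerificationReport:
    output_id: str
    results: List[CheckResult] = field(default_factory=list)

    @property
    def approved(self) -> bool:
        # Conservative default: release only if every check passed.
        return all(r.passed for r in self.results)

# A "check" is any callable that inspects a model output and returns a CheckResult.
Check = Callable[[str], CheckResult]

def cited_sources_present(output: str) -> CheckResult:
    # Placeholder: a real source-authenticity check would resolve and
    # validate citations, not just look for a marker string.
    passed = "[source:" in output
    return CheckResult("source_authenticity", passed,
                       "" if passed else "no citations found")

def overclaim_screen(output: str) -> CheckResult:
    # Placeholder for a bias or overclaim screen on draft language.
    flagged_terms = {"guaranteed", "always", "never"}
    hits = [t for t in flagged_terms if t in output.lower()]
    return CheckResult("overclaim_screen", not hits,
                       f"flagged terms: {hits}" if hits else "")

def verify_output(output_id: str, output: str, checks: List[Check]) -> VerificationReport:
    """Run every check and return an auditable report; a human reviewer
    decides whether to release, revise, or escalate."""
    report = VerificationReport(output_id)
    for check in checks:
        report.results.append(check(output))
    return report

if __name__ == "__main__":
    draft = "Adoption always improves outcomes. [source: internal memo]"
    report = verify_output("memo-001", draft, [cited_sources_present, overclaim_screen])
    print(f"approved={report.approved}")
    for r in report.results:
        print(f"  {r.name}: {'pass' if r.passed else 'fail'} {r.detail}")
```

The point of the pattern is not the specific checks; it is that every output carries an auditable record of what was verified, and that release remains a human decision informed by that record.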
Readiness Beats Reaction
AI ethics in federal contracting is uncharted terrain. Regulation will evolve, but good governance and ethics cannot wait for enforcement. Innovation will not slow down, and accountability cannot be optional; the alternative is erosion of both mission effectiveness and public trust.
At PCI, we view this challenge as a call to action. Ethical AI isn’t about slowing down progress. It’s about embedding trust at the speed of change. Every algorithm deployed in the federal ecosystem should be transparent, verifiable, and aligned with our nation’s values. We’re helping agencies establish standards to vet AI vendors, verify outputs, and build literacy frameworks that turn ethics from aspiration into operational control. Readiness always beats reaction, and the leaders who act now will define how responsibly AI serves the American public tomorrow.