
AI in government employment has become one of the most contested workforce issues in higher education and public policy. For students considering government careers, and for university leaders designing programs to prepare them, the picture right now is complicated. Significant job losses are already happening. New roles are being created. And the rules governing how AI gets used in the public sector are still being written.

This article maps where the losses are concentrated, what roles are growing, and what it all means for people preparing to enter or advance in government work.

Metric                                                          Figure
Federal workers who left government, Jan 2025–Jan 2026          386,826
Share of federal workforce eliminated by March 2026             ~9%
Jobs directly attributed to AI cuts (Challenger, 2025)          ~17,000
Employers planning workforce reductions due to AI (WEF 2025)    41%
Public sector workers with a clear AI strategy (Q4 2025)        37%


The Scale of What Has Already Happened

The numbers at the federal level are striking. By March 2026, roughly 9% of the federal workforce had been eliminated, with about 300,000 civilian federal job cuts announced since the start of 2025. According to executive outplacement firm Challenger, Gray & Christmas, DOGE-related actions were the leading cause of job cut announcements in 2025, with 293,753 planned layoffs connected to DOGE activities.

AI is a separate but overlapping force. The same Challenger data attributes around 17,000 job losses to AI adoption since the start of 2025, with most cuts announced in the second half of the year. That figure likely undercounts the real total, since many firms categorize AI-driven eliminations under broader "technological updates."

These two forces, political restructuring and automation, are not always easy to separate. The IRS is a useful example. The agency lost roughly 40% of its IT workforce and nearly 80% of its senior technology leadership during the 2025 restructuring, then introduced AI tools into internal workflows shortly after. Whether that sequence represents genuine modernization or workforce reduction dressed up as innovation is a question with no clean answer.


Which Government Roles Are Most Vulnerable

Not all government jobs face the same pressure. Roles built around high-volume, repetitive tasks are the most exposed. According to Route Fifty's public sector analysis, the areas receiving the most automation attention are administrative and clerical functions, including document processing, data entry, scheduling, and basic citizen inquiries, alongside regulatory compliance tasks like permit processing and automated inspection systems.

Benefits processing is another pressure point. AI systems already handle eligibility checks and claims review at scale. IBM's AskHR processes 11.5 million interactions annually with minimal human oversight. Similar logic is being applied to government-facing services.

IT experts warn that when government AI systems make errors, citizens face significant difficulties appealing decisions not made by humans, since AI systems lack the reasoning, contextual judgment, and accountability expected of human decision-makers.

Entry-level workers face the sharpest near-term risk. Stanford research shows a 13% employment decline for workers aged 22–25 in AI-exposed occupations since late 2022, while workers over 30 in the same fields have seen 6–12% employment growth. For recent graduates trying to enter government work, the traditional entry point, the routine administrative or analyst role, is narrowing.


The Aging Workforce Factor

AI adoption in government is not happening in a vacuum. The federal workforce has a demographic problem independent of automation. OPM Director Scott Kupor has noted that close to half the federal workforce is within 10 years of retirement age, creating a pipeline challenge that predates DOGE.

This matters because it shapes how AI gets used. As AI tools absorb the tasks of departing employees, the urgency to replace them drops, which produces the same result as cuts: fewer government jobs available to new graduates.

Pandemic-era shifts reinforced this trend. Remote work normalized email-and-spreadsheet workflows, which happen to be among the most automatable. Roles that moved fully remote between 2020 and 2022 are now clear candidates for task automation.

In high-income countries, jobs most vulnerable to AI-driven task automation make up 9.6% of female employment, nearly three times the proportion for male jobs at 3.2%. Government workforces, which tend to skew female in administrative tiers, face an uneven distribution of this risk.


New Roles Emerging in the Public Sector

The picture is not all contraction. Real demand is growing in specific areas, though it requires different credentials than the roles being cut.

AI governance and compliance. Under OMB Memorandum M-25-21, every federal agency must publish both an AI Strategy and an AI Compliance Plan covering how it will govern high-impact AI systems. Writing, auditing, and enforcing these plans requires people who understand both policy and technical systems.

AI oversight and accountability. Human accountability for automated decisions in benefits administration, law enforcement, and healthcare is a legal requirement. Agencies cannot deploy AI and remove human review without significant legal exposure.

Policy analysts using AI tools. Deloitte's Center for Government Insights identifies AI-assisted policy analysis, including simulating long-term outcomes and processing large datasets, as one of the clearest near-term augmentation opportunities in government.

Cybersecurity. AI expands the attack surface for public sector systems. Demand for cybersecurity specialists is growing regardless of broader job cuts.

Technical talent via federal programs. In December 2025, the Trump administration launched the U.S. Tech Force, a program to hire 1,000 engineers and specialists to build AI infrastructure within the federal government, geared toward early-career professionals on two-year placements.

The catch: 77% of AI jobs require master's degrees and 18% require doctoral degrees. The new roles are real, but they are not accessible to graduates without advanced technical or policy credentials.


AI as a Capacity Tool, Not Just a Replacement Mechanism

Some governments are using AI to close staffing gaps rather than reduce headcount.

New Jersey's state government is a useful example. State IT leaders have explicitly framed AI as a workforce augmentation tool, using it to smooth out routine tasks and free workers for higher-level responsibilities, with clear leadership messaging that job replacement is not the mission.

The logic is straightforward in resource-constrained agencies. Many state and local governments cannot compete with private sector salaries to fill technical roles. If AI handles the high-volume, low-judgment parts of a job, existing staff can focus on work requiring human discretion.

Research from the Center for AI Safety and Scale AI found that current AI agents complete only a fraction of practical multi-step government tasks at a professional standard. The gap between controlled demos and reliable production performance is still significant.

As of Q4 2025, only 37% of public sector workers say their organization has a clear AI strategy, compared with 53% in the private sector. That gap means a lot of government AI adoption is happening without clear governance, which creates both risk and opportunity for people with relevant skills.


How AI Is Reshaping Professional Identity in Government

Beyond job titles, AI is shifting what government work looks like day to day.

The role of the government worker is moving from task executor to AI supervisor. Staff increasingly validate, oversee, and correct AI outputs rather than produce information themselves. That is a meaningful shift in what skills matter.

The American Federation of Government Employees has said it does not have blanket opposition to AI, but does not expect current federal implementation to prioritize worker protection, as agencies are simultaneously cutting workforces and deploying automation. The union's concern is practical: there are two possible versions of an AI-integrated government, one where workers are redeployed to higher-value tasks, and one where they are eliminated. Right now, the federal government is running closer to the second version.

The reskilling gap is real. The Government Accountability Office warned in 2023 of a severe shortage of digital expertise in government, including in AI. The 2025 workforce reductions made this worse. GSA's 18F and the DHS AI Corps were both cut during the DOGE period, removing institutional AI capacity at precisely the moment agencies needed it most.

Public trust is a compounding problem. A Pew Research Center study from October 2025 found that only 44% of Americans trust their government to regulate AI effectively, and globally, more people trust the EU to regulate AI than the United States.


New Policy Challenges Governments Must Manage

AI adoption inside government creates governing responsibilities that did not exist five years ago.

Algorithmic accountability is the most immediate. When an AI system denies a benefit, flags a person as high-risk, or recommends a contract cancellation, responsibility has to land somewhere. A ProPublica investigation found that DOGE-era AI-generated code used to flag VA contracts hallucinated values, and at least two dozen incorrectly flagged contracts had been canceled before the errors were identified.

Workforce transparency legislation is moving through Congress. A bipartisan bill from Senators Warner and Hawley would require federal agencies and publicly traded companies to submit quarterly reports to the Labor Department detailing layoffs, new hires, and positions left unfilled because of AI.

Data privacy and security remain unresolved at scale. Government AI systems handle sensitive citizen data across health, tax, immigration, and benefits. Current governance frameworks are still catching up to deployment realities, and the legal and political consequences of misuse are significant.


What This Means for Students and University Leaders

~9%     of the federal workforce cut by March 2026
-13%    employment drop for 22–25 year-olds in AI-exposed roles
+25%    wage premium for workers with AI skills
77%     of new AI government roles require a master's degree

Students targeting government careers need to treat AI literacy as a baseline requirement, not a differentiator. Workers with AI skills now earn wages on average 25% higher than those without, and that gap is showing up in government recruitment. Graduates who can use AI tools, understand their limitations, and apply them within a policy or legal framework will have a clearer path into the roles that are growing.

University programs in public administration, policy, and law need to integrate AI not as elective content but as core curriculum. George Mason University's Schar School launched a joint degree with the College of Computing in 2023 to address this gap, pairing policy education with applied computing skills. That model is worth examining for other institutions.

The government jobs that will hold up under AI pressure are those requiring judgment, accountability, public trust, and cross-agency coordination. Programs that build those capacities alongside technical fluency are better positioned than those treating AI as an add-on. The entry-level administrative pathway into government, which historically absorbed large numbers of graduates, is contracting. The replacement pathways require more specific preparation than most current curricula provide.