OpenAI released a 13-page policy document today called Industrial Policy for the Intelligence Age: Ideas to Keep People First. It’s ambitious. It proposes robot taxes, a national public wealth fund seeded partly by AI companies, automatic safety net triggers tied to displacement metrics, containment playbooks for rogue AI systems, and pilots of a 32-hour workweek framed as an “efficiency dividend.” Sam Altman told Axios that the scale of what’s coming is comparable to the Progressive Era and the New Deal.
I read the whole thing. And I want to engage with it seriously, because the ideas matter. But I also want to say something that I think is missing from the conversation, something that becomes visible only if you’re operating at the institutional layer where these impacts actually land.
Universities sit at the intersection of almost everything this document talks about. We are workforce development engines, research enterprises, employers of tens of thousands, and the training ground for the next generation of workers whose careers will be shaped by whatever policy regime emerges. We hold sensitive data, manage federal compliance obligations, and operate complex enterprise systems that keep all of it running. If transformative AI is coming, and I believe it is, even if the timeline is debatable, the university is where policy meets the pavement.
The document proposes distributing AI-enabled research infrastructure broadly across universities, community colleges, hospitals, and regional hubs. Good. It talks about portable benefits that follow individuals across jobs and industries. Good. It calls for modernizing the tax base away from payroll and toward capital gains as automated labor displaces human labor. That’s a real conversation worth having. And it proposes that workers should have a formal voice in how AI is deployed in their workplaces, something I believe in deeply.
But here’s where I want to push further, because I think there’s a conversation that we aren’t having yet.
We are already in the early days of deploying AI agents that perform real institutional work. Not chatbots answering FAQs. Agents that process transactions, triage requests, route approvals, generate reports, monitor systems, and make decisions within defined parameters. The trajectory is clear: these agents are going to take on more responsibility, operate with more autonomy, and become embedded in workflows that currently depend on human staff. So here’s my question: if an agent is doing the work of a full-time employee, shouldn’t it be governed like one?
I don’t mean this as a thought experiment. I mean it operationally. At UT Austin, every employee has an Enterprise ID, an EID. That EID is the key to everything: system access, role-based permissions, org chart placement, budget allocation, position control, performance accountability. Our Workday HCM instance manages the lifecycle of every employee from hire to retire. Now imagine an AI agent that manages reimbursement exception processing, or monitors infrastructure and initiates remediation workflows, or handles first-pass review of procurement requests against policy. That agent consumes resources. It has a cost. It operates within a reporting structure. It needs access controls. It needs to be auditable. And someone, a human, needs to be accountable for what it does.
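To make the parallel concrete, here’s a minimal sketch of what it might look like to give an agent the same identity-and-access treatment we give a human EID holder. Every name in it (AgentIdentity, the role scopes, the agent and owner IDs) is a hypothetical illustration, not anything that exists in Workday or our identity platform today:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an agent modeled as a principal in the same identity
# system that governs human EID holders. None of these names reflect a real
# Workday or UT identity-platform API.

@dataclass(frozen=True)
class AgentIdentity:
    agent_eid: str   # the agent's equivalent of a human EID
    owner_eid: str   # the accountable human supervisor
    org_unit: str    # where the agent sits in the org chart
    roles: frozenset = field(default_factory=frozenset)

# Role-based permissions, the same discipline we apply to human accounts.
ROLE_SCOPES = {
    "procurement-reviewer": {"procurement:read", "procurement:flag"},
    "infra-monitor": {"telemetry:read", "ticket:create"},
}

def can(agent: AgentIdentity, scope: str) -> bool:
    """Check an agent's access the way we'd check a human's role assignment."""
    return any(scope in ROLE_SCOPES.get(role, set()) for role in agent.roles)

agent = AgentIdentity(
    agent_eid="AGT-00042",
    owner_eid="EID-jdoe",
    org_unit="Financial Services",
    roles=frozenset({"procurement-reviewer"}),
)
assert can(agent, "procurement:read")
assert not can(agent, "procurement:approve")  # authority is bounded by role
```

The point isn’t the code. It’s that every question the data model forces you to answer, who owns this agent, where does it sit, what can it touch, is a question we already answer for every human hire.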
As of today, none of us have an institutional framework for this. Agents float in a governance gap. They aren’t in Workday. They don’t have position numbers. They aren’t reflected in our staffing models or our budget structures. They aren’t covered by the HR lifecycle processes that ensure every human worker has clear accountability, supervision, and a paper trail. And yet they are increasingly doing work that, if a human were doing it, would absolutely require all of those things.
This isn’t a technology problem; it’s a human capital problem. OpenAI’s document talks about shifting the tax base from payroll to capital gains as automated labor grows. That’s a macro policy question. But at the institutional level, the equivalent question is: how do we account for an agent’s labor in our workforce planning? If an agent handles the equivalent of two FTEs’ worth of procurement review, does that show up in our staffing model? How does it affect position requests? Budget justifications? If we’re reporting headcount to the Board of Regents or to federal agencies, do we need a parallel accounting for agent capacity? And what about accountability? When a human employee makes an error in a compliance-sensitive process, there’s a clear chain. When an agent makes that same error, who owns it? The developer who built it? The product owner who scoped it? The CIO whose organization deployed it? We need to accelerate toward answers to these questions.
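To make the FTE question concrete, here’s a back-of-the-envelope sketch. Every number in it is invented for illustration; a real baseline would come from our own process data:

```python
# Hypothetical back-of-the-envelope: expressing an agent's throughput as
# FTE-equivalents for workforce planning. All figures are invented.

agent_reviews_per_month = 2400      # procurement requests the agent cleared
human_reviews_per_fte_month = 1200  # baseline for one full-time human reviewer

agent_fte_equivalent = agent_reviews_per_month / human_reviews_per_fte_month
print(f"Agent capacity: {agent_fte_equivalent:.1f} FTE-equivalents")
# -> Agent capacity: 2.0 FTE-equivalents
```

Crude as it is, even this arithmetic surfaces the planning question: if that 2.0 shows up nowhere in our staffing model, our headcount reporting is quietly understating the work the institution actually performs.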
Do we need something like a UT Agent Registry, a formal institutional record for every AI agent that performs work on behalf of the university? A governed registry that captures what the agent does, what systems it accesses, what authority it has, who supervises it, and how its performance is measured and audited. The equivalent of an EID. A position description. A reporting line and a professional development budget. This might sound like bureaucracy. It’s not. It’s the same governance discipline we apply to every other resource that operates on behalf of the institution. We don’t let humans access sensitive systems without identity management, role-based access, and a clear accountability chain. We shouldn’t let agents do it either.
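Here is one hypothetical shape such a registry entry could take. The field names are my invention, chosen to mirror the questions a position description answers for a human employee; none of this reflects an existing UT system:

```python
from dataclasses import dataclass

# Hypothetical sketch of a UT Agent Registry entry. Field names are invented
# to parallel what we already record for a human position.

@dataclass
class AgentRegistryEntry:
    agent_eid: str             # equivalent of a human EID
    description: str           # what work it performs (its position description)
    systems_accessed: list     # which systems it touches, and in what role
    authority: str             # decisions it may make without human review
    supervisor_eid: str        # the human accountable for its actions
    audit_log_location: str    # where its actions are recorded and reviewable
    performance_metrics: list  # how its work is measured

entry = AgentRegistryEntry(
    agent_eid="AGT-00042",
    description="First-pass review of procurement requests against policy",
    systems_accessed=["Workday (read-only)", "procurement request queue"],
    authority="May flag or route requests; may not approve spend",
    supervisor_eid="EID-jdoe",
    audit_log_location="audit-store://agents/AGT-00042/",
    performance_metrics=["false-flag rate", "median review latency"],
)
```

None of these fields is exotic. Each one is something we already capture, in some form, for every human who holds an EID.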
At the UT System level, I am responsible for reporting what AI tools we have in our environment and what types of work they do, but that inventory doesn’t reach anywhere near the level of governance I’m working through here.
OpenAI’s policy document is forward-looking in many ways, but it still frames the AI transition primarily as something that happens to workers and to economies, with governments and companies managing the fallout. What it doesn’t reckon with is that institutions like universities are going to be running hybrid workforces, humans and agents, long before the national policy framework catches up. We will be making these decisions in our ERP systems, in our identity platforms, in our governance structures, whether or not Washington has figured out the robot tax question.
At UT Austin, we’ve been working toward this, and the writing on the wall grows clearer with each experiment and each observable outcome. UT.AI is our common AI environment, where data protection, privacy, accessibility, security, and institutional identity are the foundation. The self-healing campus concept I wrote about last month is built on the premise that agentic AI will enable domain experts to build personal, ephemeral interfaces to institutional data. But that vision only works if the foundation is right, and part of getting the foundation right is being honest about the fact that agents are becoming part of our workforce and need to be governed accordingly.
I don’t have all the answers. But I think the conversation needs to start with a simple recognition: the line between a tool and a worker is blurring, and our institutional frameworks haven’t caught up. OpenAI may be right that we need a new industrial policy. But we also need a new human capital policy, one that accounts for the non-human actors that are increasingly doing institutional work alongside our people.
