
Do Our Agents Need Identities?

OpenAI released a 13-page policy document today called Industrial Policy for the Intelligence Age: Ideas to Keep People First. It’s ambitious. It proposes robot taxes, a national public wealth fund seeded partly by AI companies, automatic safety net triggers tied to displacement metrics, containment playbooks for rogue AI systems, and pilots of a 32-hour workweek framed as an “efficiency dividend.” Sam Altman told Axios that the scale of what’s coming is comparable to the Progressive Era and the New Deal.

I read the whole thing. And I want to engage with it seriously, because the ideas matter. But I also want to say something that I think is missing from the conversation, something that becomes visible only if you’re operating at the institutional layer where these impacts actually land.

Universities sit at the intersection of almost everything this document talks about. We are workforce development engines, research enterprises, employers of tens of thousands, and the training ground for the next generation of workers whose careers will be shaped by whatever policy regime emerges. We hold sensitive data, manage federal compliance obligations, and operate complex enterprise systems that keep all of it running. If transformative AI is coming, and I believe it is, even if the timeline is debatable, the university is where the policy meets the pavement.

The document proposes distributing AI-enabled research infrastructure broadly across universities, community colleges, hospitals, and regional hubs. Good. It talks about portable benefits that follow individuals across jobs and industries. Good. It calls for modernizing the tax base away from payroll and toward capital gains as automated labor displaces human labor. That’s a real conversation worth having. And it proposes that workers should have a formal voice in how AI is deployed in their workplaces, something I believe in deeply.

But here’s where I want to push further, because I think there’s a conversation that we aren’t having yet.

We are already in the early days of deploying AI agents that perform real institutional work. Not chatbots answering FAQs. Agents that process transactions, triage requests, route approvals, generate reports, monitor systems, and make decisions within defined parameters. The trajectory is clear: these agents are going to take on more responsibility, operate with more autonomy, and become embedded in workflows that currently depend on human staff. So here’s my question: if an agent is doing the work of a full-time employee, shouldn’t it be governed like one?

I don’t mean this as a thought experiment. I mean it operationally. At UT Austin, every employee has an Enterprise ID, an EID. That EID is the key to everything: system access, role-based permissions, org chart placement, budget allocation, position control, performance accountability. Our Workday HCM instance manages the lifecycle of every employee from hire to retire. Now imagine an AI agent that manages reimbursement exception processing, or monitors infrastructure and initiates remediation workflows, or handles first-pass review of procurement requests against policy. That agent consumes resources. It has a cost. It operates within a reporting structure. It needs access controls. It needs to be auditable. And someone, a human, needs to be accountable for what it does.

As of today, none of us have an institutional framework for this. Agents float in a governance gap. They aren’t in Workday. They don’t have position numbers. They aren’t reflected in our staffing models or our budget structures. They aren’t covered by the HR lifecycle processes that ensure every human worker has clear accountability, supervision, and a paper trail. And yet they are increasingly doing work that, if a human were doing it, would absolutely require all of those things.

This isn’t a technology problem; it is a human capital problem. OpenAI’s document talks about shifting the tax base from payroll to capital gains as automated labor grows. That’s a macro policy question. But at the institutional level, the equivalent question is: how do we account for an agent’s labor in our workforce planning? If an agent handles the equivalent of two FTEs’ worth of procurement review, does that show up in our staffing model? How does it affect position requests? Budget justifications? If we’re reporting headcount to the Board of Regents or to federal agencies, do we need a parallel accounting for agent capacity? And what about accountability? When a human employee makes an error in a compliance-sensitive process, there’s a clear chain. When an agent makes that same error, who owns it? The developer who built it? The product owner who scoped it? The CIO whose organization deployed it? We need to start answering these questions now.
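To make the workforce-planning question concrete, here is a minimal sketch of how an agent’s throughput could be translated into FTE-equivalents for a staffing model. Every number and name here is an illustrative assumption, not an actual UT accounting method.

```python
# Hypothetical workload accounting: translate an agent's annual throughput
# into the FTE-equivalents of human labor it represents. Illustrative only.
HOURS_PER_FTE_YEAR = 2080  # standard full-time hours per year (40 hrs x 52 wks)

def agent_fte_equivalent(tasks_per_year: int, human_minutes_per_task: float) -> float:
    """How many FTEs of human labor the agent's annual output displaces."""
    human_hours = tasks_per_year * human_minutes_per_task / 60
    return round(human_hours / HOURS_PER_FTE_YEAR, 2)

# An agent doing 25,000 procurement reviews a year, each of which would
# take a human reviewer about 10 minutes:
fte = agent_fte_equivalent(25_000, 10)  # roughly two FTEs' worth of review
```

A figure like this is what a budget justification or position request could cite when agent capacity substitutes for a new hire.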

Do we need something like a UT Agent Registry, a formal institutional record for every AI agent that performs work on behalf of the university? A governed registry that captures what the agent does, what systems it accesses, what authority it has, who supervises it, and how its performance is measured and audited. The equivalent of an EID. A position description. A reporting line and a professional development budget. This might sound like bureaucracy. It’s not. It’s the same governance discipline we apply to every other resource that operates on behalf of the institution. We don’t let humans access sensitive systems without identity management, role-based access, and a clear accountability chain. We shouldn’t let agents do it either.

At the UT System level, I am responsible for reporting what AI tools we have in our environment and what types of work they do, but no existing requirement reaches the level of governance I am working through here.

OpenAI’s policy document is forward-looking in many ways, but it still frames the AI transition primarily as something that happens to workers and to economies, with governments and companies managing the fallout. What it doesn’t reckon with is that institutions like universities are going to be running hybrid workforces, humans and agents, long before the national policy framework catches up. We will be making these decisions in our ERP systems, in our identity platforms, in our governance structures, whether or not Washington has figured out the robot tax question.

At UT Austin, we’ve been working toward this, and the writing on the wall becomes clearer with each experiment and each observable outcome. UT.AI is our common AI environment where data protection, privacy, accessibility, security, and institutional identity are the foundation. The self-healing campus concept I wrote about last month is built on the premise that agentic AI will enable domain experts to build personal, ephemeral interfaces to institutional data. But that vision only works if the foundation is right, and part of getting the foundation right is being honest about the fact that agents are becoming part of our workforce and we need to govern them accordingly.

I don’t have all the answers. But I think the conversation needs to start with a simple recognition: the line between a tool and a worker is blurring, and our institutional frameworks haven’t caught up. OpenAI may be right that we need a new industrial policy. But we also need a new human capital policy, one that accounts for the non-human actors that are increasingly doing institutional work alongside our people.

The Self-Healing Campus Concept

I was in a meeting with Microsoft executives this week talking about AI adoption, roadmaps, what comes next, the usual choreography of these conversations. And I found myself saying something I hadn’t planned to say: We are going to use AI to create a self-healing campus.

I want to try to explain what I meant, because what I am seeing emerge in the marketplace is nothing short of revolutionary. But I have to start somewhere that might seem like a detour: with a platform, and with institutional responsibility. Because the self-healing idea only works if the foundation is built right, and building that foundation is the hardest part of the work.

Universities are complex institutions. We hold sensitive data about hundreds of thousands of people. We have federal compliance obligations, privacy commitments, accessibility requirements, and a duty to protect the identities and records of our students, faculty, and staff. Any vision of AI-powered self-service that ignores those realities isn’t innovation, it’s shadow IT with a new name. The history of higher education IT is full of well-intentioned workarounds that created breaches, compliance gaps, and technical debt of their own.

So before the self-healing can happen, someone has to build the environment that makes it safe. That is what we are doing at UT Austin with UT.AI.

UT.AI is our attempt to build a common AI environment where the institutional protections are not an afterthought — they are the foundation. Data protection, privacy, accessibility, security, and institutional identity are baked in from the start, so that everything built on top of it inherits those guarantees by default. The goal is not to restrict what the community can do with AI. It is to make it possible for them to do more, safely.

With that foundation established, here is the idea that excites me. Tools like Claude Code, GitHub Copilot, and OpenAI Codex are making it possible for people who are not professional software engineers to build working software. A procurement officer who understands the byzantine logic of a university purchasing workflow can now describe what they need and have a running application in an afternoon. A researcher who has spent twenty years building expertise in a domain can translate that expertise into automation without waiting for IT to prioritize their ticket. The knowledge that has always lived in people’s heads, contextual, accumulated, irreplaceable, can now be turned directly into tools.

This is significant. But it isn’t the part that struck me most. What if you don’t have to wait for an institution to build the interface you need? What if you just describe it — and it appears?

Here’s the thought that hit me: if I can open Claude Code and describe a web application and have it built in front of me, then I am one step away from a world where I experience the web entirely on my own terms. Not the interface someone else designed for me. Not the portal IT built three years ago that nobody has the budget to modernize. My interface. The one that surfaces exactly the information I need, in exactly the format I want, through a gateway I described in plain language.

Think about what that means for a university. We have decades of accumulated technical debt — systems built for a version of the institution that no longer exists, interfaces designed around assumptions about how people work that stopped being true years ago. Every year, IT organizations like mine make triage decisions about what gets modernized and what stays on life support. We do the best we can. But the backlog is real, and it grows faster than we can address it.

The traditional answer is more resources, better prioritization, smarter governance. All of that still matters. But agentic AI introduces a different answer: what if the community doesn’t need us to fix all the interfaces, because they can make their own?

A faculty member who needs the grants management system to present data in a specific way shouldn’t have to wait for a system modernization project. They should be able to describe what they want — “show me my pending awards grouped by sponsor, sorted by close date, exportable to a format my department administrator can actually use” — and have that view rendered for them, connected to the underlying data, without touching the legacy system at all. The underlying system doesn’t have to change. The interface layer becomes personal, ephemeral, generated on demand.
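As a sketch of what a generated view might do under the hood, here is the grouping-and-sorting logic that request describes. The records, field names, and function are all illustrative assumptions; in practice the data would come from the grants management system, not be hard-coded.

```python
from datetime import date
from itertools import groupby

# Hypothetical award records standing in for a grants-system query result.
awards = [
    {"sponsor": "NSF", "title": "Grant A", "close": date(2026, 3, 1), "status": "pending"},
    {"sponsor": "NIH", "title": "Grant B", "close": date(2026, 1, 15), "status": "pending"},
    {"sponsor": "NSF", "title": "Grant C", "close": date(2025, 12, 1), "status": "closed"},
]

def pending_by_sponsor(records):
    """Pending awards grouped by sponsor, each group sorted by close date."""
    pending = [r for r in records if r["status"] == "pending"]
    # groupby requires the input sorted by the grouping key first
    pending.sort(key=lambda r: (r["sponsor"], r["close"]))
    return {s: list(g) for s, g in groupby(pending, key=lambda r: r["sponsor"])}

view = pending_by_sponsor(awards)  # {"NIH": [...], "NSF": [...]}
```

The legacy system never changes; the agent generates a thin, disposable transformation like this on demand and renders it in whatever format the requester described.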

This is what I mean by self-healing. Not that the old systems get fixed. But that the community builds around them, over them, through them, and in doing so, the debt stops mattering the way it used to. The pain point that drove the ticket to IT gets resolved by the person who felt it, in the moment they felt it, using tools that are already available.


But here is why the platform layer is not optional. Every one of those personal gateways needs to know who you are, what data you are authorized to see, and how that data can be used. It needs to meet accessibility standards so that the self-service future is actually available to everyone, not just the technically confident. It needs to enforce the same privacy and security guarantees that govern every other institutional system. Without a common platform that provides those things by default, “self-healing” becomes “self-inflicted harm at scale.”
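To show what "knows who you are and what you are authorized to see" means at the gateway layer, here is a deliberately tiny authorization gate a generated interface could be forced through. The role names and data labels are illustrative assumptions, not an actual UT.AI API.

```python
# Hypothetical role-based filter applied before any generated view renders.
ROLE_PERMISSIONS = {
    "faculty": {"own_awards", "course_rosters"},
    "staff": {"work_queues"},
}

def authorize(role: str, requested_data: set[str]) -> set[str]:
    """Return only the data categories this role may see.
    Unknown roles get nothing by default (deny-by-default)."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return requested_data & allowed

# A faculty member's ephemeral view asks for awards and payroll data;
# only the authorized subset comes back.
visible = authorize("faculty", {"own_awards", "payroll"})
```

The essential design choice is deny-by-default: a personal gateway inherits the institution’s guarantees because it cannot reach data the platform has not explicitly granted.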

UT.AI is our attempt to thread that needle. Build the platform first, with institutional identity, data protection, privacy, accessibility, and security as non-negotiables. Then open it up so that the community can build on top of it, around it, through it. The freedom is real, but it is freedom within a responsible envelope, not despite one.

At UT Austin, we are already investing in the tools and infrastructure that make this possible: through a year of UT Spark, which taught us that going it alone at the platform layer isn’t enough; through the AI Studio we are launching this fall; and through the agentic capabilities we are putting in front of our community. This concept forces a shift in how we think about what IT is for.

We are not just in the business of building and maintaining systems. We are in the business of enabling people to do their best work safely, equitably, and in ways that hold up under scrutiny. If agentic AI means that more and more of that enabling happens through description rather than deployment, through personal gateways rather than institutional portals, then our job is to make sure the foundation is solid enough that the community can build on top of it without putting themselves or the institution at risk.

That is a campus that heals itself. And I think it is closer than it looks.

Copilot Reflection: First 90 Days

Even in the middle of summer, I’m continually reminded of the energy that pulses through our campus. It’s an energy fueled by curiosity, by a relentless drive to learn, and by a community that believes deeply in the power of innovation. Over the past several months, that energy has found a new outlet through our Microsoft Copilot Initiative—a key pillar in our broader UT.AI strategy.

When we launched the Copilot Initiative, our goal was simple but ambitious: to transform the way we work, collaborate, and solve problems across UT Austin. By integrating Microsoft 365 Copilot tools into our workflows, we set out to empower our staff to reclaim time, enhance productivity, and build the digital fluency that will define the next era of higher education.

The results so far have been impressive, with more to come. More than 1,200 staff members have participated in workshops, webinars, and hands-on labs. One in three participants now reports saving 1–2 hours per day—time they’re reinvesting in creative, strategic work that moves our university forward. Over 90% of our colleagues rated these learning experiences as exceptional or above average. These numbers are encouraging, but what excites me most are the stories behind them: staff using Copilot to draft emails, summarize complex documents, organize workflows, and transcribe meetings so they can reach impactful decisions faster. We’re not just adopting new tools—we’re reimagining what’s possible.

Of course, transformation isn’t always easy. We’ve encountered challenges around license allocation, data governance, and the quirks of moving from Box to SharePoint. But these are exactly the kinds of problems that signal real change is underway. They push us to ask better questions, to iterate, and to build solutions together.

What stands out from our interviews and feedback is a hunger for more: more cohort-based learning, more job-specific scenarios, more opportunities to experiment and grow. This is the heart of what makes UT Austin special. We are, at our core, a community of perpetual learners.

Looking ahead, I’m excited for what’s next. Later this month, we’ll gather for our AI Summit Week to share use cases and deepen our engagement. We’re rolling out expanded webinars and train-the-trainer workshops, building the internal capacity we need for sustained, campus-wide adoption. And as we do, we’ll continue to listen, to adapt, and to celebrate the creativity and resilience of our staff.

The Copilot Initiative is just one part of our larger UT.AI vision—a vision where technology is not just a tool, but a catalyst for lifelong learning and a culture of innovation. My hope is that we keep pushing the boundaries, keep asking what’s possible, and keep learning together. Because at UT Austin, the future isn’t something we can wait for. It’s something we build, one experiment, one workshop, one bold idea at a time. Here’s to always learning.

Here is a summary of what we are seeing in our post-training workshop feedback:

Training Reach: 1,200+ staff trained across UT Austin
Time Savings: 33% saved 1–2 hours/day; 56% saved 1–2 hours/week; 11% saved 1–2 hours/month
Satisfaction: 90%+ rated sessions as exceptional or above average
Productivity: Copilot used for drafting emails, summarizing documents, organizing workflows, project planning
Adoption: High demand for continued learning; strong interest in cohort-based and job-specific training
Challenges: License allocation, data governance, platform inconsistencies (Box vs. SharePoint)
Cultural Impact: Staff appreciated transparency and the university’s commitment to digital transformation