Do Our Agents Need Identities?

OpenAI released a 13-page policy document today called Industrial Policy for the Intelligence Age: Ideas to Keep People First. It’s ambitious. It proposes robot taxes, a national public wealth fund seeded partly by AI companies, automatic safety net triggers tied to displacement metrics, containment playbooks for rogue AI systems, and pilots of a 32-hour workweek framed as an “efficiency dividend.” Sam Altman told Axios that the scale of what’s coming is comparable to the Progressive Era and the New Deal.

I read the whole thing. And I want to engage with it seriously, because the ideas matter. But I also want to say something that I think is missing from the conversation, something that becomes visible only if you’re operating at the institutional layer where these impacts actually land.

Universities sit at the intersection of almost everything this document talks about. We are workforce development engines, research enterprises, employers of tens of thousands, and the training ground for the next generation of workers whose careers will be shaped by whatever policy regime emerges. We hold sensitive data, manage federal compliance obligations, and operate complex enterprise systems that keep all of it running. If transformative AI is coming, and I believe it is, even if the timeline is debatable, the university is where the policy meets the pavement.

The document proposes distributing AI-enabled research infrastructure broadly across universities, community colleges, hospitals, and regional hubs. Good. It talks about portable benefits that follow individuals across jobs and industries. Good. It calls for modernizing the tax base away from payroll and toward capital gains as automated labor displaces human labor. That’s a real conversation worth having. And it proposes that workers should have a formal voice in how AI is deployed in their workplaces, something I believe in deeply.

But here’s where I want to push further, because I think there’s a conversation that we aren’t having yet.

We are already in the early days of deploying AI agents that perform real institutional work. Not chatbots answering FAQs. Agents that process transactions, triage requests, route approvals, generate reports, monitor systems, and make decisions within defined parameters. The trajectory is clear: these agents are going to take on more responsibility, operate with more autonomy, and become embedded in workflows that currently depend on human staff. So here’s my question: if an agent is doing the work of a full-time employee, shouldn’t it be governed like one?

I don’t mean this as a thought experiment. I mean it operationally. At UT Austin, every employee has an Enterprise ID, an EID. That EID is the key to everything: system access, role-based permissions, org chart placement, budget allocation, position control, performance accountability. Our Workday HCM instance manages the lifecycle of every employee from hire to retire. Now imagine an AI agent that manages reimbursement exception processing, or monitors infrastructure and initiates remediation workflows, or handles first-pass review of procurement requests against policy. That agent consumes resources. It has a cost. It operates within a reporting structure. It needs access controls. It needs to be auditable. And someone, a human, needs to be accountable for what it does.

As of today, none of us have an institutional framework for this. Agents float in a governance gap. They aren’t in Workday. They don’t have position numbers. They aren’t reflected in our staffing models or our budget structures. They aren’t covered by the HR lifecycle processes that ensure every human worker has clear accountability, supervision, and a paper trail. And yet they are increasingly doing work that, if a human were doing it, would absolutely require all of those things.

This isn’t a technology problem; it is a human capital problem. OpenAI’s document talks about shifting the tax base from payroll to capital gains as automated labor grows. That’s a macro policy question. But at the institutional level, the equivalent question is: how do we account for an agent’s labor in our workforce planning? If an agent handles the equivalent of two FTEs’ worth of procurement review, does that show up in our staffing model? How does it affect position requests? Budget justifications? If we’re reporting headcount to the Board of Regents or to federal agencies, do we need a parallel accounting for agent capacity? And what about accountability? When a human employee makes an error in a compliance-sensitive process, there’s a clear chain. When an agent makes that same error, who owns it? The developer who built it? The product owner who scoped it? The CIO whose organization deployed it? We need to accelerate toward answers to these questions.

Do we need something like a UT Agent Registry, a formal institutional record for every AI agent that performs work on behalf of the university? A governed registry that captures what the agent does, what systems it accesses, what authority it has, who supervises it, and how its performance is measured and audited. The equivalent of an EID. A position description. A reporting line and a professional development budget. This might sound like bureaucracy. It’s not. It’s the same governance discipline we apply to every other resource that operates on behalf of the institution. We don’t let humans access sensitive systems without identity management, role-based access, and a clear accountability chain. We shouldn’t let agents do it either.
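To make this concrete, here is a minimal sketch of what a single registry record might capture, written as a Python-style schema. Every field name is hypothetical; the point is that an agent carries the same governance metadata we already require of a human position.

```python
# A hypothetical Agent Registry record -- the agent's equivalent of an EID,
# a position description, and a reporting line, all in one governed object.
from dataclasses import dataclass, field

@dataclass
class AgentRecord:
    agent_id: str                  # the agent's equivalent of an EID
    description: str               # what work it performs (its "position description")
    systems_accessed: list[str]    # systems it can touch
    authority: str                 # scope and limits of its decision-making
    supervisor_eid: str            # the human accountable for its actions
    unit: str                      # where it sits in the org chart
    audit_log_location: str        # where its actions are recorded for review
    performance_metrics: list[str] = field(default_factory=list)

# An illustrative entry, not a real agent or system:
reimbursement_agent = AgentRecord(
    agent_id="AGT-000123",
    description="First-pass review of reimbursement exceptions against policy",
    systems_accessed=["workday", "procurement-review"],
    authority="May flag and route exceptions; may not approve them",
    supervisor_eid="jdoe",
    unit="Financial and Administrative Services",
    audit_log_location="logs://agents/agt-000123",
    performance_metrics=["exceptions processed per week", "error rate vs. human baseline"],
)
```

A record like this wouldn’t replace Workday; it would give auditors, supervisors, and budget owners a single governed place to answer the questions above.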

At the UT System level, I am already responsible for reporting what AI tools we have in our environment and what types of work they do, but nothing at that level approaches the depth of governance I am working through here.

OpenAI’s policy document is forward-looking in many ways, but it still frames the AI transition primarily as something that happens to workers and to economies, with governments and companies managing the fallout. What it doesn’t reckon with is that institutions like universities are going to be running hybrid workforces, humans and agents, long before the national policy framework catches up. We will be making these decisions in our ERP systems, in our identity platforms, in our governance structures, whether or not Washington has figured out the robot tax question.

At UT Austin, we’ve been working toward this, and the writing on the wall becomes clearer with each experiment and each observable outcome. UT.AI is our common AI environment where data protection, privacy, accessibility, security, and institutional identity are the foundation. The self-healing campus concept I wrote about last month is built on the premise that agentic AI will enable domain experts to build personal, ephemeral interfaces to institutional data. But that vision only works if the foundation is right, and part of getting the foundation right is being honest about the fact that agents are becoming part of our workforce and we need to govern them accordingly.

I don’t have all the answers. But I think the conversation needs to start with a simple recognition: the line between a tool and a worker is blurring, and our institutional frameworks haven’t caught up. OpenAI may be right that we need a new industrial policy. But we also need a new human capital policy, one that accounts for the non-human actors that are increasingly doing institutional work alongside our people.

The Self-Healing Campus Concept

I was in a meeting with Microsoft executives this week talking about AI adoption, roadmaps, what comes next, the usual choreography of these conversations. And I found myself saying something I hadn’t planned to say: We are going to use AI to create a self-healing campus.

I want to try to explain what I meant, because what I am seeing emerge in the marketplace is nothing short of revolutionary. But I have to start somewhere that might seem like a detour: with a platform, and with institutional responsibility. Because the self-healing idea only works if the foundation is built right, and building that foundation is the hardest part of the work.

Universities are complex institutions. We hold sensitive data about hundreds of thousands of people. We have federal compliance obligations, privacy commitments, accessibility requirements, and a duty to protect the identities and records of our students, faculty, and staff. Any vision of AI-powered self-service that ignores those realities isn’t innovation, it’s shadow IT with a new name. The history of higher education IT is full of well-intentioned workarounds that created breaches, compliance gaps, and technical debt of their own.

So before the self-healing can happen, someone has to build the environment that makes it safe. That is what we are doing at UT Austin with UT.AI.

UT.AI is our attempt to build a common AI environment where the institutional protections are not an afterthought — they are the foundation. Data protection, privacy, accessibility, security, and institutional identity are baked in from the start, so that everything built on top of it inherits those guarantees by default. The goal is not to restrict what the community can do with AI. It is to make it possible for them to do more, safely.

With that foundation established, here is the idea that excites me. Tools like Claude Code, GitHub Copilot, and OpenAI Codex are making it possible for people who are not professional software engineers to build working software. A procurement officer who understands the byzantine logic of a university purchasing workflow can now describe what they need and have a running application in an afternoon. A researcher who has spent twenty years building expertise in a domain can translate that expertise into automation without waiting for IT to prioritize their ticket. The knowledge that has always lived in people’s heads, contextual, accumulated, irreplaceable, can now be turned directly into tools.

This is significant. But it isn’t the part that struck me most. What if you don’t have to wait for an institution to build the interface you need? What if you just describe it — and it appears?

Here’s the thought that hit me: if I can open Claude Code and describe a web application and have it built in front of me, then I am one step away from a world where I experience the web entirely on my own terms. Not the interface someone else designed for me. Not the portal IT built three years ago that nobody has the budget to modernize. My interface. The one that surfaces exactly the information I need, in exactly the format I want, through a gateway I described in plain language.

Think about what that means for a university. We have decades of accumulated technical debt — systems built for a version of the institution that no longer exists, interfaces designed around assumptions about how people work that stopped being true years ago. Every year, IT organizations like mine make triage decisions about what gets modernized and what stays on life support. We do the best we can. But the backlog is real, and it grows faster than we can address it.

The traditional answer is more resources, better prioritization, smarter governance. All of that still matters. But agentic AI introduces a different answer: what if the community doesn’t need us to fix all the interfaces, because they can make their own?

A faculty member who needs the grants management system to present data in a specific way shouldn’t have to wait for a system modernization project. They should be able to describe what they want — “show me my pending awards grouped by sponsor, sorted by close date, exportable to a format my department administrator can actually use” — and have that view rendered for them, connected to the underlying data, without touching the legacy system at all. The underlying system doesn’t have to change. The interface layer becomes personal, ephemeral, generated on demand.
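To illustrate, here is a hypothetical sketch of the kind of throwaway view an agent might generate on demand. The data source and field names are invented for the example, not drawn from any actual UT system:

```python
# A disposable, personally generated view: pending awards grouped by
# sponsor, sorted by close date, exported to CSV. In practice an agent
# would generate code like this against a governed data source; here the
# awards list is a hypothetical stand-in for that source.
import csv
from collections import defaultdict
from datetime import date

awards = [
    {"sponsor": "NSF", "title": "Grant A", "close_date": date(2026, 3, 1)},
    {"sponsor": "NIH", "title": "Grant B", "close_date": date(2026, 1, 15)},
    {"sponsor": "NSF", "title": "Grant C", "close_date": date(2025, 12, 31)},
]

# Group by sponsor, keeping each sponsor's awards sorted by close date.
by_sponsor = defaultdict(list)
for award in sorted(awards, key=lambda a: a["close_date"]):
    by_sponsor[award["sponsor"]].append(award)

# Export in a format a department administrator can actually use.
with open("pending_awards.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["sponsor", "title", "close_date"])
    writer.writeheader()
    for sponsor in sorted(by_sponsor):
        writer.writerows(by_sponsor[sponsor])
```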

This is what I mean by self-healing. Not that the old systems get fixed, but that the community builds around them, over them, and through them, and in doing so, the debt stops mattering the way it used to. The pain point that drove the ticket to IT gets resolved by the person who felt it, in the moment they felt it, using tools that are already available.

But here is why the platform layer is not optional. Every one of those personal gateways needs to know who you are, what data you are authorized to see, and how that data can be used. It needs to meet accessibility standards so that the self-service future is actually available to everyone, not just the technically confident. It needs to enforce the same privacy and security guarantees that govern every other institutional system. Without a common platform that provides those things by default, “self-healing” becomes “self-inflicted harm at scale.”
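As a sketch of what “by default” could look like, imagine every generated gateway passing through a platform-level check before it renders anything. The function names and policy store below are hypothetical; the design point is that identity, authorization, and data-use rules live in the platform, so a personal interface inherits them instead of reimplementing (or skipping) them:

```python
# A hypothetical platform-level gate for personally generated interfaces.
from dataclasses import dataclass

@dataclass
class GatewayRequest:
    user_eid: str      # institutional identity, resolved by the platform
    resource: str      # the governed data the generated view wants to read
    purpose: str       # declared use, checkable against data-use policy

# Hypothetical policy store: resource -> EIDs authorized to read it.
POLICY = {"grants.pending_awards": {"prof_jane", "admin_lee"}}

def authorize(request: GatewayRequest) -> bool:
    """Allow access only if the platform's policy says so."""
    return request.user_eid in POLICY.get(request.resource, set())

request = GatewayRequest("prof_jane", "grants.pending_awards", "department review")
if authorize(request):
    print("render the personal view")   # the ephemeral interface gets its data
else:
    raise PermissionError("denied by platform policy")
```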

UT.AI is our attempt to thread that needle. Build the platform first, with institutional identity, data protection, privacy, accessibility, and security as non-negotiables. Then open it up so that the community can build on top of it, around it, through it. The freedom is real, but it is freedom within a responsible envelope, not despite one.

At UT Austin, we are already investing in the tools and infrastructure that make this possible: through a year of UT Spark, which has taught us that the platform layer alone isn’t enough; through the AI Studio we are launching this fall; and through the agentic capabilities we are putting in front of our community. This concept forces a shift in how we think about what IT is for.

We are not just in the business of building and maintaining systems. We are in the business of enabling people to do their best work safely, equitably, and in ways that hold up under scrutiny. If agentic AI means that more and more of that enabling happens through description rather than deployment, through personal gateways rather than institutional portals, then our job is to make sure the foundation is solid enough that the community can build on top of it without putting themselves or the institution at risk.

That is a campus that heals itself. And I think it is closer than it looks.

February ‘26 ET Leadership Updates

I’m excited to share two leadership updates that better position us to lead in AI innovation while strengthening our commitment to teaching, learning, accessibility, and digital adoption across campus.

Mario Guerra Jr. Promoted to Associate Vice President, AI Platforms & Innovation

Mario Guerra Jr. will transition from Director of Enterprise Learning Technology into a newly created role as Associate Vice President for AI Platforms & Innovation.

In this role, Mario will build and lead UT.AI Studio, our flagship AI innovation program that brings together enterprise platform development, student talent cultivation, and strategic corporate partnerships. This promotion recognizes Mario’s vision and leadership in emerging technologies and reflects our commitment to moving with urgency and intention in advancing AI across the university. Mario will oversee:

  • AI Platform Engineering: Enterprise AI platforms serving the UT community, including UT Spark, UT Sage, Microsoft 365 Copilot, and custom AI agents
  • UT.AI Academy: A student talent development program training more than 100 students annually through industry certifications and real-world project experience
  • Corporate Partnerships: Strategic relationships with technology partners, including our anchor partnership with Dell Technologies

Under Mario’s leadership, we will continue to distinguish ourselves as a national leader in higher-education AI—not only by deploying technology, but by transforming how we teach, learn, conduct research, and operate as an institution.

New Role: Associate Vice President, Enterprise Initiatives & Instructional Technology

This AVP will lead our technology-enabled learning ecosystem while driving digital adoption across campus and overseeing major, cross-cutting institutional initiatives. The role brings together instructional technology, accessibility, learning environments, and enterprise program execution under a single strategic leader.

This AVP will be responsible for:

  • Instructional Technology Solutions: Canvas LMS, instructional tools, faculty engagement, and teaching innovation
  • Digital Accessibility Services: Accessible course design, universal design for learning, and inclusive pedagogy
  • Learning Spaces & Educational Environments: Technology strategy for classrooms, learning spaces, and active learning environments
  • Digital Adoption & Campus Dexterity: Building faculty and staff fluency with Microsoft 365, Canvas, AI platforms, and learning technologies
  • Enterprise Program Execution: Leading major CIO-sponsored transformation initiatives that span organizational boundaries

Critically, this AVP will partner closely with Mario and the UT.AI Studio team to translate AI platform innovation into faculty- and staff-facing programs, training, and support, ensuring our AI investments result in meaningful, scalable adoption across campus.

WHY THIS MATTERS

Artificial intelligence is transforming higher education at an unprecedented speed. This leadership realignment reflects our commitment to lead that transformation with both bold vision and responsible execution.

By establishing focused executive leadership for AI platforms and innovation and for instructional technology and digital adoption, we ensure each area receives the strategic attention, resources, and accountability it requires.

Mario’s elevation to AVP recognizes the critical importance of AI to UT Austin’s institutional future. The creation of a second AVP role ensures we continue advancing excellence in teaching and learning, while strengthening digital fluency, accessibility, and adoption across campus.

These are deliberate investments in our ability to:

  • Serve more than 50,000 students
  • Enable faculty innovation at scale
  • Position UT Austin as a national model for AI in higher education

WHAT’S NEXT

The new Associate Vice President for Enterprise Initiatives & Instructional Technology position will be posted on February 4.

We are seeking a relationship-driven leader with:

  • Deep instructional technology expertise
  • Experience as an Associate Vice President, CIO, or Deputy CIO
  • A strong track record in digital adoption and change leadership
  • A passion for faculty engagement and inclusive learning environments

I’m grateful to Mario for his leadership and vision, and excited about the future we are building together. Please join me in congratulating him on this well-deserved promotion.

Reclaiming Time and Elevating Our Work

Over the past six months, we’ve been working closely with colleagues across the university to better understand how Microsoft Copilot is shaping our day-to-day operations. While much of the attention around these technologies focuses on novelty or experimentation, what we are seeing and hearing is something more practical and powerful: it is creating time.

Using our workforce data as a baseline, we know the median annual salary at UT Austin and what that equates to as an hourly wage. From there, it’s not hard to see the potential for measuring impact. If each of our 2,296 active Copilot license holders, nearly all of whom are in administrative roles, saves just two hours a week, that represents $8.5 million in regained value annually. At three hours, it rises to $12.8 million. Four hours brings the total past $17.1 million. And what our community of users is telling us is that they are recapturing no less than two hours each week with these tools.
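For anyone who wants to check the math, here is the back-of-the-envelope calculation. The hourly rate is an assumption reverse-engineered from the totals above, not a published figure:

```python
# Reclaimed-time value at two, three, and four hours per week.
# HOURLY_RATE is an assumption implied by the $8.5M / $12.8M / $17.1M
# figures in the post, not an official UT Austin number.
LICENSE_HOLDERS = 2296
WEEKS_PER_YEAR = 52
HOURLY_RATE = 35.75  # assumed median hourly wage

for hours_per_week in (2, 3, 4):
    annual_value = LICENSE_HOLDERS * WEEKS_PER_YEAR * hours_per_week * HOURLY_RATE
    print(f"{hours_per_week} hr/week -> ${annual_value / 1e6:.1f}M annually")
# 2 hr/week -> $8.5M annually
# 3 hr/week -> $12.8M annually
# 4 hr/week -> $17.1M annually
```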

Those are big numbers, and they get your attention. But what excites me isn’t just the financial story; it’s the operational one. Every reclaimed hour represents capacity. That’s time that can be redirected away from repetitive tasks and toward higher-value work. It means more space to think critically, connect meaningfully with our students, colleagues, and the community, and focus on advancing the mission of the university.

Graph showing dollar amount of savings based on time saved with Copilot.

Before I get a ton of hate mail for conflating actual savings with reclaimed time, I will admit that these types of calculations don’t tell much of a real story. But if you dig right below the skepticism and think about how nearly all of our time is divided across dozens of micro-tasks each day, finding low-level opportunities to recapture some of it can mean quite a bit. Today it happens here and there: with a tough email that we need to write or respond to, in summarizing a new piece of legislation, in doing a quick review of a contract, or in any of the other things we all do each day in support of the university. Now think of all of that in the aggregate and you can begin to see how much time can actually be saved. And in my world, where I am trying to fit more work into every single moment, this matters.

We often talk about innovation in terms of new tools, but the real innovation here may be how we choose to use the time we’re getting back. AI can’t simply be about doing things faster; it must also be about creating the breathing room we need to elevate our work and to think differently about what’s possible. What I am looking for are stories from the community where our AI tools are saving you time, opening new thinking, or impacting current practice in ways that allow you more space to breathe, think, and engage more broadly.

From Miscellaneous to Meaningful

Back in 2008, I wrote a post called Should It All Be Miscellaneous?, inspired by David Weinberger’s book and the liberating idea that the web didn’t need rigid hierarchies. Tags, links, and search could replace the old drawers and filing cabinets of the physical world. At the time, that felt like progress: why force everything into neat boxes when the web could be sprawling, searchable, and serendipitous?

Why do I bring up a blog post from nearly 20 years ago? I have a new habit of finding a spot early on Saturday afternoon to catch up on work and have lunch. Standing in line yesterday, I remembered that post and wanted to reflect on how it compares to what we are discovering in our AI journey. The comment section proved worth the rabbit hole.

So here we are in 2025, and I find myself revisiting that question in light of what I wrote recently in Exposing the Missing Pieces in Our Content. Not so much what I wrote, but what the community gave back in the comments. The irony? The very thing that once felt like freedom, letting everything be miscellaneous, has become one of our biggest challenges.

AI has thrown a harsh light on this reality. As Mario said in the comments, “One of the main blockers to unlocking the power of AI is the state of our data and information.” That’s the truth. Our Copilot trainings have surfaced the same theme over and over, the technology is ready, but our content isn’t. Thousands of sites, all managed differently, with redundant information, and with varying levels of accuracy and oversight. It’s not that the web is broken, it’s that our relationship with it hasn’t matured.

I guess that makes some sense; the web as we know it is still a relative puppy within the higher education governance timeline. Let’s be honest, IT governance is still a work in progress, and it predates the web on our campuses by 30 or so years. Maybe it’s no wonder we are still looking for answers.

Cody’s comment stuck with me too: “I don’t think websites are going anywhere.” I agree (even though I poked at him). Websites aren’t disappearing tomorrow. But the way people expect to interact with information is shifting fast. AI agents, chat interfaces, and voice assistants aren’t replacing the web; they’re reframing it. They’re forcing us to ask: what is the role of a website when an agent can synthesize answers in seconds? Maybe the answer is harmony, as Cody suggested: agents and websites complementing each other, each doing what they do best.

Valerie and Kristin added another layer: this isn’t just about technology; it’s about stewardship. Kristin’s metaphor hit home: “We don’t build a world-class art museum and ask everyone to drop off the paintings they like most.” Yet that’s how we’ve treated our institutional web for decades, every department spinning up a site, every reorg leaving behind digital fossils. AI is exposing that fragility. And as Kristin said, maybe the CIO has to become the Chief Curator now. I mean content is information after all.

So here we are, nearly 20 years after I asked if everything should be miscellaneous. The answer? It depends. The web still needs flexibility, creativity, and openness. But it also needs anchors, places where truth lives, where information is accurate, current, and trusted. Not because AI demands it (though it does), but because our community deserves it.

AI didn’t create this problem; it is revealing it. And maybe that’s the push we need to finally treat our information more like core infrastructure. Why would that change the equation? IT governance has given us the idea of investing wisely over the lifecycle of systems to ensure they are resilient, robust, and reliable as they are constantly consumed. Yes, the content floats on physical infrastructure, but shouldn’t we value it as much as the switches, cabling, and access points? And just as with managing the lifecycle of infrastructure, it should be governed by prioritizing the most critical, highest-risk, and greatest value-creating investments.

It all leaves me asking so many questions that I don’t have answers to. Questions like: if we could only invest in 20–30 primary sites across the university, which ones would make the cut? How do we balance the creative chaos of the open web with the need for authoritative sources that AI (and humans) can trust? Are we ready to think of ourselves not just as technologists or communicators, but as curators of institutional knowledge?

I bet someone out there has a thought or two.