Category Archives: Thoughts

The Self-Healing Campus Concept

I was in a meeting with Microsoft executives this week talking about AI adoption, roadmaps, what comes next, the usual choreography of these conversations. And I found myself saying something I hadn’t planned to say: We are going to use AI to create a self-healing campus.

I want to try to explain what I meant, because what I am seeing emerge in the marketplace is nothing short of revolutionary. But I have to start somewhere that might seem like a detour: with a platform, and with institutional responsibility. Because the self-healing idea only works if the foundation is built right, and building that foundation is the hardest part of the work.

Universities are complex institutions. We hold sensitive data about hundreds of thousands of people. We have federal compliance obligations, privacy commitments, accessibility requirements, and a duty to protect the identities and records of our students, faculty, and staff. Any vision of AI-powered self-service that ignores those realities isn’t innovation, it’s shadow IT with a new name. The history of higher education IT is full of well-intentioned workarounds that created breaches, compliance gaps, and technical debt of their own.

So before the self-healing can happen, someone has to build the environment that makes it safe. That is what we are doing at UT Austin with UT.AI.

UT.AI is our attempt to build a common AI environment where the institutional protections are not an afterthought — they are the foundation. Data protection, privacy, accessibility, security, and institutional identity are baked in from the start, so that everything built on top of it inherits those guarantees by default. The goal is not to restrict what the community can do with AI. It is to make it possible for them to do more, safely.

With that foundation established, here is the idea that excites me. Tools like Claude Code, GitHub Copilot, and OpenAI Codex are making it possible for people who are not professional software engineers to build working software. A procurement officer who understands the byzantine logic of a university purchasing workflow can now describe what they need and have a running application in an afternoon. A researcher who has spent twenty years building expertise in a domain can translate that expertise into automation without waiting for IT to prioritize their ticket. The knowledge that has always lived in people’s heads, contextual, accumulated, irreplaceable, can now be turned directly into tools.

This is significant. But it isn’t the part that struck me most. What if you don’t have to wait for an institution to build the interface you need? What if you just describe it — and it appears?

Here’s the thought that hit me: if I can open Claude Code and describe a web application and have it built in front of me, then I am one step away from a world where I experience the web entirely on my own terms. Not the interface someone else designed for me. Not the portal IT built three years ago that nobody has the budget to modernize. My interface. The one that surfaces exactly the information I need, in exactly the format I want, through a gateway I described in plain language.

Think about what that means for a university. We have decades of accumulated technical debt — systems built for a version of the institution that no longer exists, interfaces designed around assumptions about how people work that stopped being true years ago. Every year, IT organizations like mine make triage decisions about what gets modernized and what stays on life support. We do the best we can. But the backlog is real, and it grows faster than we can address it.

The traditional answer is more resources, better prioritization, smarter governance. All of that still matters. But agentic AI introduces a different answer: what if the community doesn’t need us to fix all the interfaces, because they can make their own?

A faculty member who needs the grants management system to present data in a specific way shouldn’t have to wait for a system modernization project. They should be able to describe what they want — “show me my pending awards grouped by sponsor, sorted by close date, exportable to a format my department administrator can actually use” — and have that view rendered for them, connected to the underlying data, without touching the legacy system at all. The underlying system doesn’t have to change. The interface layer becomes personal, ephemeral, generated on demand.

This is what I mean by self-healing. Not that the old systems get fixed, but that the community builds around them, over them, through them, and in doing so, the debt stops mattering the way it used to. The pain point that drove the ticket to IT gets resolved by the person who felt it, in the moment they felt it, using tools that are already available.

But here is why the platform layer is not optional. Every one of those personal gateways needs to know who you are, what data you are authorized to see, and how that data can be used. It needs to meet accessibility standards so that the self-service future is actually available to everyone, not just the technically confident. It needs to enforce the same privacy and security guarantees that govern every other institutional system. Without a common platform that provides those things by default, “self-healing” becomes “self-inflicted harm at scale.”

UT.AI is our attempt to thread that needle. Build the platform first, with institutional identity, data protection, privacy, accessibility, and security as non-negotiables. Then open it up so that the community can build on top of it, around it, through it. The freedom is real, but it is freedom within a responsible envelope, not despite one.

At UT Austin, we are already investing in the tools and infrastructure that make this possible: a year of UT Spark, the AI Studio we are launching this fall, and the agentic capabilities we are putting in front of our community. One lesson from that year is that going it alone at the platform layer isn’t enough. This concept forces us to shift how we think about what IT is for.

We are not just in the business of building and maintaining systems. We are in the business of enabling people to do their best work safely, equitably, and in ways that hold up under scrutiny. If agentic AI means that more and more of that enabling happens through description rather than deployment, through personal gateways rather than institutional portals, then our job is to make sure the foundation is solid enough that the community can build on top of it without putting themselves or the institution at risk.

That is a campus that heals itself. And I think it is closer than it looks.

Practicing Like We Play

On game day, Darrell K Royal Stadium becomes the heart of campus. More than one hundred thousand fans show up to cheer for the Longhorns, filling the stands in burnt orange. I look forward to Saturdays in the fall for so many reasons, but the best is that I get to feel like I am truly part of something far bigger and more meaningful as I take in the scenes from the stands. For the fans, we all want the day to feel seamless. Tickets scan, Wi-Fi connects, replays play, and the stadium feels secure. That simplicity is not an accident; it’s the result of months of preparation and a game day of real-time effort from teams in Enterprise Technology and our partners all across campus.

Just as the football team prepares with practices, film study, and repetition, we practice the way we intend to perform. Perfection is always the goal. The Networking team tunes wireless coverage across the stadium and ensures every vendor, ticketing station, and media outlet can connect. The Cable and Construction team checks and runs the fiber and cabling that carry instant replay, coach-to-sideline communication, and live broadcasts across the nation. The Warehouse team stages and delivers every piece of gear, from radios and cables to generators and water, so that when the call comes, it’s ready. The Electronic Physical Security Systems team sets schedules, monitors cameras, and ensures that safety is woven into the game day experience from the start.

Evening at DKR, Austin, TX

When kickoff arrives, those teams are moving together as one. Networking watches over every access point and switch. EPSS monitors security and supports the Emergency Operations Center. Cable and Construction crews are ready for rapid response. The Warehouse team keeps supplies moving to where they’re needed most. It’s live, it’s fast, and just like the players on the field, execution must be perfect.

Game day is proof of a larger truth: at UT Austin, technology touches nearly every aspect of campus life. From classrooms to research labs, from student housing to DKR Stadium, our work shapes the experience of our community. Like all the teams across this campus, we accept that responsibility with seriousness and pride. Our role is to prepare, to execute, and to remain in the background so that students, faculty, staff, alumni, and fans can focus on what matters most. That is the measure of success. When UT takes the field and the stadium hums with energy, the technology simply works. To me, this is what it means to lead at a world-class university: quietly, reliably, and with the same pursuit of perfection we expect from the Longhorns on the field. That is the standard we set for ourselves, and it is just some of the work that makes me proud to serve this community. Hook ’em!

Reclaiming Time and Elevating Our Work

Over the past six months, we’ve been working closely with colleagues across the university to better understand how Microsoft Copilot is shaping our day-to-day operations. While much of the attention around these technologies focuses on novelty or experimentation, what we are seeing and hearing is something more practical and powerful: it is creating time.

Using our workforce data as a baseline, we know the median annual salary at UT Austin and what that equates to as an hourly wage. From there, it’s not hard to see the potential for measuring impact. If each of our 2,296 active Copilot license holders, nearly all of whom are in administrative roles, saves just two hours a week, that represents $8.5 million in regained value annually. At three hours, it rises to $12.8 million. Four hours brings the total past $17.1 million. What our community of users is telling us is that they are recapturing at least two hours each week.
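
As a rough illustration, the arithmetic behind these figures fits in a few lines of Python. The hourly rate below is a hypothetical placeholder chosen for illustration, not our actual workforce figure:

```python
# Back-of-the-envelope sketch of the reclaimed-time calculation.
# HOURLY_RATE is an assumed illustrative value, not UT Austin's median wage.
LICENSES = 2_296        # active Copilot license holders (from the post)
WEEKS_PER_YEAR = 52
HOURLY_RATE = 35.60     # hypothetical hourly wage in USD

def annual_value(hours_saved_per_week: float) -> float:
    """Dollar value of time reclaimed per year across all license holders."""
    return LICENSES * hours_saved_per_week * WEEKS_PER_YEAR * HOURLY_RATE

for hours in (2, 3, 4):
    # e.g. 2 hrs/week comes out to roughly $8.5M at this assumed rate
    print(f"{hours} hrs/week -> ${annual_value(hours):,.0f}")
```

The point of the sketch is how sensitive the total is to small per-person changes: each additional hour per week moves the annual figure by millions.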

Those are big numbers, and they get your attention. But what excites me isn’t just the financial story, it’s the operational one. Every reclaimed hour represents capacity. That’s time that can be redirected away from repetitive tasks and toward higher-value work. It means more space to think critically, to connect meaningfully with our students, colleagues, and the community, and to focus on advancing the mission of the university.

Graph showing dollar amount of savings based on time saved with Copilot.

Before I get a ton of hate mail for conflating actual savings with reclaimed time, I will admit that these types of calculations don’t tell much of a real story. But if you dig right below the skepticism and think about how nearly all of our time is divided across dozens of micro-tasks each day, finding low-level opportunities to recapture some of it can mean quite a bit. Today it is happening here and there: a tough email we need to write or respond to, a new piece of legislation to summarize, a quick review of a contract, or any of the other things we all do each day in support of the university. Now think of all of that in the aggregate and you can begin to see how much time can actually be saved. And in my world, where I am trying to fit more work into every single moment, this matters.

We often talk about innovation in terms of new tools, but the real innovation here may be how we choose to use the time we’re getting back. AI can’t simply be about doing things faster; it must also be about creating the breathing room we need to elevate our work and to think differently about what’s possible. What I am looking for are stories from the community where our AI tools are saving you time, opening new thinking, or changing current practice in ways that allow you more space to breathe, think, and engage more broadly.

From Miscellaneous to Meaningful

Back in 2008, I wrote a post called Should It All Be Miscellaneous? inspired by David Weinberger’s book and the liberating idea that the web didn’t need rigid hierarchies. Tags, links, and search could replace the old drawers and filing cabinets of the physical world. At the time, that felt like progress: why force everything into neat boxes when the web could be sprawling, searchable, and serendipitous?

Why bring up a blog post from nearly 20 years ago? I have a new habit of finding a spot early on Saturday afternoon to catch up on work and have lunch. Standing in line yesterday, I remembered that post and wanted to reflect on how it compares to what we are discovering in our AI journey. The comment section proved worth the rabbit hole.

So here we are in 2025, and I find myself revisiting that question in light of what I wrote recently in Exposing the Missing Pieces in Our Content. Not so much what I wrote, but what the community gave back in the comments. The irony? The very thing that once felt like freedom, letting everything be miscellaneous, has become one of our biggest challenges.

AI has thrown a harsh light on this reality. As Mario said in the comments, “One of the main blockers to unlocking the power of AI is the state of our data and information.” That’s the truth. Our Copilot trainings have surfaced the same theme over and over: the technology is ready, but our content isn’t. Thousands of sites, all managed differently, with redundant information and varying levels of accuracy and oversight. It’s not that the web is broken; it’s that our relationship with it hasn’t matured.

I guess that makes some sense; the web as we know it is still a relative puppy on the higher education governance timeline. Let’s be honest, IT governance is still a work in progress, and it predates the web on our campuses by 30 or so years. Maybe it is no wonder we are still looking for answers.

Cody’s comment stuck with me too: “I don’t think websites are going anywhere.” I agree (even though I poked at him). Websites aren’t disappearing tomorrow. But the way people expect to interact with information is shifting fast. AI agents, chat interfaces, and voice assistants aren’t replacing the web; they’re reframing it. They’re forcing us to ask: what is the role of a website when an agent can synthesize answers in seconds? Maybe the answer is harmony, as Cody suggested: agents and websites complementing each other, each doing what they do best.

Valerie and Kristin added another layer: this isn’t just about technology; it’s about stewardship. Kristin’s metaphor hit home: “We don’t build a world-class art museum and ask everyone to drop off the paintings they like most.” Yet that’s how we’ve treated our institutional web for decades: every department spinning up a site, every reorg leaving behind digital fossils. AI is exposing that fragility. And as Kristin said, maybe the CIO has to become the Chief Curator now. I mean, content is information, after all.

So here we are, nearly 20 years after I asked if everything should be miscellaneous. The answer? It depends. The web still needs flexibility, creativity, and openness. But it also needs anchors, places where truth lives, where information is accurate, current, and trusted. Not because AI demands it (though it does), but because our community deserves it.

AI didn’t create this problem; it is revealing it. And maybe that’s the push we need to finally treat our information like core infrastructure. Why would that change the equation? IT governance has given us the idea of investing wisely over the lifecycle of systems to ensure they are resilient, robust, and reliable as they are constantly consumed. Yes, the content floats on physical infrastructure, but shouldn’t we value it as much as the switches, cabling, and access points? And just as with managing the lifecycle of infrastructure, it should be governed by prioritizing the most critical, highest-risk, and greatest value-creating investments.

It all leaves me with so many questions I don’t have answers to. If we could only invest in 20–30 primary sites across the university, which ones would make the cut? How do we balance the creative chaos of the open web with the need for authoritative sources that AI (and humans) can trust? Are we ready to think of ourselves not just as technologists or communicators, but as curators of institutional knowledge?

I bet someone out there has a thought or two.

Exposing the Missing Pieces in Our Content

Part of our campus AI journey is to design and deploy AI agents that can use key information from existing websites across campus. These agents may replace the sites, reducing technical bloat and information drift. Along the way, an unexpected benefit has emerged, one that speaks volumes about the evolving relationship between technology and content strategy on a highly decentralized campus.

When we first set out to build these agents, we did what most teams do: we pointed them at our sites or their underlying data, ingested the knowledge, tested retrieval, and began crafting conversations. But something interesting happened when we put these agents to work. People started asking for things we couldn’t give them.

In short, the agents began surfacing questions we hadn’t anticipated; questions students, faculty, staff, and prospective Longhorns are likely asking every day. And, just as importantly, they showed us where our data and content fell short.

They have become mirrors, reflecting the structure, and the fragmentation, of our institutional knowledge. The things they cannot answer point directly to gaps in the content architecture: outdated FAQs, scattered documentation, siloed policy pages, and even buried gems of information lost in PDF archives or legacy web systems. It’s not that the information doesn’t exist. It’s that it’s too hard to find, inconsistently written, or lacks the context necessary to form a coherent response. The agent isn’t making mistakes per se; it tells us what it can’t say, and that’s been incredibly valuable.

One of the more revealing moments for me came when we began evaluating how the agent performed with the A–Z directory. This is a resource that has long served as the backbone for finding services and offices across the university. But once we put the agent to work with this data, the limitations of that system became painfully clear. What we had assumed was structured, complete, and reliable turned out to be limited, outdated, and in some cases, misleading.

UT Spark AI interface showing the A-Z agent.

This has been a bit of a wake-up call. It is so tempting to take a “lift and shift” approach, move what we have on the web into the AI agent and assume it will just work. But that does not hold up. The agent exposes what the web often hides. It forces precision. It requires context. And it absolutely demands trust in the data that fuels it.

We are now integrating these insights into a more systematic approach. Each time a query breaks down, we trace it back and ask: Should this information exist? If so, where should it live? Can we make it easier to find, easier to understand, and easier for the agent to serve up confidently?
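
To make the tracing concrete, here is a minimal sketch of what that feedback loop could look like in practice; the file name, agent name, and example query are all hypothetical, not our actual tooling:

```python
# Hypothetical sketch of the feedback loop described above: each time an
# agent can't answer, record the query with enough context to trace the
# gap back to its source content. Names here are illustrative only.
import csv
import datetime

GAP_LOG = "content_gaps.csv"  # assumed log location for this sketch

def log_unanswered(query: str, agent: str, reason: str) -> None:
    """Append an unanswered query so content owners can review it later."""
    with open(GAP_LOG, "a", newline="") as f:
        csv.writer(f).writerow(
            [datetime.date.today().isoformat(), agent, query, reason]
        )

# Example: a query the A-Z directory agent could not answer confidently.
log_unanswered(
    "Where do I submit a parking appeal?",
    agent="a-z-directory",
    reason="directory entry outdated",
)
```

Even a log this simple turns broken queries into a review queue: the reasons column clusters naturally into the gap categories above (outdated, missing, hard to find), which is where the content work starts.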

This work is not just about making our AI better. It’s about making our websites more accessible, our documentation more useful, and our services more responsive. Every gap we close improves the experience not just for the agent, but for the human trying to find their way. I didn’t expect this kind of feedback loop to emerge so quickly, but I’m glad it has. It reminds us to slow down, look closely, and be intentional, not just with how we build agents, but how we steward the information we share across this institution.