
Do Our Agents Need Identities?

OpenAI released a 13-page policy document today called Industrial Policy for the Intelligence Age: Ideas to Keep People First. It’s ambitious. It proposes robot taxes, a national public wealth fund seeded partly by AI companies, automatic safety net triggers tied to displacement metrics, containment playbooks for rogue AI systems, and pilots of a 32-hour workweek framed as an “efficiency dividend.” Sam Altman told Axios that the scale of what’s coming is comparable to the Progressive Era and the New Deal.

I read the whole thing. And I want to engage with it seriously, because the ideas matter. But I also want to say something that I think is missing from the conversation, something that becomes visible only if you’re operating at the institutional layer where these impacts actually land.

Universities sit at the intersection of almost everything this document talks about. We are workforce development engines, research enterprises, employers of tens of thousands, and the training ground for the next generation of workers whose careers will be shaped by whatever policy regime emerges. We hold sensitive data, manage federal compliance obligations, and operate complex enterprise systems that keep all of it running. If transformative AI is coming, and I believe it is even if the timeline is debatable, the university is where the policy meets the pavement.

The document proposes distributing AI-enabled research infrastructure broadly across universities, community colleges, hospitals, and regional hubs. Good. It talks about portable benefits that follow individuals across jobs and industries. Good. It calls for modernizing the tax base away from payroll and toward capital gains as automated labor displaces human labor. That’s a real conversation worth having. And it proposes that workers should have a formal voice in how AI is deployed in their workplaces, something I believe in deeply.

But here’s where I want to push further, because I think there’s a conversation that we aren’t having yet.

We are already in the early days of deploying AI agents that perform real institutional work. Not chatbots answering FAQs. Agents that process transactions, triage requests, route approvals, generate reports, monitor systems, and make decisions within defined parameters. The trajectory is clear: these agents are going to take on more responsibility, operate with more autonomy, and become embedded in workflows that currently depend on human staff. So here’s my question: if an agent is doing the work of a full-time employee, shouldn’t it be governed like one?

I don’t mean this as a thought experiment. I mean it operationally. At UT Austin, every employee has an Enterprise ID, an EID. That EID is the key to everything: system access, role-based permissions, org chart placement, budget allocation, position control, performance accountability. Our Workday HCM instance manages the lifecycle of every employee from hire to retire. Now imagine an AI agent that manages reimbursement exception processing, or monitors infrastructure and initiates remediation workflows, or handles first-pass review of procurement requests against policy. That agent consumes resources. It has a cost. It operates within a reporting structure. It needs access controls. It needs to be auditable. And someone, a human, needs to be accountable for what it does.

As of today, none of us have an institutional framework for this. Agents float in a governance gap. They aren’t in Workday. They don’t have position numbers. They aren’t reflected in our staffing models or our budget structures. They aren’t covered by the HR lifecycle processes that ensure every human worker has clear accountability, supervision, and a paper trail. And yet they are increasingly doing work that, if a human were doing it, would absolutely require all of those things.

This isn’t a technology problem; it is a human capital problem. OpenAI’s document talks about shifting the tax base from payroll to capital gains as automated labor grows. That’s a macro policy question. But at the institutional level, the equivalent question is: how do we account for an agent’s labor in our workforce planning? If an agent handles the equivalent of two FTEs worth of procurement review, does that show up in our staffing model? How does it affect position requests? Budget justifications? If we’re reporting headcount to the Board of Regents or to federal agencies, do we need a parallel accounting for agent capacity? And what about accountability? When a human employee makes an error in a compliance-sensitive process, there’s a clear chain. When an agent makes that same error, who owns it? The developer who built it? The product owner who scoped it? The CIO whose organization deployed it? We need to start answering these questions now.
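To make the workforce-planning question concrete, here is a minimal sketch of how agent capacity could be expressed in FTE-equivalents. The function, the task counts, and the human baseline are all illustrative assumptions, not actual UT Austin figures or systems.

```python
# Hypothetical sketch: expressing an AI agent's throughput as FTE-equivalents
# for staffing models. All numbers below are made up for illustration.

def agent_fte_equivalent(agent_tasks_per_week: float,
                         human_tasks_per_week_per_fte: float) -> float:
    """Convert an agent's weekly throughput into full-time-equivalent units,
    using a human baseline for the same task type."""
    if human_tasks_per_week_per_fte <= 0:
        raise ValueError("human baseline must be positive")
    return agent_tasks_per_week / human_tasks_per_week_per_fte

# An agent clearing 300 procurement reviews a week, where one human
# reviewer averages 150, represents 2.0 FTE-equivalents of that work.
fte = agent_fte_equivalent(agent_tasks_per_week=300,
                           human_tasks_per_week_per_fte=150)
print(fte)  # 2.0
```

Even a crude conversion like this would let agent capacity appear alongside human headcount in position requests and budget justifications, which is the point: the accounting has to exist before the policy questions can be answered.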

Do we need something like a UT Agent Registry, a formal institutional record for every AI agent that performs work on behalf of the university? A governed registry that captures what the agent does, what systems it accesses, what authority it has, who supervises it, and how its performance is measured and audited. The equivalent of an EID. A position description. A reporting line and a professional development budget. This might sound like bureaucracy. It’s not. It’s the same governance discipline we apply to every other resource that operates on behalf of the institution. We don’t let humans access sensitive systems without identity management, role-based access, and a clear accountability chain. We shouldn’t let agents do it either.
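As a thought experiment, one possible shape for a registry record, mirroring what an EID and a position description capture for a human worker, might look like the following. Every field name and value here is a hypothetical assumption, not a description of any actual UT system.

```python
# Illustrative sketch only: one possible record shape for an "agent registry",
# the agent-side analogue of an EID plus a position description.
# All field names and values are assumptions for the sake of the example.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AgentRegistryEntry:
    agent_id: str                 # the agent's analogue of an EID
    description: str              # what work it performs (its "position description")
    systems_accessed: List[str]   # enterprise systems it is permitted to touch
    authority_scope: str          # decisions it may make without human sign-off
    supervisor_eid: str           # the human accountable for the agent's actions
    audit_log_location: str       # where its actions are recorded for review
    performance_metrics: List[str] = field(default_factory=list)

entry = AgentRegistryEntry(
    agent_id="AGT-0001",
    description="First-pass review of procurement requests against policy",
    systems_accessed=["workday", "procurement-portal"],
    authority_scope="flag-and-route only; no final approvals",
    supervisor_eid="jdoe",
    audit_log_location="s3://example-bucket/agent-logs/AGT-0001",
    performance_metrics=["reviews_per_week", "error_rate"],
)
print(entry.supervisor_eid)  # jdoe
```

Notice that `supervisor_eid` is required: the record does not exist without a named human in the accountability chain, which is exactly the discipline we apply to every other institutional resource.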

At the UT System level, I am responsible for reporting what AI tools we have in our environment and what types of work they do, but nothing in that reporting reaches the level of governance I am working through here.

OpenAI’s policy document is forward-looking in many ways, but it still frames the AI transition primarily as something that happens to workers and to economies, with governments and companies managing the fallout. What it doesn’t reckon with is that institutions like universities are going to be running hybrid workforces, humans and agents, long before the national policy framework catches up. We will be making these decisions in our ERP systems, in our identity platforms, in our governance structures, whether or not Washington has figured out the robot tax question.

At UT Austin, we’ve been working toward this, but the writing on the wall is becoming more and more clear with each experiment and through each observable outcome. UT.AI is our common AI environment where data protection, privacy, accessibility, security, and institutional identity are the foundation. The self-healing campus concept I wrote about last month is built on the premise that agentic AI will enable domain experts to build personal, ephemeral interfaces to institutional data. But that vision only works if the foundation is right, and part of getting the foundation right is being honest about the fact that agents are becoming part of our workforce and we need to govern them accordingly.

I don’t have all the answers. But I think the conversation needs to start with a simple recognition: the line between a tool and a worker is blurring, and our institutional frameworks haven’t caught up. OpenAI may be right that we need a new industrial policy. But we also need a new human capital policy, one that accounts for the non-human actors that are increasingly doing institutional work alongside our people.

Practicing Like We Play

On game day, Darrell K Royal Stadium becomes the heart of campus. More than one hundred thousand fans show up to cheer for the Longhorns, filling the stands in burnt orange. I look forward to Saturdays in the fall for so many reasons, but the best is that I get to feel like I am truly part of something far bigger and more meaningful as I take in the scenes from the stands. For the fans, we all want the day to feel seamless. Tickets scan, Wi-Fi connects, replays play, and the stadium feels secure. That simplicity is not an accident; it’s the result of months of preparation and a game day of real-time effort from teams in Enterprise Technology and our partners all across campus.

Just as the football team prepares with practices, film study, and repetition, we practice the way we intend to perform. Perfection is always the goal. The Networking team tunes wireless coverage across the stadium and ensures every vendor, ticketing station, and media outlet can connect. The Cable and Construction team checks and runs the fiber and cabling that carry instant replay, coach-to-sideline communication, and live broadcasts across the nation. The Warehouse team stages and delivers every piece of gear, from radios and cables to generators and water, so that when the call comes, it’s ready. The Electronic Physical Security Systems team sets schedules, monitors cameras, and ensures that safety is woven into the game day experience from the start.

Evening at DKR, Austin, TX

When kickoff arrives, those teams are moving together as one. Networking watches over every access point and switch. EPSS monitors security and supports the Emergency Operations Center. Cable and Construction crews are ready for rapid response. The Warehouse team keeps supplies moving to where they’re needed most. It’s live, it’s fast, and just like the players on the field, execution must be perfect.

Game day is proof of a larger truth at UT Austin: technology touches nearly every aspect of campus life. From classrooms to research labs, from student housing to DKR Stadium, our work shapes the experience of our community. Like all the teams across this campus, we accept that responsibility with seriousness and pride. Our role is to prepare, to execute, and to remain in the background so that students, faculty, staff, alumni, and fans can focus on what matters most. That is the measure of success. When UT takes the field, and the stadium hums with energy, the technology simply works. To me, this is what it means to lead at a world-class university. Quietly, reliably, and with the same pursuit of perfection we expect from the Longhorns on the field. That is the standard we set for ourselves and that is just some of the work that makes me proud to serve this community. Hook ’em!

Exposing the Missing Pieces in Our Content

Part of our campus AI journey is to design and deploy AI agents that can utilize key information from existing websites across campus. These agents may replace the sites, reducing technical bloat and information drift. While doing so, an unexpected benefit has emerged, one that speaks volumes about the evolving relationship between technology and content strategy on a highly decentralized campus.

When we first set out to build these agents, we did what most teams do: we pointed them at our sites or their underlying data, ingested the knowledge, tested retrieval, and began crafting conversations. But something interesting happened when we put these agents to work. People started asking for things we couldn’t give them.

In short, the agents began surfacing questions we hadn’t anticipated: questions students, faculty, staff, and prospective Longhorns are likely asking every day. And, just as importantly, they showed us where our data and content fell short.

They have become mirrors, reflecting the structure, and the fragmentation, of our institutional knowledge. The things they cannot answer point directly to gaps in the content architecture: outdated FAQs, scattered documentation, siloed policy pages, and even buried gems of information lost in PDF archives or legacy web systems. It’s not that the information doesn’t exist. It’s that it’s too hard to find, inconsistently written, or lacks the context necessary to form a coherent response. The agent isn’t making mistakes per se; it tells us what it can’t say, and that’s been incredibly valuable.

One of the more revealing moments for me came when we began evaluating how the agent performed with the A–Z directory. This is a resource that has long served as the backbone for finding services and offices across the university. But once we put the agent to work with this data, the limitations of that system became painfully clear. What we had assumed was structured, complete, and reliable turned out to be limited, outdated, and in some cases, misleading.

UT Spark AI interface showing the A-Z agent.

This has been a bit of a wake-up call. It is so tempting to take a “lift and shift” approach, move what we have on the web into the AI agent and assume it will just work. But that does not hold up. The agent exposes what the web often hides. It forces precision. It requires context. And it absolutely demands trust in the data that fuels it.

We are now integrating these insights into a more systematic approach. Each time a query breaks down, we want to trace it back. The questions we need to ask are: Should this information exist? If so, where should it live? Can we make it easier to find, easier to understand, and easier for the agent to serve up confidently?
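The feedback loop described above can be sketched very simply: log each query the agent could not answer along with a topic tag, then tally the failures so content owners can see where the gaps cluster. This is a minimal illustration; the example queries and topic labels are invented, not drawn from our actual logs.

```python
# A minimal sketch of the gap-triage loop: collect (query, topic) pairs the
# agent could not answer, then rank topics by failure count so content owners
# know where to look first. Queries and topics below are made up.

from collections import Counter

def triage_failed_queries(failures: list[tuple[str, str]]) -> list[tuple[str, int]]:
    """Given (query, topic) pairs the agent could not answer,
    return topics ranked by how often they failed."""
    counts = Counter(topic for _, topic in failures)
    return counts.most_common()

failures = [
    ("How do I appeal a parking citation?", "parking"),
    ("Where is the A-Z entry for the writing center?", "a-z-directory"),
    ("Which office handles travel card disputes?", "a-z-directory"),
]
print(triage_failed_queries(failures))  # [('a-z-directory', 2), ('parking', 1)]
```

Even this crude tally turns the agent’s silence into a prioritized work queue for content stewards, which is the real value of the feedback loop.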

This work is not just about making our AI better. It’s about making our websites more accessible, our documentation more useful, and our services more responsive. Every gap we close improves the experience not just for the agent, but for the human trying to find their way. I didn’t expect this kind of feedback loop to emerge so quickly, but I’m glad it has. It reminds us to slow down, look closely, and be intentional, not just with how we build agents, but how we steward the information we share across this institution.

Strengthening Our Data Strategy: D2I Transitioning to Enterprise Technology

At The University of Texas at Austin, we understand that data is essential for making informed decisions and driving innovation. I’m thrilled to announce that Data to Insights (D2I) will officially start reporting to the Vice President of Technology and Chief Information Officer on May 1, 2025.

This transition will enhance collaboration, align practices, and strengthen our commitment to providing top-notch data solutions for our faculty, staff, and students. By integrating D2I under the VP’s portfolio alongside Enterprise Technology, we’re aiming for a more unified and scalable approach to data governance, analytics, and tech services across the university.

Brian Roberts, Vice Provost for Data to Insights, will take on a special advisor role to the Office of the CIO during the transition as we continue planning for D2I’s long-term future. Kathryn Flowers will join the CIO senior leadership team and will lead the D2I team as Executive Director, ensuring smooth leadership and execution of D2I’s mission.

Enterprise Technology April 2025 Org chart.

Rest assured, this transition won’t cause any immediate changes to D2I’s ongoing projects or services. I am confident that our teams will work closely to make this change seamless and enhance our ability to deliver value to the university community.

Enterprise Technology and D2I will be partnering with the CFO’s Office, the Provost’s Office, and the COO’s Office to assess and realign the university’s data analytics goals in support of institutional priorities, with an emphasis on scaling adoption of and best practices in interaction with the Data Hub. Over the next six months, these teams will collaborate to develop a comprehensive data analytics strategy, reliant on the centralized Data Hub, to be presented to university leadership, with the aim of implementing it in FY26–27. This effort will include extensive stakeholder engagement, including interviews and cross-functional collaboration across multiple groups, to ensure the strategy is informed, aligned, and positioned to drive meaningful impact across the university.

Thank you for your continued support as we take this important step in aligning our technology and data strategy with the university’s broader goals. If you have any questions or feedback, please feel free to reach out.

That Escalated Quickly

As I reach the end of my first year here at the University of Texas, I am filled with gratitude and pride for the incredible progress we’ve made together. When I first stepped into this role, I was immediately struck by the talent, dedication, and innovative spirit that defines our university community. It has been a privilege to work alongside so many of you who are committed to supporting UT’s mission of excellence in education, research, and service.

Over the course of the past year, I have had the pleasure of meeting people from all over the Forty Acres. I came here with that as a goal: to better understand the people I am in service to and the context I am doing that in. We collectively have so much going on and deal with new things each day. This year has seen leadership changes, strategic alignment, great conversations, progress on key initiatives, and a whole lot of really hard work. It has truly escalated quickly.

I arrived with what appeared to be a simple set of priorities. I never expected it to be easy; UT is way too big and complicated for anything to be done quickly and easily. However, the things that have been hard are also the things that seem to matter the most — focusing on our finances, focusing on our organizational alignment, focusing on our services, and focusing on our customers — all required some degree of change and adjustment on the part of many people. Did I accomplish everything I hoped I would in the first year? No.

Should I be disappointed in that? I am going to also answer that with a no. I am not one to set goals and fail, but the work we have done in the foundational areas of our organization is the work that ultimately matters most going forward. In my organizational roadmap, I wanted to establish a new leadership team, ensure the financial health of the organization, launch a new digital presence, review our service portfolio, and finalize a new organizational structure that brings the three new units together into a single portfolio under the office of the CIO. We did amazing work in getting nearly all of that done in the last year. Wow.

I also wanted to raise awareness of our overall IT modernization needs campus-wide. I expressed pathways to do that over five distinct areas — student experience, infrastructure and systems, enterprise platforms, customer experience, and teaching and learning. We’ve invested new effort in each of these areas, with many of them paying dividends. We created the Student Experience Council that brings together business leaders across the student experience to identify what are the best-in-class digital approaches we should be using to delight students. We’ve continued to enhance our network and modernize portions of it in a systematic and impactful way. We are assisting Dell Medical in the implementation of Workday Finance and Epic, with plans forming to invest in new ERP capabilities for campus as well. We have launched an AI program that will directly impact customer experience, efficiency, teaching, learning, and research. We even hosted the SEC CIO group to help get a sense of where we stand relative to peers. All while making sure the core services campus relies on are resilient and robust.

The place where I need to focus more energy is the advancement of several common good services that support the whole community. We have done a lot of planning with various units and the ITLC. This coming year, we will leverage our investment in our own foundation to help make an even bigger impact across campus and in the larger IT community.

We have so much planned for the next few years and it is exciting as ever to be a Longhorn. I am thrilled to be here and to have made so many good friends along the way. Friends that I know are also committed to the long-term success of this great university. Thank you all for making my first year better than I hoped.