
Current Projects


Multi-Agent Games, Tasking, and Coordination


Trust-Based Distributed Multi-Agent Systems

The group investigates strategies for ensuring resilience in distributed multi-agent systems operating without centralized oversight. A key challenge in such environments is the presence of malicious agents whose actions can disrupt coordination and degrade system performance. To address this, the group has developed a negotiation-based mechanism that enables agents to assess the trustworthiness of their counterparts. Trust is systematically integrated into critical components of the negotiation process, including preference modeling, information exchange, and strategic decision-making. This approach incentivizes cooperation among honest agents while discouraging or limiting interactions with adversarial players. By embedding trust into multi-agent negotiation frameworks, the group’s work enhances robustness and reliability in decentralized systems.
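
The trust-weighted fusion idea can be sketched in a few lines (the update rule, rates, and tolerance below are illustrative assumptions, not the group's actual mechanism): agents raise or lower trust in a counterpart based on the consistency of its reports, then down-weight low-trust agents when fusing information.

```python
def update_trust(trust, reported, observed, rate=0.2, tol=0.5):
    """Raise trust when a counterpart's report matches what we observed."""
    consistent = abs(reported - observed) <= tol
    target = 1.0 if consistent else 0.0
    return (1 - rate) * trust + rate * target

def fuse_estimates(own, reports, trusts):
    """Trust-weighted average of own estimate and counterparts' reports."""
    num = own + sum(t * r for t, r in zip(trusts, reports))
    den = 1.0 + sum(trusts)
    return num / den

# agent 0 reports honestly (~10); agent 1 is malicious and reports 100
trusts = [0.5, 0.5]
for _ in range(20):
    trusts[0] = update_trust(trusts[0], reported=10.1, observed=10.0)
    trusts[1] = update_trust(trusts[1], reported=100.0, observed=10.0)

# the malicious report is almost entirely discounted in the fused estimate
fused = fuse_estimates(own=10.0, reports=[10.1, 100.0], trusts=trusts)
```

The key design choice is that trust enters the fusion step directly, so honest agents retain influence while adversarial inputs decay toward zero weight.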

Sensor Tasking for Cislunar Object Tracking

The group develops distributed tasking algorithms for cislunar object tracking. This work began in collaboration with Katalyst Space Technologies and Purdue University on the Interoperable Cislunar Observation Network (ICON) mission, which aims to provide a comprehensive Space Domain Awareness (SDA) solution. As traffic through the Lunar Gateway and the broader cislunar region is expected to increase over the next decade, ensuring a sustainable and safe presence for future missions is becoming critical. Given an observer constellation design, the algorithms determine the optimal sequence of measurements for each sensor by maximizing information gain. The overarching goal is to track as many objects as possible while maintaining accurate estimates of their positions and velocities. The approach addresses the central challenges of the cislunar domain, including limited communication between sensors, restricted observation opportunities due to sensor capabilities and lighting conditions, the vast spatial extent of the regime, and the inherently chaotic dynamics of cislunar orbits, ultimately enabling resilient and scalable tracking capabilities for the next generation of missions.
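
A greedy, information-gain-driven assignment can be sketched as follows (scalar Gaussian object states, visibility sets, and noise values are illustrative assumptions; the actual algorithms handle full orbital states and distributed communication):

```python
import math

def info_gain(sigma2, noise2):
    """Entropy drop from one scalar Gaussian measurement with variance noise2."""
    post = 1.0 / (1.0 / sigma2 + 1.0 / noise2)
    return 0.5 * math.log(sigma2 / post)

def greedy_tasking(variances, sensors, noise2=0.1):
    """Assign each sensor the visible object it can reduce uncertainty on most."""
    var = dict(variances)                  # object id -> current variance
    schedule = {}
    for sensor, visible in sensors.items():   # visibility limits the choices
        best = max(visible, key=lambda o: info_gain(var[o], noise2))
        schedule[sensor] = best
        # commit the posterior so later sensors avoid redundant observations
        var[best] = 1.0 / (1.0 / var[best] + 1.0 / noise2)
    return schedule, var

variances = {"objA": 5.0, "objB": 0.2, "objC": 3.0}
sensors = {"s1": ["objA", "objB"], "s2": ["objA", "objC"]}
schedule, var = greedy_tasking(variances, sensors)
```

Because each assignment updates the shared variance map, the second sensor prefers the still-uncertain objC over re-observing objA, capturing the coordination benefit the paragraph describes.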

Game-Theoretic Space Situational Awareness in Cislunar Orbits

The group develops a Bayesian game-theoretic framework for modeling strategic interactions between adversarial spacecraft in the cislunar environment, with a focus on operations near the Earth-Moon Lagrange points. The approach integrates spacecraft heterogeneity, probabilistic detection, maneuver-based evasion, and communication-enabled estimation into a two-player game of incomplete information. By simulating how red and blue teams deploy and maneuver spacecraft to detect, characterize, and avoid detection by opponents, the framework enables the design of resilient observation strategies and informs mission architecture for future space domain awareness operations in highly dynamic, contested regimes beyond geostationary Earth orbit (GEO).
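
The incomplete-information structure can be illustrated with a toy example (the types, actions, and payoff numbers below are invented for illustration, not the framework's actual model): blue picks the observation strategy that maximizes expected payoff under its prior belief over red's type.

```python
# Blue's belief over red's type, known only through a prior
PRIOR = {"agile": 0.3, "passive": 0.7}

# PAYOFF[blue_action][red_type] = blue's detection payoff (made-up numbers)
PAYOFF = {
    "stare_L1":  {"agile": 0.2, "passive": 0.9},
    "scan_halo": {"agile": 0.6, "passive": 0.5},
}

def expected_payoff(action):
    """Expectation of blue's payoff over red's possible types."""
    return sum(PRIOR[t] * PAYOFF[action][t] for t in PRIOR)

# blue's best response given its belief about red
best = max(PAYOFF, key=expected_payoff)
```

In a full Bayesian game, red would likewise optimize against its belief about blue, and equilibrium strategies would be computed jointly; this fragment shows only the expected-utility evaluation that underlies that computation.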


Learning and Adaptive Systems


Adaptive Control and System Identification

Within the fields of Adaptive Control and System Identification, the group has developed a family of Multi-Thread Attracting Manifold (MTAM) adaptive control methods for spacecraft, robotics, and aeronautical applications. These techniques adapt multiple estimates of a set of unknown parameters simultaneously. By sampling the space of unknown parameters and dynamically weighting among the resulting estimates, a second form of adaptation provides fast and safe convergence whose performance does not rely on arbitrarily high adaptation gains.

The group also has a history of applying adaptive control solutions across all domains of aerospace engineering. In collaboration with Intuitive Machines, based in Houston, the group has developed several adaptive control solutions for their spacecraft. For the Nova-C lander, the group developed an adaptive attitude controller that adapts to uncertainty in the center of gravity while the spacecraft undergoes powered flight on its way to the lunar surface. Once on the lunar surface, Nova-C deploys an experimental lunar exploration vehicle, the Micro Nova Hopper, which performs a series of short powered flights to reach areas that conventional lunar terrain vehicles cannot. The group developed adaptive controllers for the Micro Nova Hopper to follow its trajectory into and out of a lunar crater.
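
The multi-estimate weighting idea can be sketched on a toy scalar system (softmin weighting by prediction error is an assumed stand-in here, not the actual MTAM update law): several candidate parameter estimates are scored by how well they predict the plant, and the combined estimate favors the best-performing threads.

```python
import math

def weights(errors, temperature=0.5):
    """Softmin weighting: smaller prediction error -> larger weight."""
    scores = [math.exp(-e / temperature) for e in errors]
    total = sum(scores)
    return [s / total for s in scores]

theta_true = 2.0                         # unknown parameter (for simulation only)
threads = [0.5, 1.5, 2.1, 3.0]           # sampled candidate estimates ("threads")
x = 1.0                                  # current state (illustrative)

# each thread's prediction error against the measured plant response
errors = [abs((th - theta_true) * x) for th in threads]
w = weights(errors)
theta_hat = sum(wi * th for wi, th in zip(w, threads))
```

Because the weighting concentrates on the thread closest to the true parameter, the combined estimate converges without any single high-gain adaptation law, which is the property the paragraph highlights.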


Cislunar Astrodynamics


Advancing Space Domain Awareness in the Cislunar Region

Within the field of Cislunar Astrodynamics, the group is advancing spacecraft guidance, navigation, and control (GNC) methods tailored for operations in the Earth–Moon system. This region, which extends beyond geostationary orbit and includes the Earth–Moon Lagrange points, presents unique challenges due to complex gravitational dynamics, sparse orbit determination updates, and reduced measurement accuracy at vast distances. To address these difficulties, the group is developing navigation strategies that integrate onboard learning and adaptive estimation to refine control policies and trajectory design. A particular emphasis is placed on leveraging the natural asymmetries and nonlinearities of the three-body problem to design observer-based state-tracking methods, enabling more accurate and reliable spacecraft navigation in cislunar space even under limited or infrequent measurements. As part of this effort, the group is applying these techniques to mission concepts that demand persistent operations in cislunar space, including spacecraft positioned near Earth–Moon Lagrange points for communications relays and space domain awareness. By integrating advances in nonlinear dynamics with practical navigation algorithms, the group aims to enable spacecraft to maintain stable and predictable trajectories in regimes where traditional orbit determination methods prove insufficient.

Symplectic Integrators for Cislunar Regimes

Within the field of Cislunar Astrodynamics, the group focuses on the development of structure-preserving numerical methods for modeling spacecraft dynamics in complex multi-body environments. In particular, the group has designed and implemented explicit symplectic integrators tailored to the Circular and Elliptic Restricted Three-Body Problems (CR3BP/ER3BP), enabling fast and accurate long-term propagation of trajectories while preserving key invariants of the system. These integrators have been benchmarked against conventional non-symplectic methods, consistently demonstrating superior energy preservation and computational efficiency, particularly over extended time spans. The group leverages these tools to support the design and analysis of periodic orbits and transfer trajectories in the Earth-Moon system, with applications ranging from mission planning to trajectory optimization. Ongoing work in this area includes the development of explicit fixed-step symplectic integrators, time-regularized symplectic schemes, and symplectic state transition matrix computations to further enhance spacecraft navigation and control capabilities in cislunar space. 
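
The benefit of symplecticity is easy to demonstrate on a toy Hamiltonian (a harmonic oscillator standing in for the CR3BP/ER3BP): leapfrog (Stormer-Verlet), the simplest explicit symplectic scheme, keeps the energy bounded over long spans, while explicit Euler drifts without bound.

```python
def energy(q, p):
    return 0.5 * (p * p + q * q)        # H = p^2/2 + q^2/2

def leapfrog(q, p, dt, steps):
    """Explicit symplectic (Stormer-Verlet) integration: kick-drift-kick."""
    for _ in range(steps):
        p -= 0.5 * dt * q               # half kick (force = -q)
        q += dt * p                     # drift
        p -= 0.5 * dt * q               # half kick
    return q, p

def euler(q, p, dt, steps):
    """Explicit Euler, a conventional non-symplectic baseline."""
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

E0 = energy(1.0, 0.0)
q1, p1 = leapfrog(1.0, 0.0, dt=0.05, steps=20000)
q2, p2 = euler(1.0, 0.0, dt=0.05, steps=20000)
drift_sym = abs(energy(q1, p1) - E0)    # stays small: bounded oscillation
drift_eul = abs(energy(q2, p2) - E0)    # grows without bound
```

The same bounded-invariant behavior, applied to the Jacobi constant in the CR3BP, is what makes symplectic schemes attractive for long-term cislunar propagation.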

Numerical Methods in Orbital Mechanics

The group explores how numerical methods can be leveraged to help solve challenging problems in orbital mechanics and spacecraft control. The group works on transferring well-known near rectilinear halo orbits (NRHOs) from simplified models of the Earth-Moon system to more realistic N-body ephemerides by solving boundary value problems.
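
The single-shooting idea behind such boundary value problems can be sketched on toy linear dynamics (x'' = -x standing in for the N-body ephemeris model): guess the initial velocity, propagate, and correct with a secant iteration until the endpoint constraint is met.

```python
def endpoint(v0, steps=1000, T=1.0):
    """Integrate x'' = -x from x(0)=0 with x'(0)=v0; return x(T)."""
    dt = T / steps
    x, v = 0.0, v0
    for _ in range(steps):
        v -= 0.5 * dt * x               # leapfrog step (kick-drift-kick)
        x += dt * v
        v -= 0.5 * dt * x
    return x

def shoot(target, v_lo=0.0, v_hi=2.0, iters=30):
    """Secant iteration on the initial velocity to hit x(T) = target."""
    f_lo, f_hi = endpoint(v_lo) - target, endpoint(v_hi) - target
    for _ in range(iters):
        v_new = v_hi - f_hi * (v_hi - v_lo) / (f_hi - f_lo)
        v_lo, f_lo = v_hi, f_hi
        v_hi, f_hi = v_new, endpoint(v_new) - target
        if abs(f_hi) < 1e-10:
            break
    return v_hi

v0 = shoot(target=0.5)   # analytically, v0 = 0.5 / sin(1)
```

Transferring an NRHO to a full ephemeris works the same way in spirit: the simplified-model orbit seeds the guess, and a differential corrector drives the boundary mismatch in the higher-fidelity model to zero.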


Spacecraft Autonomy and Proximity Operations


Resilient Target Reacquisition

The group is building a target tracker that can “keep knowing where to look” even when cameras blink or a target behaves unpredictably. Instead of memorizing past paths, the system learns the forces and attitude changes that drive a target’s motion. A learning module updates its best guess of those hidden pushes and attitude shifts on the fly, while a Kalman Filter backbone carefully keeps track of what is known, what is uncertain, and how fast uncertainty grows during brief outages. When the sensor comes back, the tracker quickly tightens its estimate. In practice, this means the tracker can ride out short losses of sight, adapt to changing behavior without offline retraining, and give a conservative time window for when the target can be found again, purely from how the uncertainty evolves. The initial focus is on spacecraft rendezvous, agile drones, and vision-only robots operating in clutter, where learning the dynamics (the underlying causes of motion and attitude change) is essential to staying locked on to the target.
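
The outage bookkeeping can be sketched with a scalar filter (the noise values and field-of-view bound are illustrative assumptions): prediction-only steps inflate the variance, and the reacquisition window is how long the 3-sigma bound stays inside the search region.

```python
import math

Q = 0.04          # process noise per step (unknown accelerations)
R = 0.25          # measurement noise on reacquisition
FOV_HALF = 3.0    # half-width of the sensor's search region

def predict(P):
    """No measurement: uncertainty grows by the process noise."""
    return P + Q

def update(P):
    """Measurement arrives: standard scalar Kalman update shrinks P."""
    K = P / (P + R)                     # Kalman gain
    return (1.0 - K) * P

P = 0.1
steps_in_view = 0
# conservative window: count predict-only steps until 3-sigma exceeds the FOV
while 3.0 * math.sqrt(P) <= FOV_HALF:
    P = predict(P)
    steps_in_view += 1

P_reacquired = update(P)                # one measurement tightens the estimate
```

The count of predict-only steps is exactly the "conservative time window" the paragraph describes: it depends only on how the covariance evolves, not on the target cooperating.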

Scalable Algorithms for Uncertainty-Aware Trajectory Planning and Information-Theoretic Multi-Agent Sensor Tasking

The growing accessibility of the space economy has prompted increasing interest in autonomous guidance, navigation, and control (GNC) technologies that mitigate risks from unknown or unmodeled dynamics and from uncooperative spacecraft and space objects, notably within proximity operations. The group addresses both sources of risk via navigation and uncertainty-aware maneuver planning for spacecraft and spacecraft swarms, where higher moments of environmental uncertainties are incorporated into structured optimal guidance and control problems. Unlike heuristic approaches to characterizing uncertainty and information, the group’s research builds on characterizations firmly rooted in the literature of information theory and statistical estimation. The overarching goal is to provide statistically safe and efficient guidance algorithms and control laws that enhance, and are enhanced by, the group’s parallel research efforts in spacecraft navigation.
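
One standard uncertainty-aware ingredient is chance-constraint tightening, sketched here with illustrative numbers (this is the textbook Gaussian tightening, not necessarily the group's exact formulation): a probabilistic keep-out constraint on a Gaussian position state becomes a deterministic margin on the mean via the Gaussian quantile.

```python
from statistics import NormalDist

def tightened_bound(bound, sigma, eps=0.01):
    """Tighten P(position > bound) <= eps to mean <= bound - z_eps * sigma."""
    z = NormalDist().inv_cdf(1.0 - eps)   # ~2.326 for eps = 0.01
    return bound - z * sigma

bound = 10.0       # keep-out boundary along one axis (illustrative units)
sigma = 0.8        # predicted position standard deviation at this time step
safe_mean_limit = tightened_bound(bound, sigma)
```

A planner that keeps the mean trajectory below `safe_mean_limit` satisfies the original probabilistic constraint, which is how statistical safety enters an otherwise deterministic optimal control problem.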

Spacecraft Relative Navigation

Autonomous Guidance, Navigation, and Control (GN&C) is key to space operations including rendezvous and docking, where human feedback may be infeasible or impractical. For example, a chaser satellite attempting to capture an uncooperative, tumbling, and damaged spacecraft moving rapidly must estimate the target’s relative position onboard to execute its mission. To further develop autonomous navigation capabilities, the group has focused on the known, uncooperative spacecraft pose estimation problem (Black 2019; Kaki 2023), and demonstrated elements of this work in collaboration with the Texas Spacecraft Laboratory and NASA JSC on the Seeker mission. Proximity operations in space require precise knowledge of the navigation state; in short, a vehicle must know where it is to figure out how to get to where it wants to be. In this problem, the chaser spacecraft has prior 3D knowledge of the target satellite, but the target does not actively communicate its navigation state (i.e., it is uncooperative). The group solves this problem by combining computer vision techniques, such as convolutional neural networks, with estimation frameworks like the Kalman Filter. Object detection and keypoint regression models extract 2D-3D correspondences between a 2D image and a 3D model of the target vehicle. The 6D pose, comprising translational position and attitude, is then calculated by solving an optimization problem such as Perspective-n-Point (PnP). The computer vision algorithms provide inputs to a measurement model for the Kalman Filter, which estimates poses over time. Recent work includes quantifying the uncertainty of machine learning predictions to characterize measurement uncertainty for the Kalman Filter, as well as a hardware-in-the-loop demonstration of the pose estimation pipeline on flight-like hardware in flight-like conditions. The group is also exploring new model architectures that perform well in flight-like conditions with minimal tuning, infer faster, and remain robust to deformations of the target vehicle.
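
The role of the vision model's predicted uncertainty in the filter can be sketched with a scalar Kalman update (toy values on a single attitude angle; the actual pipeline estimates the full 6D pose): a confident detection moves the estimate strongly, while an uncertain one, as under self-occlusion, barely does.

```python
def kf_update(x, P, z, R):
    """Standard scalar Kalman measurement update."""
    K = P / (P + R)                     # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1.0                         # prior attitude angle and variance

# confident CNN detection: small predicted measurement variance
x1, P1 = kf_update(x, P, z=0.5, R=0.01)

# uncertain detection (e.g., self-occlusion): large predicted variance
x2, P2 = kf_update(x, P, z=0.5, R=4.0)
```

Feeding the model's own uncertainty estimate in as R is what connects the "quantifying the uncertainty of machine learning predictions" work to the filter: it lets the filter automatically discount unreliable vision outputs.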

Event Cameras for Spacecraft Proximity Operations

In many spacecraft proximity operation missions, passive camera sensing is used as a navigation tool to detect and localize objects of interest. Traditional RGB cameras can be used independently but suffer from degradation in certain environmental conditions. In such cases, other sensing modalities can be deployed to reestablish a quality frame that is necessary for downstream tasks. One such alternative modality is event-based vision. Event sensors measure relative pixel intensity change between frames at a microsecond frequency and are particularly useful in scenes that have a high dynamic range. RGB and event cameras together can serve as robust sensing mechanisms for object detection schemes and subsequent tasks. The use cases that the group is exploring for this technology are motion blur from fast-moving objects in space, overexposed regions that occur from direct sunlight, physical occlusions that block the target of interest, and underexposed regions where the target of interest is barely visible.
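
The standard event-generation model from the literature can be sketched per pixel (the contrast threshold is an illustrative value): an event fires whenever the log intensity moves by more than the threshold since the last event, which is why event sensors handle high-dynamic-range scenes so well.

```python
import math

def events_for_pixel(intensities, threshold=0.2):
    """Return (index, polarity) events for one pixel's intensity trace."""
    events = []
    ref = math.log(intensities[0])      # log intensity at the last event
    for i, I in enumerate(intensities[1:], start=1):
        logI = math.log(I)
        # a large change may trigger several events at the same timestamp
        while abs(logI - ref) >= threshold:
            polarity = 1 if logI > ref else -1
            events.append((i, polarity))
            ref += polarity * threshold
        # no event if the change stays under the contrast threshold
    return events

# a pixel that brightens then dims: positive events, then negative ones
trace = [1.0, 1.0, 1.5, 3.0, 3.0, 1.0]
evts = events_for_pixel(trace)
```

Because the trigger is relative (logarithmic) rather than absolute, the same threshold responds to a dim target in shadow and a bright one in direct sunlight, matching the high-dynamic-range use cases listed above.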

Computer Vision Methods for Navigation

The group develops robust computer-vision methods for proximity operations in space by designing machine-learning models that recover the full 6-DoF relative pose of non-cooperative spacecraft in Low-Earth Orbit and remain reliable under self-occlusion and extreme illumination. Ongoing work focuses on improving the performance of models trained in simulation, so they transfer effectively to real-world deployments. 


© The University of Texas at Austin 2026