Spacecraft Navigation
The CAELUS Lab studies navigation problems for spacecraft in Earth, lunar, and asteroid orbits. This research focuses on how the information available in optical images can be leveraged to infer a satellite's location. We work with a combination of techniques informed by machine learning, object tracking, and uncertainty quantification.
Current Projects
SCOPE-1 Satellite Mission
Sponsor: NASA
The SpaceCraft for Optical Position Estimation-1 (SCOPE-1) is a NASA-funded 3U CubeSat mission to demonstrate the use and performance of terrain-relative navigation on a small satellite in low Earth orbit. This work is a collaboration with Prof. Renato Zanetti and the NEAR Group, and is an Earth-based analog to the Crater-based Navigation and Timing work below; it uses the same key technologies, but with a different image processing model. In the CAELUS Lab, we are developing machine learning-based methods for optical image processing and feature identification to support the downstream Positioning, Navigation, and Timing (PNT) algorithms developed by the NEAR Group.
Crater-based Navigation and Timing (CNT)
Sponsor: NASA
Our CNT project, a collaboration with the NEAR Group and the Johnson Space Center, looks at the practical issues of using images of the lunar surface, most notably craters, for spacecraft navigation. Within the CAELUS Lab, we have trained a convolutional neural network to detect craters in images, along with a follow-up algorithm that assigns each detection an identity from an existing catalog. Additionally, this work considers the computational and hardware constraints of operating on a small satellite. This work is ongoing and was selected as part of the 2020 Small Spacecraft Technology Partnership (SSTP) Program (announcement).
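To make the catalog-identification step concrete, the sketch below shows one simple way such an association could work: CNN-detected crater centers are matched to catalog craters, projected into the image with the current pose estimate, by a gated nearest-neighbor search. All function names, thresholds, and data here are hypothetical placeholders, not the lab's actual pipeline.

import numpy as np
from scipy.spatial import cKDTree

def identify_craters(detections_px, catalog_px, catalog_ids, gate_px=15.0):
    """Associate detected crater centers with catalog entries.

    detections_px : (N, 2) pixel coordinates of CNN-detected crater centers
    catalog_px    : (M, 2) catalog crater centers projected into the image
                    using the current pose estimate
    catalog_ids   : (M,) catalog identifiers
    gate_px       : maximum pixel distance for a valid match
    """
    tree = cKDTree(catalog_px)
    dist, idx = tree.query(detections_px)        # nearest catalog crater per detection
    matches = {}
    for d, (r, j) in enumerate(zip(dist, idx)):
        if r <= gate_px:                         # reject distant (ambiguous) matches
            matches[d] = catalog_ids[j]
    return matches

# Example with synthetic data: ten noisy re-detections of catalog craters
rng = np.random.default_rng(0)
catalog_px = rng.uniform(0, 1024, size=(50, 2))
catalog_ids = np.arange(50)
detections_px = catalog_px[:10] + rng.normal(0, 2.0, size=(10, 2))
print(identify_craters(detections_px, catalog_px, catalog_ids))

In practice the gating and matching would need to account for attitude and position uncertainty, but the nearest-neighbor structure illustrates how detections are tagged with catalog identities.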
High-Frequency Spacecraft State Estimation using Event-based Sensors
Sponsor: NASA through the NSTGRO Program
This work explores the use of event-based cameras for spacecraft navigation. Unlike traditional cameras, which capture full image frames at discrete times, event sensors have asynchronously operating pixels that each report a detection when a change in brightness is sensed. As a result, event cameras offer high temporal resolution and high dynamic range, which are advantageous in the low or harsh lighting conditions expected during on-orbit operations. Here we explore, through software simulations and hardware tests, the use of event cameras as a supplementary sensor for conditions in which traditional cameras struggle.
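The toy sketch below illustrates the event-generation model described above (not the project's code): a pixel emits an event when its log intensity changes by more than a contrast threshold, producing asynchronous per-pixel detections rather than full frames. The threshold value and timestamp assignment are simplifying assumptions.

import numpy as np

def frames_to_events(frame_prev, frame_next, t_prev, t_next, C=0.2):
    """Generate (x, y, t, polarity) events between two intensity frames."""
    eps = 1e-6
    dlog = np.log(frame_next + eps) - np.log(frame_prev + eps)
    ys, xs = np.nonzero(np.abs(dlog) >= C)        # pixels whose log-intensity change exceeds C
    t = 0.5 * (t_prev + t_next)                   # crude assumption: events occur midway in time
    polarity = np.sign(dlog[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return [(int(x), int(y), float(t), int(p)) for x, y, p in zip(xs, ys, polarity)]

# A bright spot moving one pixel to the right produces one OFF and one ON event.
prev = np.full((4, 4), 0.1)
prev[2, 1] = 1.0
nxt = np.full((4, 4), 0.1)
nxt[2, 2] = 1.0
print(frames_to_events(prev, nxt, t_prev=0.0, t_next=0.01))

Real event sensors report per-pixel timestamps with microsecond resolution; collapsing the time axis between two frames, as done here, is only for illustration.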
Autonomous Navigation via Optical Measurements as Silhouettes for Primitive Bodies
Sponsor: Jet Propulsion Laboratory, California Institute of Technology
In this work, JPL and UT-Austin are developing an autonomous navigation capability for missions to asteroids. What sets this work apart is that we are (1) using only silhouettes extracted from images, and (2) estimating a Gaussian process (GP)-based representation of the asteroid's shape. GPs are a common tool in the uncertainty quantification and machine learning communities, and they provide useful properties for estimation and UQ in the navigation problem. This work is in its third (and final) year, is funded through JPL's Strategic University Research Program (SURP), and is in collaboration with Prof. Ryan Russell.
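As a minimal sketch of the GP-based shape idea, consider a simplified 2-D setting in which the silhouette provides noisy radius samples r(theta) around the body, and a Gaussian process regression yields a smooth shape estimate with uncertainty. The kernel choice, synthetic data, and 2-D reduction are illustrative assumptions, not the actual method used in the project.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import ExpSineSquared, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic silhouette measurements: radius vs. body-frame angle
theta_obs = np.sort(rng.uniform(0, 2 * np.pi, 40))
r_true = 1.0 + 0.15 * np.cos(2 * theta_obs) + 0.05 * np.sin(3 * theta_obs)
r_obs = r_true + rng.normal(0, 0.01, theta_obs.size)

# Periodic kernel over the angle; the white-noise term absorbs measurement noise
kernel = 1.0 * ExpSineSquared(length_scale=1.0, periodicity=2 * np.pi) \
         + WhiteKernel(noise_level=1e-4)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(theta_obs.reshape(-1, 1), r_obs)

# Predicted radius and 1-sigma uncertainty at query angles
theta_query = np.linspace(0, 2 * np.pi, 8).reshape(-1, 1)
r_mean, r_std = gp.predict(theta_query, return_std=True)
for th, m, s in zip(theta_query.ravel(), r_mean, r_std):
    print(f"theta={th:5.2f} rad  r={m:5.3f} +/- {s:5.3f}")

The appeal of the GP representation is that the posterior standard deviation gives a direct measure of shape uncertainty, which can then be carried into the navigation filter.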