Here is a video covering the class content and selected project presentations:
New Work on Reactive Synthesis and Motion Planning for Humanoid Robots
Ye Zhao, Yinan Li, Luis Sentis, Ufuk Topcu, and Jun Liu, "Reactive Task and Motion Planning for Robust Whole-Body Dynamic Locomotion in Constrained Environments," The International Journal of Robotics Research, in press, 2022.
This study takes a first step toward formally and reactively planning task-level and whole-body dynamic loco-manipulation behaviors in constrained and dynamically changing environments. We formulate a two-player temporal logic game between the multi-limb locomotion planner and its dynamic environment to synthesize a winning strategy that delivers symbolic locomotion actions. A low-level controller then executes motion primitives that generate feasible locomotion trajectories.
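To give a flavor of what a two-player game of this kind looks like, below is a minimal sketch that solves a toy safety game over a hand-made symbolic locomotion abstraction. All state names, actions, and transition rules are hypothetical stand-ins, and the planner in the paper handles much richer temporal-logic objectives than plain safety.

```python
# Minimal two-player safety game over a toy symbolic locomotion abstraction.
# States, actions, and transitions are illustrative assumptions only.

SYS_ACTIONS = ["step_forward", "step_sideways", "hold"]   # planner moves
ENV_MOVES = ["none", "push", "crumble"]                   # environment moves
STATES = ["start", "narrow_ledge", "wide_ledge", "goal", "fallen"]

def transition(state, sys_action, env_move):
    """Toy transition relation: the environment resolves after the planner commits."""
    if state in ("goal", "fallen"):
        return state
    if env_move == "push" and sys_action == "step_forward":
        # A push during a forward step is fatal only on the narrow ledge.
        return "fallen" if state == "narrow_ledge" else state
    if env_move == "crumble" and sys_action == "hold" and state == "narrow_ledge":
        return "fallen"                    # standing still on a crumbling ledge
    if sys_action == "step_forward":
        return {"start": "narrow_ledge",
                "narrow_ledge": "wide_ledge",
                "wide_ledge": "goal"}[state]
    if sys_action == "step_sideways":
        return "wide_ledge" if state == "narrow_ledge" else state
    return state                           # "hold" keeps the current contact

def solve_safety_game(unsafe=("fallen",)):
    """Greatest fixed point: keep only states from which some planner action
    stays inside the winning set for every possible environment response."""
    winning = set(STATES) - set(unsafe)
    changed = True
    while changed:
        changed = False
        for s in list(winning):
            if not any(all(transition(s, a, e) in winning for e in ENV_MOVES)
                       for a in SYS_ACTIONS):
                winning.discard(s)
                changed = True
    # Extract one winning symbolic action per surviving state.
    strategy = {s: next(a for a in SYS_ACTIONS
                        if all(transition(s, a, e) in winning for e in ENV_MOVES))
                for s in winning}
    return winning, strategy

if __name__ == "__main__":
    winning, strategy = solve_safety_game()
    print("winning states:", sorted(winning))
    print("symbolic strategy:", strategy)   # e.g. sidestep on the narrow ledge
```

In this toy abstraction the synthesized strategy sidesteps the narrow ledge, where the environment could force a fall, which is the kind of reactive symbolic decision the full framework hands down to the motion-primitive controller.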
New Algorithm Enables Heterogeneous Multi-Robot Search for Missing Objects
In this video, three ground robots — two A1 quadrupeds and one HSR mobile platform — quickly sweep an environment in search of a missing volleyball. They achieve this capability through online path planning and heterogeneous clustering, which accommodate the different speeds and fields of view of each robot. The algorithm supports searches with up to 50 robots/vehicles and is robust to robot/vehicle failures by replanning at runtime. This technology is geared toward tasks such as emergency response, where people may be missing during a fire or flood.
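For intuition about the allocation idea, here is a small illustrative sketch that splits search waypoints among a heterogeneous team in proportion to each robot's coverage rate (speed times field of view) and re-allocates work when a robot fails. The robot parameters and the nearest-first assignment rule are assumptions made for this example, not the actual algorithm used in the video.

```python
# Toy heterogeneous search allocation with runtime re-planning on failure.
# Robot speeds, fields of view, and positions below are hypothetical values.

import math

ROBOTS = {
    "a1_quadruped_1": {"speed": 1.2, "fov": 2.0, "pos": (0.0, 0.0)},
    "a1_quadruped_2": {"speed": 1.2, "fov": 2.0, "pos": (10.0, 0.0)},
    "hsr":            {"speed": 0.6, "fov": 1.0, "pos": (5.0, 8.0)},
}

def coverage_rate(robot):
    """Area swept per second: speed times field-of-view width."""
    return robot["speed"] * robot["fov"]

def allocate(waypoints, robots):
    """Give each active robot a share of waypoints proportional to its
    coverage rate, filling each share with the waypoints nearest to it."""
    total = sum(coverage_rate(r) for r in robots.values())
    quotas = {name: max(1, round(len(waypoints) * coverage_rate(r) / total))
              for name, r in robots.items()}
    remaining = list(waypoints)
    plan = {name: [] for name in robots}
    for name, r in sorted(robots.items(), key=lambda kv: -coverage_rate(kv[1])):
        remaining.sort(key=lambda w: math.dist(w, r["pos"]))
        take = min(quotas[name], len(remaining))
        plan[name], remaining = remaining[:take], remaining[take:]
    if remaining:  # leftovers from rounding go to the fastest robot
        fastest = max(robots, key=lambda n: coverage_rate(robots[n]))
        plan[fastest].extend(remaining)
    return plan

def replan_on_failure(plan, failed, robots):
    """Runtime robustness: fold the failed robot's unvisited waypoints back in
    and re-allocate them among the surviving robots."""
    orphaned = plan.pop(failed)
    survivors = {n: r for n, r in robots.items() if n != failed}
    for name, extra in allocate(orphaned, survivors).items():
        plan[name].extend(extra)
    return plan

if __name__ == "__main__":
    grid = [(x, y) for x in range(0, 20, 2) for y in range(0, 10, 2)]
    plan = allocate(grid, ROBOTS)
    print({n: len(w) for n, w in plan.items()})
    plan = replan_on_failure(plan, "hsr", ROBOTS)
    print({n: len(w) for n, w in plan.items()})
```

Proportional quotas are what let a slower, narrower-view robot like the HSR take a smaller share of the map without idling, and the failure handler simply re-runs the same allocation over the orphaned waypoints.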
New Method Enables Whole-Body Controllers to Become Adaptive to External Disturbances
This work describes an online gain adaptation method to enhance the robustness of whole-body controllers (WBCs) for legged robots under unknown external force disturbances. Without properly accounting for external forces, the closed-loop control system incorporating a WBC can easily become unstable, and the desired task goals may not be achievable. The proposed method serves as a low-level controller that tracks whole-body trajectories more robustly than fixed-gain feedback control. Link to abstract:
https://www.frontiersin.org/articles/10.3389/frobt.2021.788902/abstract
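As a rough, purely illustrative take on the gain-adaptation idea, the sketch below runs a 1-DoF toy system in which PD tracking gains are scaled up as an estimated external force grows. The estimation and adaptation laws here are simple stand-ins chosen for clarity, not the controller proposed in the paper.

```python
# Toy 1-DoF example of online gain adaptation under an unknown external force.
# Disturbance estimation and the gain-scaling law are illustrative assumptions.

import math

m, dt = 5.0, 0.002                       # toy mass (kg) and control period (s)
kp0, kd0 = 400.0, 40.0                   # nominal feedback gains
alpha, gain_cap = 0.02, 4.0              # adaptation rate and max gain scaling

def reference(t):
    """Desired position and velocity: a slow sinusoid."""
    return math.sin(0.5 * t), 0.5 * math.cos(0.5 * t)

def external_force(t):
    """Unknown disturbance: a constant push that switches on mid-run."""
    return 60.0 if t > 2.0 else 0.0

x, v = 0.0, 0.0                          # plant state
v_prev, u_prev, f_est = 0.0, 0.0, 0.0    # controller memory

for k in range(int(6.0 / dt)):
    t = k * dt
    # Estimate the external force from the model residual and low-pass it.
    f_raw = m * (v - v_prev) / dt - u_prev
    f_est += 0.05 * (f_raw - f_est)
    # Scale the feedback gains with the estimated disturbance magnitude (bounded).
    scale = min(gain_cap, 1.0 + alpha * abs(f_est))
    kp, kd = kp0 * scale, kd0 * scale
    # PD tracking control on the reference trajectory.
    xd, vd = reference(t)
    u = kp * (xd - x) + kd * (vd - v)
    if k % int(1.0 / dt) == 0:
        print(f"t={t:4.1f}s  tracking error={xd - x:+.4f} m  gain scale={scale:.2f}")
    # Integrate the toy dynamics (explicit Euler) with the unknown external force.
    v_prev, u_prev = v, u
    a = (u + external_force(t)) / m
    x, v = x + v * dt, v + a * dt
```

Capping the gain scaling mirrors the practical concern in the announcement: higher gains shrink tracking errors under a disturbance, but pushing them arbitrarily high would itself destabilize the closed loop.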
NSF NRT Fellowships in Ethical AI
Our lab is announcing multiple NSF Ph.D. Fellowships in Ethical AI, with start dates from 2022 through 2025! Good Systems, a UT Grand Challenge, and Texas Robotics are recruiting our first cohort of NSF Research Traineeship Ph.D. Fellows in Ethical AI. Prospective students interested in working in the HCRL can apply for doctoral admission in our department. More information can be found under the Fellowship tab: http://shorturl.at/gAEY9.
IROS Workshop Talk
In this video, PI Luis Sentis talks about efficient actuation approaches for human-centered robots:
Collaboration with The Polytechnic University of Catalonia Brings Advancement on Modeling Soft Materials
New NSF National Research Traineeship Award
The HCRL is proud to share our new NSF National Research Traineeship on Ethical AI Training for Roboticists. Congratulations to all the participants and to the Good Systems Bridging Barriers Program.
New Paper on Lower-Body Strength Augmentation Exoskeletons
Congratulations to Junhyeok Ahn, Nicolas Brissonneau, and Binghan He from the HCRL, and to Nick Paine, Orion Campbell, and Nick Nichols from Apptronik, for this great work on augmenting operator forces using full lower-body exoskeletons.
NSF NRT Grant Awarded
Our collaborative NSF National Research Traineeship grant has been awarded! Dubbed NRT-AI: Convergent, Responsible, and Ethical AI Training Experience for Roboticists (CREATE Roboticists), it will fund and train multiple generations of PhD students at the intersection of robotics, autonomous systems, and ethics. It will create a new portfolio program in ethical robotics to ensure that the technology and academic leaders emerging from this program gain the expertise and education to put robotics to good use from a social and labor perspective. We will join forces with faculty across various departments at UT to create a unique program with diverse perspectives.