Research

Links:

IEEE-RAS Technical Committee on Whole-Body Control

Robotics Research at UT Austin

Graduate Education in Robotics at UT Austin 

Cognitive Modeling with Model Predictive Control

We model human-robot interaction scenarios as dynamical systems and use Model Predictive Control with mixed-integer constraints to generate human-aware control policies. In the first scenario, an assistive robot aims to maximize productivity while minimizing the human's workload; in the second, a listening humanoid robot manages its eye-contact behavior to maximize "connection" and minimize social "awkwardness" during the interaction.

[Figure: Model of human productivity]

[Figure: Experiment on social connection using socio-cognitive models]
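The receding-horizon idea above can be sketched in a few lines. This is a toy illustration, not the lab's actual model: the dynamics, weights, and the brute-force enumeration (standing in for a mixed-integer solver) are all assumptions for the sake of a runnable example.

```python
import itertools

# Toy horizon-N MPC with one binary decision per step (e.g., "intervene" or not).
# Dynamics and weights below are illustrative assumptions, not the lab's model.
def toy_mpc(x0, horizon=4, w_prod=1.0, w_load=0.5):
    """Pick the binary action sequence minimizing cost over the horizon.

    State x = (productivity, workload); action u in {0, 1} is a robot
    intervention. Enumerating all 2**horizon sequences stands in for a
    mixed-integer solver on this tiny problem.
    """
    def step(x, u):
        prod, load = x
        # An intervention raises productivity but also the human's workload.
        return (0.9 * prod + 0.4 * u, 0.8 * load + 0.6 * u)

    best_seq, best_cost = None, float("inf")
    for seq in itertools.product([0, 1], repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = step(x, u)
            prod, load = x
            cost += -w_prod * prod + w_load * load  # maximize prod, minimize load
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0], best_cost  # receding horizon: apply only the first action

u0, cost = toy_mpc((0.0, 0.0))
```

In a real deployment the optimization would be re-solved at every control tick with the newly observed human state, which is what makes the policy "human-aware."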

Control Software Integration for Bipedal Robots

This study aims to achieve agile and versatile motion of bipedal robots. The research combines fields such as Whole-Body Operational Space Control, joint torque control, sensor fusion, and motion planning algorithms. The developed methods are generic, so they can be transferred to robots with different morphologies.
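The core computation in operational space control can be summarized in one torque equation. The sketch below, with an illustrative 3-DOF mass matrix and 2-D task Jacobian, shows the standard form tau = J^T Lambda (Kp e - Kd v) + N^T tau_posture; the matrices and gains are placeholder assumptions.

```python
import numpy as np

# Minimal operational-space control step for one configuration.
# M (joint-space inertia), J (task Jacobian), and gains are illustrative.
def osc_torques(M, J, x_err, dx, kp=100.0, kd=20.0, tau_posture=None):
    """tau = J^T Lambda (kp*x_err - kd*dx) + N^T tau_posture."""
    Minv = np.linalg.inv(M)
    Lambda = np.linalg.inv(J @ Minv @ J.T)   # task-space inertia
    F = Lambda @ (kp * x_err - kd * dx)      # task-space PD force
    J_bar = Minv @ J.T @ Lambda              # dynamically consistent inverse
    N = np.eye(M.shape[0]) - J_bar @ J       # null-space projector
    if tau_posture is None:
        tau_posture = np.zeros(M.shape[0])
    return J.T @ F + N.T @ tau_posture

M = np.eye(3)                                 # toy 3-DOF inertia
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])               # 2-D task Jacobian
tau = osc_torques(M, J, np.array([0.1, 0.0]), np.zeros(2),
                  tau_posture=np.array([0.0, 0.0, 5.0]))
```

Because the posture torque is filtered through the null-space projector N, it cannot disturb the primary task, which is what makes the decomposition transferable across morphologies: only M and J change.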

Robust Locomotion Planning

Our objective in this research is to develop 3D foot-placement planners for rough terrain that are robust to perturbations. We employ techniques such as phase-space motion planning, dynamic programming, and linear temporal logic to produce sophisticated biped and multi-contact legged behaviors in cluttered environments.

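The dynamic-programming component can be illustrated on a simplified 1-D terrain grid. Everything here, including the step-cost model and the height-difference constraint, is an illustrative assumption rather than the lab's planner.

```python
# Dynamic-programming foot-placement sketch on a 1-D terrain grid.
# Cell heights, the d**2 step cost, and the max_dh limit are illustrative.
def plan_footsteps(height, max_step=2, max_dh=0.3):
    """Return the cheapest sequence of cell indices from cell 0 to the last.

    A step of length d costs d**2 (preferring short steps); stepping onto a
    cell whose height differs by more than max_dh is forbidden, which models
    a rough-terrain reachability constraint.
    """
    n = len(height)
    INF = float("inf")
    cost = [INF] * n
    prev = [-1] * n
    cost[0] = 0.0
    for i in range(n):
        if cost[i] == INF:
            continue
        for d in range(1, max_step + 1):
            j = i + d
            if j < n and abs(height[j] - height[i]) <= max_dh:
                c = cost[i] + d * d
                if c < cost[j]:
                    cost[j], prev[j] = c, i
    if cost[-1] == INF:
        return None            # no feasible footstep sequence
    path, i = [], n - 1
    while i != -1:
        path.append(i)
        i = prev[i]
    return path[::-1]
```

For example, `plan_footsteps([0, 0.1, 1.0, 0.2, 0.3])` routes around the 1.0-high cell by stepping over it. The real 3D planners add dynamic feasibility (phase-space constraints) on top of this kind of discrete search.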

Mission-Oriented Control Architectures

We investigate software architectures on real-time operating systems to control mobile and legged humanoid robots in challenging environments. We draw on model-based techniques from multibody dynamics, space robotics, holonomic and nonholonomic mobility, reactive control, and grasping to create whole-body control structures. We then develop a computational architecture that organizes control primitives in terms of task controllers and skill abstractions.

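The task/skill organization can be sketched as two small classes: a task controller is a single control primitive, and a skill is a prioritized collection of them. The class names, the dictionary-based command channels, and the priority-by-overwrite rule are all illustrative assumptions, not the lab's actual software.

```python
# Sketch of an architecture composing task controllers into skills.
# Names and the channel-overwrite priority rule are illustrative assumptions.
class TaskController:
    """A single control primitive (e.g., hand position, gaze, posture)."""
    def __init__(self, name, compute):
        self.name = name
        self.compute = compute  # maps state -> dict of command contributions

class Skill:
    """An ordered set of task controllers; earlier tasks take priority."""
    def __init__(self, name, tasks):
        self.name = name
        self.tasks = tasks

    def command(self, state):
        # Apply low-priority tasks first; higher-priority tasks overwrite
        # their contributions channel by channel.
        cmd = {}
        for task in reversed(self.tasks):
            cmd.update(task.compute(state))
        return cmd

reach = TaskController("reach", lambda s: {"arm": s["target"] - s["hand"]})
posture = TaskController("posture", lambda s: {"arm": 0.0, "torso": -s["lean"]})
grasp_skill = Skill("grasp", [reach, posture])  # reach outranks posture
```

A real whole-body controller would resolve priorities with null-space projections rather than overwriting, but the abstraction layering (primitive tasks composed into named skills) is the point of the sketch.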

Design of High Performance Actuators

We investigate lightweight, high-performance actuators. We explore mechanical, electrical, and control design techniques to maximize power and efficiency while minimizing size and weight. Applications include legged robots and rehabilitation devices.


Whole-Body Contact Awareness and Safety

We develop estimation and control methods for reacting quickly to collisions between omnidirectional mobile platforms and their environment. To enable full-body detection of external forces, we use torque sensors located in the robot's drivetrain. Using model-based techniques, we estimate with good precision the location, direction, and magnitude of collision forces, and we develop an admittance controller that achieves a low effective mass in reaction to them.

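The two pieces of this pipeline, disturbance estimation from drivetrain torques and an admittance response, can be sketched as follows. The plant model, gains, and the single-axis simplification are illustrative assumptions.

```python
# Sketch of drivetrain-torque-based collision estimation plus admittance control.
# The single-axis model and the virtual mass/damper values are illustrative.
def external_torque(tau_measured, tau_model):
    """Estimated disturbance = measured drivetrain torque minus model prediction."""
    return tau_measured - tau_model

def admittance_step(v, f_ext, dt, m_virtual=5.0, d_virtual=10.0):
    """Integrate a virtual mass-damper: m_virtual * dv/dt = f_ext - d_virtual * v.

    A small m_virtual yields a low effective mass, so the platform
    gives way quickly when a collision force appears.
    """
    dv = (f_ext - d_virtual * v) / m_virtual
    return v + dv * dt

v = 0.0
for _ in range(100):                 # 1 s at 100 Hz under a constant 10 N push
    v = admittance_step(v, 10.0, 0.01)
# v approaches the steady-state value f_ext / d_virtual = 1.0 m/s
```

The estimated external force would normally come from `external_torque` mapped through the drive kinematics; here a constant push stands in for it. Lowering `m_virtual` makes the initial yield faster, which is the "low effective mass" behavior described above.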

Cloud Based Robotics Laboratory

In this line of research we explore access to the controller stack of humanoid robots through a web browser and a web framework. The idea is to connect multiple nodes to a hardware asset so that users can program, control, and perform experiments with robots and cloud-based laboratory equipment.

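One design question in connecting many remote nodes to one robot is arbitration: commands from independent clients must be serialized onto a single hardware asset. The queue-based arbiter below is a minimal sketch of that idea; the class, client names, and one-command-per-tick policy are illustrative assumptions.

```python
import queue

# Sketch of arbitrating multiple remote clients over one hardware asset.
# The class, client identifiers, and scheduling policy are illustrative.
class HardwareAsset:
    """Serializes commands from many cloud clients onto one robot."""
    def __init__(self):
        self.commands = queue.Queue()  # thread-safe FIFO across client sessions
        self.log = []

    def submit(self, client_id, command):
        self.commands.put((client_id, command))

    def run_once(self):
        # The robot-side loop pops one pending command per control tick.
        client_id, command = self.commands.get()
        self.log.append((client_id, command))
        return client_id, command

robot = HardwareAsset()
robot.submit("browser-1", "home_arm")
robot.submit("browser-2", "open_gripper")
first = robot.run_once()
```

In a browser-facing deployment, `submit` would be called from web-framework request handlers while the robot's control loop drains the queue, keeping the hardware side single-threaded and deterministic.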

Design of Humanoid Robots

During 2012-2013 our lab supported NASA Johnson Space Center in designing the mechatronics of the Valkyrie humanoid robot. We contributed to the technology transfer of the UT-SEA actuator and to the torque-controller design of the robot's rotary and linear series elastic actuators, and we supported embedded software design as well as whole-body controller design.


Physical Human-Robot Interaction

Our goal for this research is to achieve 99.99% safety for humans in close proximity to collaborative humanoid robots. The general approach is to create and implement theories that enable the robot to (a) recognize the intent of the human, (b) generate intelligent action plans for fast supportive responses, and (c) react compliantly to unintended physical collisions. We validate our theories on two robotic platforms: The University of Texas at Austin's Dreamer robot and NASA Johnson Space Center's (JSC) Valkyrie (R5) robot.


Robust Modeling of Human-Centered Systems

We investigate the use of frequency-domain identification for obtaining robust models of series elastic actuators. This early work focuses on identifying a lower bound on the H∞ uncertainty based on the nonlinear behavior of the plant when identified under different conditions. An antagonistic testing apparatus allows identification of the full two-input, two-output system. The aim of this work is to find a model that explains all observed test results despite physical nonlinearity. The approach guarantees that a robust model includes all previously measured behaviors, and thus predicts the stability of never-before-tested controllers.

[Figure: Output phasors plotted against linear model predictions for randomly generated condition groups]
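The basic single-frequency identification step can be sketched by correlating input and output with a complex exponential and taking the ratio. The simulated first-order plant standing in for an actuator, and all the numerical settings, are illustrative assumptions; repeating this across operating conditions is what exposes the spread used to lower-bound the uncertainty.

```python
import cmath
import math

# Sketch of single-frequency frequency-response identification by correlation.
# The simulated first-order plant and test settings are illustrative.
def freq_response(u, y, omega, dt):
    """Estimate the complex gain Y(jw)/U(jw) by correlating with e^{-jwt}."""
    U = sum(ui * cmath.exp(-1j * omega * k * dt) for k, ui in enumerate(u))
    Y = sum(yi * cmath.exp(-1j * omega * k * dt) for k, yi in enumerate(y))
    return Y / U

def simulate(tau, omega, dt, n):
    """First-order lag y' = (u - y)/tau, driven by a sine (toy actuator model)."""
    u = [math.sin(omega * k * dt) for k in range(n)]
    y, yk = [], 0.0
    for uk in u:
        yk += (uk - yk) / tau * dt   # forward-Euler integration
        y.append(yk)
    return u, y

dt, omega, tau = 0.001, 10.0, 0.05
u, y = simulate(tau, omega, dt, 20000)
G = freq_response(u, y, omega, dt)
# |G| should approach 1 / sqrt(1 + (omega*tau)**2) for a first-order lag
```

Collecting `G` at many frequencies and under varied conditions yields the phasor clouds shown in the figure; the distance between each cloud and the nominal linear model is what bounds the H∞ uncertainty from below.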