Assistive device technology has improved significantly within the last decade with the introduction of powered (i.e., robotic) prostheses and orthoses. However, there remains a critical gap between the advanced capabilities of these devices and the limited control afforded to their users. A need exists for a noninvasive human-machine interface that can accurately sense various forms of locomotion and interact intuitively with the user across multiple activities of daily living. Any such control interface must continuously sense user intent. Compared to surface electromyography (EMG), sonomyography (i.e., the evaluation of real-time dynamic ultrasound imaging of skeletal muscle) has been proposed as an alternative sensing modality for assistive device control because it can track skeletal muscle deformation from superficial to deep tissue. Additionally, the ongoing miniaturization and increasing portability of ultrasound imaging systems, along with the low computational demand of machine learning algorithms for predicting human movement, encourage sonomyographic integration with assistive devices. Recently, we demonstrated the feasibility of continuously estimating various locomotion modes from sonomyography features of anterior thigh muscle contraction, as well as continuously estimating hip, knee, and ankle joint kinematics and kinetics during various ambulatory tasks. The next step in this work is to integrate ultrasound imaging sensors into a state-of-the-art robotic lower-limb prosthesis and to evaluate various ultrasound-derived shared control strategies. This research will improve the quality of life of individuals with mobility limitations by developing and testing new human-machine interfaces that can help translate robotic assistive technologies from research settings into the daily lives of users.
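To make the continuous-estimation idea concrete, the sketch below shows one simple way a low-computational-demand model could map per-frame sonomyography features to a joint angle: closed-form ridge regression. Everything here is a hypothetical illustration with synthetic data; the feature definitions, model choice, and variable names are assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for sonomyography features: per-frame muscle
# deformation measurements (e.g., tissue-displacement or echogenicity
# features extracted from B-mode ultrasound). Purely illustrative.
n_frames, n_features = 500, 8
X = rng.normal(size=(n_frames, n_features))

# Hypothetical target: a knee-angle trace assumed (for this sketch)
# to be a noisy linear function of the features.
w_true = rng.normal(size=n_features)
y = X @ w_true + 0.1 * rng.normal(size=n_frames)

def fit_ridge(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

w = fit_ridge(X, y)
y_hat = X @ w  # continuous per-frame joint-angle estimate

# Coefficient of determination as a simple goodness-of-fit check.
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Once fitted, prediction is a single matrix-vector product per ultrasound frame, which is consistent with the low computational demand that motivates pairing this sensing modality with a wearable device.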