Huang (Raven) Huang

I am a third-year PhD student in EECS at UC Berkeley, advised by Prof. Ken Goldberg. My research focuses on robot learning for manipulation, including lateral-access mechanical search, deformable object manipulation, and grasping. Recently I have also been working on representation learning across vision and touch. I received my M.S. in ME at UT Austin, advised by Prof. Luis Sentis, where I worked on control and modeling of humans in exoskeletons.

Email  /  Google Scholar

Selected Publications
Learning Self-Supervised Representations from Vision and Touch for Active Sliding Perception of Deformable Surfaces
Justin Kerr*, Huang Huang*, Albert Wilcox, Ryan Hoque, Jeffrey Ichnowski, Roberto Calandra, and Ken Goldberg, *Equal contribution
ICRA 2023 (Submitted)

We learn a self-supervised representation across vision and touch using a contrastive loss. We collect vision-tactile pairs in the real world in a self-supervised manner. The learned representation transfers to downstream active perception tasks without fine-tuning.
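
As a rough illustration of the idea, here is a minimal InfoNCE-style contrastive loss between paired vision and touch embeddings (the function name, dimensions, and temperature are placeholders, not the paper's exact setup):

    import torch
    import torch.nn.functional as F

    def vision_touch_infonce(vision_emb, touch_emb, temperature=0.07):
        # vision_emb, touch_emb: (batch, dim) embeddings of paired observations
        v = F.normalize(vision_emb, dim=1)
        t = F.normalize(touch_emb, dim=1)
        logits = v @ t.T / temperature                      # pairwise similarities
        labels = torch.arange(v.shape[0], device=v.device)  # positives on the diagonal
        # Symmetric cross-entropy: vision -> touch and touch -> vision
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.T, labels))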
Mechanical Search on Shelves with Efficient Stacking and Destacking of Objects
Huang Huang*, Letian Fu*, Michael Danielczuk, Chung Min Kim, Zachary Tam, Jeffrey Ichnowski, Anelia Angelova, Brian Ichter, and Ken Goldberg, *Equal contribution
ISRR 2022, arXiv

We develop two policies for lateral-access mechanical search among stacked objects. Both policies use stacking and destacking actions and reveal the target object with 82–100% success in simulation, outperforming the baseline by up to 66%, and achieve 67–100% success in physical experiments.
Efficiently Learning Single-Arm Fling Motions to Smooth Garments
Lawrence Yunliang Chen*, Huang Huang*, Ellen Novoseller, Daniel Seita, Jeffrey Ichnowski, Michael Laskey, Ken Goldberg, *Equal contribution
ISRR 2022, arXiv

We efficiently learn single-arm fling motions that smooth garments from real-world experience.
Evo-NeRF: Evolving NeRF for Sequential Robot Grasping
Justin Kerr, Letian Fu, Huang Huang, Yahav Avigal, Matthew Tancik, Jeffrey Ichnowski, Angjoo Kanazawa, Ken Goldberg
CoRL 2022, Oral Presentation, OpenReview

We propose Evo-NeRF, which adds geometry regularizations that improve NeRF performance in rapid-capture settings, achieving real-time, updateable scene reconstruction for sequentially grasping table-top transparent objects. We also train a NeRF-adapted grasping network that learns to ignore floaters.
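
As one hedged sketch of what a floater-suppressing geometry regularizer can look like, an entropy penalty on ray termination weights is a common choice (the paper's exact losses may differ):

    import torch

    def nerf_loss_with_geometry_reg(pred_rgb, target_rgb, ray_weights, lam=0.01):
        # pred_rgb, target_rgb: (n_rays, 3); ray_weights: (n_rays, n_samples)
        photometric = ((pred_rgb - target_rgb) ** 2).mean()
        # Encourage each ray's termination weights to concentrate on one surface,
        # which discourages the diffuse density that shows up as floaters.
        p = ray_weights / (ray_weights.sum(dim=-1, keepdim=True) + 1e-8)
        entropy = -(p * (p + 1e-8).log()).sum(dim=-1).mean()
        return photometric + lam * entropy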
Real2Sim2Real: Self-Supervised Learning of Physical Single-Step Dynamic Actions for Planar Robot Casting
Vincent Lim*, Huang Huang*, Lawrence Yunliang Chen, Jonathan Wang, Jeffrey Ichnowski, Daniel Seita, Michael Laskey, Ken Goldberg, *Equal contribution
ICRA 2022, paper

We collect planar robot casting data in the real world in a self-supervised way and use it to tune a simulator in Isaac Gym. We then collect additional data in the tuned simulator. Combined with upsampled real data, we learn a planar robot casting policy that reaches a given target, attaining median error distances (as % of cable length) ranging from 8% to 14%.
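
The real-to-sim tuning step can be sketched as a search over simulator parameters that minimizes the gap between simulated and real outcomes (the simulate function and parameter ranges below are placeholders standing in for the Isaac Gym setup):

    import numpy as np

    def tune_simulator(actions, real_endpoints, simulate, n_samples=500, seed=0):
        # actions: casting actions executed on the real robot
        # real_endpoints: (n, 2) observed final cable endpoint positions
        rng = np.random.default_rng(seed)
        best_params, best_err = None, np.inf
        for _ in range(n_samples):
            params = {"damping": rng.uniform(0.0, 1.0),
                      "friction": rng.uniform(0.0, 1.0)}  # placeholder ranges
            sim_endpoints = np.array([simulate(a, params) for a in actions])
            err = np.linalg.norm(sim_endpoints - real_endpoints, axis=1).mean()
            if err < best_err:
                best_params, best_err = params, err
        return best_params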
Adaptive Compliance Shaping with Human Impedance Estimation
Huang Huang, Henry F Cappel, Gray C Thomas, Binghan He, Luis Sentis
ACC 2020, paper

We propose an amplification controller with online-adapted exoskeleton compliance, which takes advantage of a novel online human stiffness estimator based on surface electromyography (sEMG) sensors and stretch sensors attached to the human's forearm and upper arm. Online stiffness estimation is shown to improve the bandwidth of strength amplification while remaining robustly stable.
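
A toy sketch of the control idea (the linear sEMG-to-stiffness map and all gains are assumptions for illustration, not the paper's calibrated estimator):

    import numpy as np

    def estimate_human_stiffness(semg_rms, k_min=50.0, k_max=400.0):
        # semg_rms: normalized RMS muscle activation in [0, 1]
        # Assumed linear map from activation to arm stiffness (N*m/rad)
        return k_min + (k_max - k_min) * np.clip(semg_rms, 0.0, 1.0)

    def adapt_exo_compliance(k_human, c_max=0.02, k_ref=100.0):
        # A stiffer human arm tolerates a stiffer (less compliant) exoskeleton,
        # so compliance is scheduled inversely with the stiffness estimate.
        return c_max * k_ref / (k_human + k_ref)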