Songlin Xu -- Human AI Integration Research

[Google Scholar]   [Linkedin]   [Github]


My name is Songlin Xu. I am a PhD student (advised by Prof. Xinyu Zhang) in the Department of Electrical and Computer Engineering at the University of California, San Diego. I graduated from the University of Science and Technology of China (USTC).

My research focuses on the integration of human and artificial intelligence (AI), which aims to:

(a) explore novel human-oriented machine learning algorithms that augment human abilities;

(b) use human data to make current machine learning models more adaptive to humans.

Keywords: Human-Oriented Machine Learning, Affective Computing, Mobile Sensing, Intelligent Cognition Assistant, Brain-Computer Interface

CV is available upon request.

Contact Me


News
2021.2.1: TeethTap was accepted to IUI 2021!

2020.8.6: Hydrauio was accepted as a UIST 2020 poster!

2020.7.1: Graduated from USTC. Goodbye, USTC!

2020.6.11: FingerTrak was accepted by IMWUT 2020!

2019.7-2020.2: Internship and graduation thesis research at Cornell University (Advisor: Prof. Cheng Zhang)

2019.8: One paper was accepted by WRC SARA 2019 and was a best paper finalist!

2019.6: Excellent Project Award in Student Innovation Program of USTC

2019.5: The third-class prize of Student Innovation Program at Chinese Academy of Sciences

2019.1-2019.3: Internship at Cornell University (Advisor: Prof. Cheng Zhang)

2019.1: One paper was accepted by IEEE Robotics and Automation Letters!

2018.8-2018.9: Internship at Carnegie Mellon University (Advisor: Prof. Ding Zhao)

2018.7-2018.8: Internship at the University of Michigan -- Ann Arbor (Advisor: Prof. Ding Zhao)

2018.6: The second-class prize in Robocon of China

2018.1-2018.3: Internship at Dartmouth College (Advisor: Prof. Xing-Dong Yang)

2017.10: Joined the Robotics Lab at USTC under the supervision of Prof. Xiaoping Chen

2017.10: The third-class prize in the divisional competition of China Aeromodelling Design Challenge

2017.9: Second place in the 2017 RoboGame at USTC


TeethTap: Recognizing Discrete Teeth Gestures Using Motion and Acoustic Sensing on an Earpiece

In this paper, we present TeethTap, a novel eyes-free and hands-free input technique, which can recognize up to 13 discrete teeth tapping gestures. TeethTap adopts a wearable 3D printed earpiece with an IMU sensor and a contact microphone behind both ears, which works in tandem to detect jaw movement and sound data, respectively.

  • Wei Sun, Franklin Mingzhe Li, Benjamin Steeper, Songlin Xu, Feng Tian, Cheng Zhang

  • IUI 2021

  • Video
  • Pdf
  • 2021
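As an illustrative-only sketch of the sensing idea described above (not the paper's actual pipeline), one could fuse simple statistical features from an IMU window and a contact-microphone window and classify gestures with a nearest-centroid rule. All function names and data below are hypothetical:

```python
import numpy as np

def extract_features(imu_window, mic_window):
    """Concatenate mean / std / peak features from both sensing modalities."""
    feats = []
    for sig in (imu_window, mic_window):
        sig = np.asarray(sig, dtype=float)
        feats.extend([sig.mean(), sig.std(), np.abs(sig).max()])
    return np.array(feats)

class NearestCentroid:
    """Minimal nearest-centroid classifier: one centroid per gesture class."""
    def fit(self, X, y):
        self.labels_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.labels_])
        return self

    def predict(self, X):
        # distance from each sample to each class centroid
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.labels_[d.argmin(axis=1)]
```

In practice a richer feature set and classifier would be needed to separate 13 gestures; this only shows how the two modalities can feed one feature vector.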

FingerTrak: Continuous 3D Hand Pose Tracking by Deep Learning Hand Silhouettes Captured by Miniature Thermal Cameras on Wrist

In this paper, we present FingerTrak, a minimally obtrusive wristband that enables continuous 3D finger tracking and hand pose estimation with four miniature thermal cameras mounted closely on a form-fitting wristband. FingerTrak explores the feasibility of continuously reconstructing entire hand postures (the positions of 20 finger joints) without needing to see all of the fingers.

  • Fang Hu, Peng He, Songlin Xu, Yin Li and Cheng Zhang

  • Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. June 2020

  • Video
  • Pdf
  • 2020
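To make the learning problem concrete: the task maps stacked silhouette images from four wrist cameras to a 60-dimensional vector (20 joints x 3 coordinates). The paper uses a deep network; the sketch below substitutes ridge regression on synthetic data purely to show the input/output shapes, and every dimension and name here is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, img_pixels, n_cams, n_joints = 200, 16 * 16, 4, 20

# stacked, flattened silhouettes from the four cameras (synthetic)
X = rng.normal(size=(n_samples, n_cams * img_pixels))
W_true = rng.normal(size=(X.shape[1], n_joints * 3))
Y = X @ W_true  # synthetic joint positions: (x, y, z) per joint

# ridge regression in closed form as a stand-in for the deep model
lam = 1e-3
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)
pred = X @ W  # shape (200, 60): 20 joints x 3 coordinates
```

A convolutional network would replace the linear map `W` in the real system, but the supervision signal (silhouettes in, joint coordinates out) has this shape.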

Hydrauio: Extending Interaction Space on the Pen through Hydraulic Sensing and Haptic Output

We explored a fluid-based interface (Hydrauio) on the pen body to extend the interaction space of human-pen interaction. Users can perform finger gestures on the pen for input and receive haptic feedback with different profiles from the fluid surface. User studies showed that Hydrauio achieves more than 92% accuracy in finger gesture recognition and that users can distinguish different haptic profiles with more than 95% accuracy. Finally, we present application scenarios that demonstrate Hydrauio's potential to extend the interaction space of human-pen interaction.

  • Songlin Xu, Zhiyuan Wu, Shunhong Wang, Rui Fan and Nan Lin

  • UIST 2020, Adjunct

  • Video
  • Pdf
  • 2020

Exploring Hardness and Geometry Information through Active Perception

In this paper, we propose a framework that combines active perception with a motion planning algorithm to obtain both hardness and geometry information about an object while maintaining working efficiency. In this framework, a stylus mounted on a robotic arm actively explores hardness and geometry on the object's surface, while a depth camera captures raw 3D shape information. A novel motion planning algorithm keeps the exploration effective and time-saving. Experimental results show that our framework performs well and explores global hardness and geometry information efficiently.
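As a hypothetical sketch of the time-saving aspect only: given candidate probe points on the object surface (e.g., sampled from the depth camera's point cloud), a planner can visit them in a greedy nearest-neighbor order so the stylus travels a short path between probes. The paper's planner is more sophisticated; this illustrates just the ordering step, with made-up data:

```python
import numpy as np

def greedy_tour(points, start_idx=0):
    """Order probe points greedily: always move to the nearest unvisited point."""
    points = np.asarray(points, dtype=float)
    unvisited = set(range(len(points)))
    order = [start_idx]
    unvisited.remove(start_idx)
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: np.linalg.norm(points[i] - last))
        order.append(nxt)
        unvisited.remove(nxt)
    return order
```

For four collinear points at x = 0, 3, 1, 2, the greedy order starting from index 0 sweeps left to right (indices 0, 2, 3, 1) instead of jumping back and forth, which is the efficiency the framework aims for.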

IMU-Based Active Safe Control of a Variable Stiffness Soft Actuator

In this paper, we present a novel soft actuator whose stiffness is tunable in multiple ways, with more than a 10-fold stiffness enhancement achievable, enabling it to carry heavy loads while maintaining excellent dexterity and compliance. We also propose, for the first time, an active safe control strategy based on inertial measurement units (IMUs).

Estimating Risk Levels of Driving Scenarios through Analysis of Driving Styles for Autonomous Vehicles

To operate safely on the road, autonomous vehicles must not only identify the objects in front of them but also automatically estimate the risk level those objects pose. Different objects clearly present different levels of danger to an autonomous vehicle, so an evaluation system is needed to determine these danger levels automatically; a system defined entirely by humans would be too subjective and incomplete.

  Projects for Fun

Remote Control Underwater Vehicle

We designed a remote-controlled underwater vehicle (ROV) based on the ArduSub system. A camera mounted on the ROV captures video underwater, and a contact microphone attached to the ROV's body captures acoustic signals while it operates in the water.

Drawing Robot in the Robotics Competition

I participated in the 2017 RoboGame, a robot competition at USTC, and built a robot that can draw almost anything given a black-and-white photograph. It can also recognize the characters in the photo and write them in Xiaozhuan (small seal script), an ancient style of calligraphy.