Songlin Xu -- Human AI Integration Research

[Google Scholar]   [Linkedin]   [Github]

Bio

I'm a PhD student (advised by Prof. Xinyu Zhang) in the Department of Electrical and Computer Engineering at the University of California, San Diego. I graduated from the University of Science and Technology of China.

My research focuses on building Human-AI Integration systems and intelligent strategies that sense and regulate human behavior to augment the human mind, cognition, and capabilities, harnessing human-oriented machine learning, human-in-the-loop training, and human-computer interaction.



Contact Me

soxu@ucsd.edu

Recent News

2023.11.10: Check out our recent PeerEdu paper, currently on arXiv.


2023.10.30: Check out our recent EduTwin paper, currently on arXiv.


2023.8.10: Check out our recent Eyerofeedback paper, currently on arXiv.


2023.5.10: Check out our recent CogAgent paper, currently on arXiv.


2023.1.16: TimeCare is accepted by CHI 2023!


2022.12.21: StealthyIMU is accepted by NDSS 2023 (acceptance ratio: 58/398 = 14.6%)!


2022.5.20: Check out our recent HeadText paper, currently on arXiv.


2021.2.1: TeethTap is accepted by IUI 2021!


2020.8.6: Hydrauio is presented as a poster at UIST 2020!


2020.6.11: FingerTrak is accepted by IMWUT 2020!





Publications



Leveraging generative artificial intelligence to simulate student learning behavior

Student simulation presents a transformative approach to enhance learning outcomes, advance educational research, and ultimately shape the future of effective pedagogy. We explore the feasibility of using large language models (LLMs), a remarkable achievement in AI, to simulate student learning behaviors.

Oculomotor trajectory mapping on body as an effective intervention to enhance attention

Increasing individuals’ awareness of their body signals can lead to improved interoception, enabling the brain to estimate current body states more accurately and promptly. However, certain body signals, such as eye movements, often go unnoticed by individuals themselves. This study aimed to test the hypothesis that providing eye-movement-correlated tactile feedback on the body enhances individuals’ awareness of their attentive states, subsequently improving attention.

Peer attention enhances student learning

Human attention is susceptible to the influence of others. In education, this characteristic has motivated research on peer effects in student learning. However, a more granular examination is essential to elucidate how peer effects precisely impact the learning process. Here, we test the hypothesis that peer visual attention plays a pivotal role in modulating students' gaze patterns, thereby influencing their attention and learning experiences and ultimately contributing to overall learning outcomes.

Modeling human cognition with a hybrid deep reinforcement learning agent

A human cognition model could help us gain insight into how human cognitive behaviors respond to external stimuli, pave the way for synthetic data generation, and assist in designing adaptive interventions for cognition regulation. However, when the external stimuli are highly dynamic, it becomes hard to model how the stimuli influence human cognitive behaviors.

Augmenting human cognition with an AI-mediated intelligent visual feedback

In spite of the giant leap forward of AI technologies in the past decade, human cognitive ability still far eludes machine systems. This situation will likely persist for a long time, since the current wave of the AI revolution relies heavily on massive data, which differs from human cognition in principle and is unlikely to surpass humans' ability for generalization and logical reasoning.

StealthyIMU: stealing permission-protected private information from smartphone voice assistant using zero-permission sensors

Voice User Interfaces (VUIs) are becoming an indispensable module that enables hands-free interaction between human users and smartphones. Unfortunately, recent research revealed a side channel that allows zero-permission motion sensors to eavesdrop on the VUI voices from the co-located smartphone loudspeaker. Nonetheless, these threats are limited to leaking a small set of digits and hot words. In this paper, we propose StealthyIMU, a new threat that uses motion sensors to steal permission-protected private information from the VUIs. We develop a set of efficient models to detect and extract private information, taking advantage of the deterministic structures in the VUI responses.
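The core idea of exploiting deterministic response structure can be illustrated with a deliberately simplified sketch: here, a known response signature is located inside a noisy motion-sensor trace with normalized cross-correlation. This is only a hypothetical stand-in for the paper's learned models, and all signal shapes and names are illustrative.

```python
import numpy as np

# Illustrative sketch (not the paper's models): locate a known, deterministic
# VUI response template inside a zero-permission motion-sensor trace using
# normalized cross-correlation.

def detect_template(trace: np.ndarray, template: np.ndarray):
    """Slide the template over the trace; return (best_offset, best_score).

    Scores are normalized correlations in [-1, 1]; a score near 1 at some
    offset suggests the deterministic response occurs there.
    """
    t = (template - template.mean()) / (template.std() + 1e-9)
    best_offset, best_score = -1, -2.0
    for i in range(len(trace) - len(template) + 1):
        seg = trace[i:i + len(template)]
        s = (seg - seg.mean()) / (seg.std() + 1e-9)
        score = float(np.dot(s, t)) / len(t)
        if score > best_score:
            best_offset, best_score = i, score
    return best_offset, best_score

# Embed the template in background sensor noise and recover its position.
rng = np.random.default_rng(0)
template = rng.normal(size=100)            # deterministic response signature
trace = rng.normal(scale=0.2, size=1000)   # background IMU noise
trace[300:400] += template                 # response occurs at offset 300
offset, score = detect_template(trace, template)
```

Because the response is deterministic, even a weak sensor-domain copy of it correlates strongly at the correct offset while random background noise does not.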

HeadText: exploring hands-free text entry using head gestures by motion sensing on a smart earpiece

We present HeadText, a hands-free text-entry technique on a smart earpiece based on motion sensing. Users enter text using only seven head gestures for key selection, word selection, word commitment, and word cancellation. Head gesture recognition is supported by motion sensing on the smart earpiece, which captures head-movement signals, together with machine learning algorithms...
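The recognition step can be sketched as a standard windowed-IMU classification pipeline. The features, window sizes, and classifier below are hypothetical stand-ins (simple per-axis statistics plus a random forest on synthetic data), not HeadText's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative sketch (hypothetical features, not the paper's pipeline):
# classify 7 head gestures from windowed 6-axis IMU signals (accel + gyro)
# captured on the earpiece, using simple per-axis statistics as features.

GESTURES = ["left", "right", "up", "down", "nod", "shake", "tilt"]

def extract_features(window: np.ndarray) -> np.ndarray:
    """window: (n_samples, 6) IMU readings -> 18-dim feature vector."""
    return np.concatenate([
        window.mean(axis=0),                     # per-axis mean
        window.std(axis=0),                      # per-axis variability
        window.max(axis=0) - window.min(axis=0)  # per-axis range
    ])

def make_window(label: int, rng: np.random.Generator) -> np.ndarray:
    """Synthetic IMU window with a deterministic per-gesture signature."""
    signature = np.sin(np.arange(6) + label) * (label + 1)
    return signature + rng.normal(scale=0.3, size=(50, 6))

rng = np.random.default_rng(0)
X = [extract_features(make_window(g, rng))
     for g in range(len(GESTURES)) for _ in range(40)]
y = [g for g in range(len(GESTURES)) for _ in range(40)]

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

With only seven target classes, even such coarse statistical features separate gestures well; a real system would segment continuous motion streams before classification.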

FingerTrak: continuous 3D hand pose tracking by deep learning hand silhouettes captured by miniature thermal cameras on wrist

In this paper, we present FingerTrak, a minimally obtrusive wristband that enables continuous 3D finger tracking and hand pose estimation with four miniature thermal cameras mounted closely on a form-fitting wristband. FingerTrak explores the feasibility of continuously reconstructing the entire hand posture (20 finger joint positions) without the need to see all of the fingers.

  • Fang Hu, Peng He, Songlin Xu, Yin Li and Cheng Zhang

  • Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT/Ubicomp). June 2020

  • Video
  • Pdf
  • 2020
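The silhouettes-to-pose mapping above can be sketched with a deliberately simplified stand-in: flattened thermal frames from four cameras are regressed onto 20 three-dimensional joint positions with ridge regression on synthetic data. The paper trains a deep network instead; all shapes, resolutions, and names here are illustrative.

```python
import numpy as np

# Illustrative sketch: map four low-resolution thermal silhouette frames
# from a wristband to 20 3D finger-joint positions (60 coordinates).
N_CAMERAS, H, W = 4, 16, 16   # four miniature thermal cameras (toy resolution)
N_JOINTS = 20                 # full hand pose: 20 joints x (x, y, z)

def flatten_frames(frames: np.ndarray) -> np.ndarray:
    """frames: (N_CAMERAS, H, W) silhouettes -> flat feature vector."""
    return frames.reshape(-1)

rng = np.random.default_rng(0)
# Synthetic data: joint positions linearly encoded into silhouette pixels.
true_map = rng.normal(size=(N_CAMERAS * H * W, N_JOINTS * 3))
poses = rng.normal(size=(200, N_JOINTS * 3))            # training poses
frames = poses @ true_map.T + rng.normal(scale=0.01,
                                         size=(200, N_CAMERAS * H * W))

# Ridge regression: W_hat = (X^T X + lam I)^-1 X^T Y
lam = 1e-3
X, Y = frames, poses
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

def predict_pose(silhouettes: np.ndarray) -> np.ndarray:
    """Predict (20, 3) joint positions from (4, 16, 16) thermal frames."""
    return (flatten_frames(silhouettes) @ W_hat).reshape(N_JOINTS, 3)
```

The linear model only works because the synthetic pixels are generated linearly from the pose; real silhouette-to-pose mappings are highly nonlinear, which is why a learned deep model is needed in practice.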

TeethTap: recognizing discrete teeth gestures using motion and acoustic sensing on an earpiece

In this paper, we present TeethTap, a novel eyes-free and hands-free input technique that can recognize up to 13 discrete teeth-tapping gestures. TeethTap adopts a wearable 3D-printed earpiece with an IMU sensor and a contact microphone behind both ears, which work in tandem to detect jaw movement and sound data, respectively.

  • Wei Sun, Franklin Mingzhe Li, Benjamin Steeper, Songlin Xu, Feng Tian, Cheng Zhang

  • IUI 2021

  • Video
  • Pdf
  • 2021

Hydrauio: extending interaction space on the pen through hydraulic sensing and haptic output

We have explored a fluid-based interface (Hydrauio) on the pen body to extend the interaction space of human-pen interaction. Users can perform finger gestures on the pen for input and also receive haptic feedback with different profiles from the fluid surface. User studies showed that Hydrauio achieves an accuracy of more than 92% for finger gesture recognition, and that users can distinguish different haptic profiles with an accuracy of more than 95%. Finally, we present application scenarios that demonstrate the potential of Hydrauio to extend the interaction space of human-pen interaction.

  • Songlin Xu, Zhiyuan Wu, Shunhong Wang, Rui Fan and Nan Lin

  • UIST 2020, Adjunct

  • Video
  • Pdf
  • 2020

Exploring hardness and geometry information through active perception

In this paper, a framework combining active perception and a motion planning algorithm is proposed to obtain both hardness and geometry information of an object while ensuring working efficiency. In this framework, a stylus mounted on a robotic arm actively explores hardness and geometry information on the surface of the object, and a depth camera is used to capture raw 3D shape information. A novel motion planning algorithm is proposed to keep the exploration effective and time-efficient. Experimental results show that our framework performs well and can explore global hardness and geometry information efficiently.

IMU-based active safe control of a variable stiffness soft actuator

In this paper, a novel soft actuator is presented whose stiffness is tunable in multiple ways, with more than a 10-fold stiffness enhancement achievable, making it able to carry heavy loads while maintaining excellent dexterity and compliance. Meanwhile, we propose, for the first time, an active safe control strategy based on inertial measurement units (IMUs).

Estimating risk levels of driving scenarios through analysis of driving styles for autonomous vehicles

In order to operate safely on the road, autonomous vehicles need not only to identify objects in front of them but also to automatically estimate the risk level those objects pose. Different objects clearly present different levels of danger to an autonomous vehicle, so an evaluation system is needed to automatically determine the danger level of the object in front of the vehicle. A system defined entirely by humans would be too subjective and incomplete.