Songlin Xu -- Human AI Integration Research

[Google Scholar]   [Linkedin]   [Github]

Bio

I'm a third-year PhD student (advised by Prof. Xinyu Zhang) in the Department of Electrical and Computer Engineering at the University of California, San Diego. I graduated from the University of Science and Technology of China.

My research focuses on building human-AI integration systems and intelligent strategies for sensing and regulating human behavior, augmenting the human mind, cognition, and capabilities by harnessing human-oriented foundation models, human-in-the-loop deep reinforcement learning, and human-computer interaction.



Contact Me

soxu@ucsd.edu

Recent News

2024.4.21: Check out our recent EduAgent paper, now on arXiv


2023.10.30: Check out our recent EduTwin paper, now on arXiv


2023.5.10: Check out our recent ReactiveAI paper, now on arXiv


2023.1.16: TimeCare is accepted by CHI 2023!


2022.12.21: StealthyIMU is accepted by NDSS 2023 (Acceptance ratio: 58/398=14.6%)!


2021.2.1: TeethTap is accepted by IUI 2021!


2020.8.6: Hydrauio is presented as a UIST 2020 poster!


2020.6.11: FingerTrak is accepted by IMWUT 2020!





Publication



EduAgent: Generative Student Agents in Learning

Student simulation in online education is important for addressing the dynamic learning behaviors of students with diverse backgrounds. Existing simulation models based on deep learning usually need massive training data and lack prior knowledge of educational contexts. Large language models (LLMs) may contain such prior knowledge, since they are pre-trained on a large corpus. However, because student behaviors are dynamic and multifaceted, with individual differences, directly prompting LLMs is neither robust nor accurate enough to capture fine-grained interactions among diverse student personas, learning behaviors, and learning outcomes. This work tackles this problem by presenting a newly annotated, fine-grained, large-scale dataset and proposing EduAgent, a novel generative agent framework that incorporates cognitive prior knowledge (i.e., theoretical findings from cognitive science) to guide LLMs to first reason about correlations among various behaviors and then run simulations. Our two experiments show that EduAgent can not only mimic and predict the learning behaviors of real students but also generate realistic learning behaviors for virtual students without real data.

Leveraging generative artificial intelligence to simulate student learning behavior

Student simulation presents a transformative approach to enhancing learning outcomes, advancing educational research, and ultimately shaping the future of effective pedagogy. We explore the feasibility of using large language models (LLMs), a remarkable achievement in AI, to simulate student learning behaviors.

Modeling human logical reasoning process in dynamic environmental stress with cognitive agents

A human cognition model could help us gain insight into how human cognitive behaviors respond to external stimuli, pave the way for synthetic data generation, and assist in the design of adaptive interventions for cognition regulation. When the external stimuli are highly dynamic, however, it becomes hard to model how the stimuli influence human cognitive behaviors.

Augmenting human cognition with an AI-mediated intelligent visual feedback

In spite of the giant leap forward of AI technologies in the past decade, human cognitive ability still far eludes machine systems. The situation will likely remain so for a long time, since the current wave of the AI revolution relies heavily on massive data, which differs from human cognition in principle and is unlikely to surpass humans' ability for generalization and logical reasoning.

StealthyIMU: stealing permission-protected private information from smartphone voice assistant using zero-permission sensors

Voice User Interfaces (VUIs) are becoming an indispensable module that enables hands-free interaction between human users and smartphones. Unfortunately, recent research revealed a side channel that allows zero-permission motion sensors to eavesdrop on VUI voices from the co-located smartphone loudspeaker. Nonetheless, these threats are limited to leaking a small set of digits and hot words. In this paper, we propose StealthyIMU, a new threat that uses motion sensors to steal permission-protected private information from VUIs. We develop a set of efficient models to detect and extract private information, taking advantage of the deterministic structures in VUI responses.

FingerTrak: continuous 3D hand pose tracking by deep learning hand silhouettes captured by miniature thermal cameras on wrist

In this paper, we present FingerTrak, a minimally obtrusive wristband that enables continuous 3D finger tracking and hand pose estimation with four miniature thermal cameras mounted closely on a form-fitting wristband. FingerTrak explores the feasibility of continuously reconstructing the entire hand posture (20 finger joint positions) without the need to see all fingers.

  • Fang Hu, Peng He, Songlin Xu, Yin Li and Cheng Zhang

  • Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT/Ubicomp). June 2020

  • Video
  • Pdf
  • 2020

TeethTap: recognizing discrete teeth gestures using motion and acoustic sensing on an earpiece

In this paper, we present TeethTap, a novel eyes-free and hands-free input technique, which can recognize up to 13 discrete teeth tapping gestures. TeethTap adopts a wearable 3D printed earpiece with an IMU sensor and a contact microphone behind both ears, which works in tandem to detect jaw movement and sound data, respectively.

  • Wei Sun, Franklin Mingzhe Li, Benjamin Steeper, Songlin Xu, Feng Tian, Cheng Zhang

  • IUI 2021

  • Video
  • Pdf
  • 2021

Hydrauio: extending interaction space on the pen through hydraulic sensing and haptic output

We have explored a fluid-based interface (Hydrauio) on the pen body to extend the interaction space of human-pen interaction. Users can perform finger gestures on the pen for input and also receive haptic feedback of different profiles from the fluid surface. Our user studies showed that Hydrauio achieves an accuracy of more than 92% for finger gesture recognition, and users can distinguish different haptic profiles with an accuracy of more than 95%. Finally, we present application scenarios to demonstrate Hydrauio's potential to extend the interaction space of human-pen interaction.

  • Songlin Xu, Zhiyuan Wu, Shunhong Wang, Rui Fan and Nan Lin

  • UIST 2020, Adjunct

  • Video
  • Pdf
  • 2020

Exploring hardness and geometry information through active perception

In this paper, a framework combining active perception and a motion planning algorithm is proposed to obtain both the hardness and geometry information of an object while ensuring working efficiency. In this framework, a stylus mounted on a robotic arm actively explores hardness and geometry information on the surface of the object, and a depth camera is used to capture raw 3D shape information. A novel motion planning algorithm is proposed to keep the exploration operative and time-saving. Experimental results show that our framework performs well and can explore global hardness and geometry information efficiently.

IMU-based active safe control of a variable stiffness soft actuator

In this paper, a novel soft actuator is presented whose stiffness is tunable in multiple ways, with more than a 10-fold stiffness enhancement achievable, enabling it to carry heavy loads while maintaining excellent dexterity and compliance. Meanwhile, we propose, for the first time, an active safe control strategy based on inertial measurement units (IMUs).

Estimating risk levels of driving scenarios through analysis of driving styles for autonomous vehicles

In order to operate safely on the road, autonomous vehicles need not only to identify the objects in front of them but also to automatically estimate the risk level those objects pose. Different objects clearly pose different levels of danger to an autonomous vehicle, so an evaluation system is needed to determine the danger level of the object ahead automatically. Such a system would be too subjective and incomplete if it were defined entirely by humans.