Publications
Conference (International): Detecting and Recovering from Human Errors using Multimodal Sensor Data Obtained by Smartphone
Kazuki Ichii (Keio University), Kaori Ikematsu, Toshiya Isomoto, Kunihiro Kato (Tokyo University of Technology), Yuta Sugiura (Keio University)
The 31st International Conference on Intelligent User Interfaces (ACM IUI 2026)
2026.3.22
Touchscreen devices such as smartphones and tablets are ubiquitous, yet unintended touches and selection errors still impair user experience. We present a multimodal method that detects touch-level human errors from motion, touch, gaze, and facial cues on smartphones, and triggers lightweight, user-controlled recovery suggestions. In a study with 15 participants (15,636 touch-up events; 2,700 labeled errors), our models achieved strong performance in both personalized and cross-user settings, measured by ROC-AUC. With personalized models, XGBoost reached near-ceiling performance (Text: ROC-AUC 1.00, Acc. 0.99; Image: ROC-AUC 1.00, Acc. 0.99). For cross-user generalization under leave-one-user-out (LOUO) evaluation, an RNN achieved high performance on text typing (ROC-AUC 0.99, Acc. 0.96) and moderate performance on image selection (ROC-AUC 0.68, Acc. 0.68). We further report stepwise personalization trends and a camera-off ablation to clarify modality contributions, and discuss implications for practical recovery-oriented mobile UIs.
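To make the evaluation protocol concrete, the sketch below shows a leave-one-user-out split scored with ROC-AUC and accuracy, as described in the abstract. It is a minimal illustration, not the paper's released code: the feature matrix, labels, and user IDs are random placeholders, and a gradient-boosted classifier stands in for both the personalized XGBoost and the cross-user RNN models.

```python
# Minimal LOUO evaluation sketch (hypothetical data; the paper's
# features span motion, touch, gaze, and facial cues per touch-up event).
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import roc_auc_score, accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n_users, n_events, n_features = 15, 15636, 32
X = rng.normal(size=(n_events, n_features))       # placeholder feature vectors
y = rng.integers(0, 2, size=n_events)             # placeholder error labels (1 = error)
groups = rng.integers(0, n_users, size=n_events)  # which user produced each event

# Hold out one user per fold: train on 14 users, test on the remaining one.
aucs, accs = [], []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups):
    model = XGBClassifier(n_estimators=200, eval_metric="logloss")
    model.fit(X[train_idx], y[train_idx])
    prob = model.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], prob))
    accs.append(accuracy_score(y[test_idx], prob >= 0.5))

print(f"LOUO ROC-AUC: {np.mean(aucs):.2f}, Acc.: {np.mean(accs):.2f}")
```

Because the held-out user contributes no training data, this protocol measures cross-user generalization; the personalized setting instead trains and tests within each user's own data.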
Paper: Detecting and Recovering from Human Errors using Multimodal Sensor Data Obtained by Smartphone (External site)