Identifying hand use and hand roles after stroke using egocentric video

IEEE Journal of Translational Engineering in Health and Medicine (JTEHM)

Objective: Upper limb (UL) impairment is common after stroke and impacts quality of life. UL function evaluated in the clinic may not reflect actual use in activities of daily living (ADLs), and current approaches for at-home assessment rely on self-report and lack detail about hand function. Wrist-worn accelerometers have been used to capture UL use, but they likewise reveal little about hand function. In response, a wearable system is proposed that combines an egocentric camera with computer vision to identify hand use (hand-object interactions) and the role of the more-affected hand (stabilizer or manipulator) in unconstrained environments.

Methods: Nine stroke survivors performed ADLs in a home simulation laboratory while wearing an egocentric camera. Motion, hand shape, colour, and hand-size-change features were extracted from the videos and fed into random forest classifiers to detect hand use and classify hand roles. Leave-one-subject-out cross-validation (LOSOCV) and leave-one-task-out cross-validation (LOTOCV) were used to evaluate the robustness of the algorithms.

Results: LOSOCV and LOTOCV F1-scores for detecting more-affected hand use were 0.64 ± 0.24 and 0.76 ± 0.23, respectively; for the less-affected hand, they were 0.72 ± 0.20 and 0.82 ± 0.22. For hand role classification, LOSOCV and LOTOCV F1-scores were 0.70 ± 0.19 and 0.68 ± 0.23 for the more-affected hand, and 0.59 ± 0.23 and 0.65 ± 0.28 for the less-affected hand.

Conclusion: These results demonstrate the feasibility of predicting hand use and hand roles of stroke survivors from egocentric video.
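As a rough illustration of the evaluation protocol described above, the sketch below assumes scikit-learn and synthetic stand-in data; the feature dimensions, labels, and dataset shapes are illustrative assumptions, not the authors' pipeline. It trains a random forest on per-frame features and scores it with leave-one-subject-out cross-validation, using the subject ID as the grouping variable and reporting mean ± SD F1 as in the abstract.

    # Minimal sketch (assumed, not the authors' code): random forest +
    # leave-one-subject-out cross-validation, analogous to the LOSOCV
    # protocol in the abstract. All data below are synthetic stand-ins.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(0)

    # Illustrative shapes: 9 subjects, ~200 frames each, one feature
    # vector per frame (e.g., motion, hand shape, colour, hand-size change).
    n_subjects, frames_per_subject, n_features = 9, 200, 32
    X = rng.normal(size=(n_subjects * frames_per_subject, n_features))
    y = rng.integers(0, 2, size=n_subjects * frames_per_subject)  # 1 = hand-object interaction
    groups = np.repeat(np.arange(n_subjects), frames_per_subject)  # subject ID per frame

    logo = LeaveOneGroupOut()
    scores = []
    for train_idx, test_idx in logo.split(X, y, groups):
        # Train on all subjects but one; test on the held-out subject.
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(f1_score(y[test_idx], clf.predict(X[test_idx])))

    # Mean ± SD F1 across held-out subjects, matching how results are reported.
    print(f"LOSOCV F1: {np.mean(scores):.2f} ± {np.std(scores):.2f}")

Replacing the subject IDs in groups with task IDs would yield the analogous leave-one-task-out (LOTOCV) evaluation.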