Moving Towards Combining Deep Feature Learning and Domain Knowledge-Guided Feature Engineering for Surface Electromyography-Based Hand Movement Recognition
Keywords:
unpredictable, and nonstationary

Abstract
Surface electromyography (sEMG)-based hand movement recognition is a machine-learning-driven
decision-making challenge that is crucial to the reliable operation of noninvasive neural interfaces like
rehabilitation robots and myoelectric prostheses. Despite recent advances in end-to-end deep feature learning
with deep learning models, the noisy, random, and nonstationary nature of sEMG signals continues to limit the
performance of today's sEMG-based hand movement recognition systems. Researchers have therefore developed
a number of techniques that enhance sEMG-based hand movement recognition via feature engineering. This
research proposes a progressive fusion network (PFNet)
framework that integrates deep feature learning and domain knowledge-guided feature engineering to improve
sEMG-based hand movement recognition accuracy while allowing for a trade-off between computational
complexity and performance. Specifically, a feature learning network and a domain knowledge network are used
to learn high-level feature representations from raw sEMG signals and engineered time-frequency domain
features, respectively. A three-stage progressive fusion strategy is then used to gradually fuse the two networks
together and derive the final decisions. Our proposed PFNet was evaluated through extensive experiments on
five sEMG datasets. The experimental results demonstrated that the proposed PFNet outperformed state-of-the-art
methods in hand movement recognition, achieving average accuracies of 87.8%, 85.4%, 68.3%, 71.7%, and
90.3% on the five datasets, respectively.
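
To make the described architecture concrete, the sketch below shows one way a two-branch model with a three-stage progressive fusion could be organized in PyTorch. The class name PFNetSketch, the layer types and widths, the input shapes, and the exact fusion rule are illustrative assumptions made for this sketch; the abstract does not detail PFNet's internal design.

```python
import torch
import torch.nn as nn


class PFNetSketch(nn.Module):
    """Illustrative two-branch model with a three-stage progressive fusion.

    All layer types, widths, and the fusion rule are assumptions for this
    sketch; the abstract does not specify PFNet's internal structure.
    """

    def __init__(self, num_classes: int, raw_channels: int = 12, eng_dim: int = 36):
        super().__init__()
        # Feature learning branch: operates on raw sEMG windows
        # shaped (batch, raw_channels, time).
        self.raw1 = nn.Sequential(nn.Conv1d(raw_channels, 32, 5, padding=2), nn.ReLU())
        self.raw2 = nn.Sequential(nn.Conv1d(32, 64, 5, padding=2), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool1d(1)
        # Domain knowledge branch: operates on engineered time-frequency
        # features shaped (batch, eng_dim).
        self.eng1 = nn.Sequential(nn.Linear(eng_dim, 32), nn.ReLU())
        self.eng2 = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
        # Three fusion stages that gradually merge the two branches.
        self.fuse1 = nn.Sequential(nn.Linear(32 + 32, 64), nn.ReLU())
        self.fuse2 = nn.Sequential(nn.Linear(64 + 64 + 64, 64), nn.ReLU())
        self.fuse3 = nn.Linear(64, num_classes)

    def forward(self, raw: torch.Tensor, eng: torch.Tensor) -> torch.Tensor:
        r1 = self.raw1(raw)                      # (batch, 32, time)
        r2 = self.raw2(r1)                       # (batch, 64, time)
        r1p = self.pool(r1).squeeze(-1)          # (batch, 32)
        r2p = self.pool(r2).squeeze(-1)          # (batch, 64)
        e1 = self.eng1(eng)                      # (batch, 32)
        e2 = self.eng2(e1)                       # (batch, 64)
        # Stage 1: fuse shallow representations from both branches.
        f1 = self.fuse1(torch.cat([r1p, e1], dim=1))
        # Stage 2: fuse the stage-1 output with deeper representations.
        f2 = self.fuse2(torch.cat([f1, r2p, e2], dim=1))
        # Stage 3: derive the final decision (class logits).
        return self.fuse3(f2)
```

As a usage example under these assumptions, PFNetSketch(num_classes=10) maps a batch of raw windows shaped (8, 12, 200) and engineered features shaped (8, 36) to class logits shaped (8, 10).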