People with mobility-related disabilities can improve their quality of life through rehabilitation and assistive devices. To serve users effectively, such robots should recognize the user's intention and adapt to their needs. This work studies a surface electromyography (sEMG)-based lower-limb intention-detection model that augments human-robot interaction (HRI) by detecting a subject's intended walking direction before or during walking. Ten classical machine-learning models with Subject-Exclusive and Subject-Generalized strategies, and a deep convolutional neural network with a transfer-learning methodology (Subject-Adaptive), are employed to detect direction intentions and to evaluate inter-subject robustness in one knee/foot-gesture scenario and three walking-related scenarios. In each scenario, sEMG signals are collected from eight muscles of nine subjects during five trials of at least nine distinct gestures/activities. Finally, the integration of the model into an HRI controller is studied in a computer-simulated environment using IMU and sEMG data collected from the subjects.
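The Subject-Adaptive strategy mentioned above can be illustrated with a minimal sketch: a model pretrained on data pooled from source subjects is fine-tuned on a small amount of data from a new target subject, while the feature-extraction stage stays frozen. Everything below is hypothetical and simplified for illustration; a fixed random projection with a softmax head stands in for the paper's CNN, and the synthetic arrays stand in for real sEMG features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 8 sEMG channels, 4 direction classes
# (e.g. forward, backward, left, right). Dimensions are illustrative.
n_channels, n_feat, n_classes = 8, 16, 4
backbone = rng.normal(size=(n_channels, n_feat))  # "frozen backbone"

def features(x):
    # Frozen feature extractor: fixed projection + ReLU,
    # not updated during subject adaptation.
    return np.maximum(x @ backbone, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_head(X, y, W=None, lr=0.1, epochs=200):
    # Gradient descent on cross-entropy, updating only the head weights.
    F = features(X)
    if W is None:
        W = np.zeros((n_feat, n_classes))
    onehot = np.eye(n_classes)[y]
    for _ in range(epochs):
        p = softmax(F @ W)
        W -= lr * F.T @ (p - onehot) / len(X)
    return W

# "Source subjects": pooled pretraining data (synthetic labels).
Xs = rng.normal(size=(400, n_channels))
ys = (features(Xs) @ rng.normal(size=(n_feat, n_classes))).argmax(axis=1)
W_pre = train_head(Xs, ys)

# "Target subject": a few labeled trials with a shifted input-label
# relation, mimicking inter-subject variability; fine-tune the head
# starting from the pretrained weights (the subject-adaptive step).
Xt = rng.normal(size=(40, n_channels))
yt = (features(Xt) @ rng.normal(size=(n_feat, n_classes))).argmax(axis=1)
W_adapt = train_head(Xt, yt, W=W_pre.copy(), lr=0.05, epochs=100)

acc = (softmax(features(Xt) @ W_adapt).argmax(axis=1) == yt).mean()
```

Initializing the target-subject head from the pretrained weights, rather than from scratch, is the essence of the transfer-learning step; with a real CNN the same pattern applies by freezing convolutional layers and fine-tuning only the final layers.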