Sensor integration for remote health monitoring is a growing area of research, particularly as a means of supporting independent living among older adults. Chronic respiratory conditions require careful monitoring and evaluation to maintain health. In this work, audio-based methods for monitoring respiratory sounds (e.g., cough) are presented as a means of identifying abrupt changes in health status. Three main classification tasks (C1: wet cough vs. dry cough vs. whooping cough vs. restricted breathing; C2: wet cough vs. dry cough; and C3: cough vs. restricted breathing) are evaluated using three approaches: classical machine learning, transfer learning based on standard image classifiers, and transfer learning based on audio classifiers. For the image-based transfer learning approach, several audio visualization methods were considered, including an aggregate-image method that combined the three best-performing visualization methods. Overall, the aggregate-based image classifier achieved the best performance on C1, C2, and C3, with weighted F1-scores of 0.87, 0.87, and 0.86, respectively. In light of the COVID-19 pandemic, a novel COVID-19 spontaneous (reflex) cough database (NoCoCoDa) was also collected from public media interviews. Finally, performance factors associated with external sources that may affect sound recordings are also investigated. The respiratory monitoring methods described in this thesis are designed to be expanded upon in future work, eventually leading to a respiratory sound measurement system embedded in a smart home environment aimed at supporting older adults in aging independently.