Abstract: Wearable devices are a useful and widely used source of continuous, temporally dependent data. In contrast to the traditional clinical environment, these devices allow time series data to be collected in an individual’s daily living environment. However, missing data can occur while using them. Many techniques have been applied to fill such data gaps; nonetheless, missing time series data poses extra challenges, such as maintaining the temporal dependency. In this article, we addressed the forecasting of sleep tracker data (sleeping heart rate (HR) and time asleep) for 2 main reasons: (1) to design models capable of accurately forecasting missing data from those devices, and (2) to apply those models to empower sleep interventions that may increase sleep quality by forecasting future sleep events. We collected wearable data over 290 days (per individual) from 12 participants using a smartwatch and made this dataset publicly available. We then explored several hyperparameters of 2 Recurrent Neural Networks (RNNs), Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU). We further elaborated and compared the performance of 3 approaches to training those RNNs. Although the approaches performed similarly, slightly more accurate results were obtained after training a GRU network on the entire population’s dataset, which was able to forecast the average, minimum, and maximum sleeping HR with a root-mean-squared error (RMSE) of 4.4 (± 1.4), 4.9 (± 2.6), and 12.1 (± 4.0) beats per minute, respectively. However, the total time asleep could not be forecast with low error.
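As a rough illustration of the kind of model described above, the sketch below trains a GRU to forecast the next night’s average sleeping HR from a sliding window of previous nights and reports the hold-out RMSE. It is not the authors’ exact pipeline: the synthetic data, window length, network size, and training settings are all illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the authors' pipeline): forecast the next
# night's average sleeping HR from a 7-night sliding window using a GRU.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import GRU, Dense

def make_windows(series, window=7):
    """Turn a 1-D nightly series into (samples, window, 1) inputs and next-night targets."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    return np.array(X)[..., np.newaxis], np.array(y)

# Synthetic stand-in for one participant's ~290 nights of average sleeping HR.
rng = np.random.default_rng(0)
nightly_hr = 55 + 5 * np.sin(np.arange(290) / 7) + rng.normal(0, 2, 290)

X, y = make_windows(nightly_hr, window=7)
split = int(0.8 * len(X))  # chronological train/test split

model = Sequential([
    GRU(32, input_shape=(X.shape[1], 1)),  # layer size is an assumption
    Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=30, batch_size=16, verbose=0)

rmse = np.sqrt(np.mean((model.predict(X[split:], verbose=0).ravel() - y[split:]) ** 2))
print(f"Hold-out RMSE: {rmse:.1f} bpm")
```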
Abstract:
Background
Heart rate (HR), especially at nighttime, is an important biomarker for cardiovascular health. It is known to be influenced by overall physical fitness, as well as by daily life physical or psychological stressors that cause physiological arousal of the body, such as exercise, insufficient sleep, excess alcohol, certain foods, socialization, or air travel. However, the exact mechanisms by which these stressors affect nighttime HR are unclear and may be highly idiographic (i.e., individual-specific). A single-case or “n-of-1” observational study (N1OS) is useful for exploring such suggested effects by examining each subject’s exposure to both stressors and baseline conditions, thereby characterizing suggested effects specific to that individual.
Objective
Our objective was to test and generate individual-specific N1OS hypotheses about the suggested effects of daily life stressors on nighttime HR. As an N1OS, this study draws conclusions for each participant individually and thus does not require a representative population.
Methods
We studied three healthy, nonathlete individuals, collecting data for up to four years. Additionally, we evaluated model-twin randomization (MoTR), a novel Monte Carlo method that facilitates the discovery of personalized interventions on stressors in daily life.
Results
We found that physical activity can increase nighttime heart rate amplitude, whereas no strong conclusions could be drawn about its suggested effect on total sleep time. Self-reported exercise and yoga were associated with an increased average nighttime heart rate, whereas self-reported stress was associated with a decrease.
Conclusions
This study implemented the MoTR method to evaluate the suggested effects of daily stressors on nighttime heart rate, sleep time, and physical activity in an individualized way, via the N-of-1 approach. A Python implementation of MoTR is freely available.
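To convey the general idea behind a model-twin randomization analysis, the sketch below fits a simple lagged outcome model to one participant’s data and then runs Monte Carlo “twin” trials in which the exposure is randomly assigned each night. This is only an illustrative sketch under assumed variable names, a linear model, and synthetic effect sizes; it is not the authors’ released MoTR implementation.

```python
# Illustrative sketch of a Monte Carlo "model twin" analysis (assumptions throughout;
# not the released MoTR code): fit an outcome model, then simulate randomized trials.
import numpy as np

rng = np.random.default_rng(1)
n = 365

# Synthetic single-participant data: nightly exercise indicator and nighttime HR
# with some night-to-night carryover.
exercise = rng.integers(0, 2, n)
hr = np.empty(n)
hr[0] = 58
for t in range(1, n):
    hr[t] = 20 + 0.65 * hr[t - 1] + 2.0 * exercise[t] + rng.normal(0, 2)

# Step 1: fit an outcome model with a lagged term (ordinary least squares).
X = np.column_stack([np.ones(n - 1), exercise[1:], hr[:-1]])
y = hr[1:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid_sd = np.std(y - X @ beta)

# Step 2: Monte Carlo twin trials, each with a freshly randomized exposure sequence.
n_sims = 1000
effects = np.empty(n_sims)
for s in range(n_sims):
    assigned = rng.integers(0, 2, n)            # randomized nightly exposure for the twin
    sim = np.empty(n)
    sim[0] = hr[0]
    for t in range(1, n):
        sim[t] = beta[0] + beta[1] * assigned[t] + beta[2] * sim[t - 1] + rng.normal(0, resid_sd)
    effects[s] = sim[assigned == 1].mean() - sim[assigned == 0].mean()

lo, hi = np.percentile(effects, [2.5, 97.5])
print(f"Twin-trial effect of exercise on nighttime HR: {effects.mean():.2f} bpm ({lo:.2f} to {hi:.2f})")
```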
Abstract: Atrial Fibrillation (AF) is a type of arrhythmia characterized by irregular heartbeats; of its four types, two are difficult to diagnose using standard techniques such as the Electrocardiogram (ECG). However, because smart wearables are increasingly commodity equipment, there are several ways of detecting and predicting AF episodes using only an ECG exam, making diagnosis easier for physicians. By searching several databases, this study presents a review of the articles published in the last ten years, focusing on those that reported the use of Artificial Intelligence (AI) for the prediction of AF. Only twelve studies were selected for this systematic review: three applied deep learning techniques (25%), six used machine learning methods (50%), and three applied general artificial intelligence models (25%). In conclusion, this study revealed that the prediction of AF is still an underdeveloped field in the context of AI; deep learning techniques are increasing accuracy but are not applied as frequently as would be expected. Also, more than half of the selected studies were published since 2016, corroborating that this topic is very recent and has high potential for further research.