Abstract: Continuous and scalable monitoring of cognitive and affective states is critical for the early detection of changes in brain health, but is currently limited by the burden of active assessments. This study investigated the potential of consumer-grade wearable and mobile technologies to passively predict 21 cognitive and mental health outcomes in real-world conditions. We collected data from 82 cognitively healthy adults, passively measuring behaviour, physiology, and environmental exposures longitudinally for 10 months. Active data were gathered in four waves using validated patient- and performance-reported outcomes. Data quality assurance involved a filtering step that resulted in average wearable data coverage of 96% per day. Artificial Intelligence-powered prediction was applied, and performance was assessed using subject- and wave-dependent cross-validation. Cognitive and affective outcomes were predicted with low scaled errors. Patient-reported outcomes were more predictable than performance-based ones. Environmental and physiological metrics emerged as the most informative predictors. Passive multimodal data captured meaningful variability in cognition and affect, demonstrating the feasibility of low-burden, scalable approaches to continuous brain-health monitoring. Feature-importance analyses suggested that environmental exposures better explained inter-individual differences, whereas physiological and behavioural rhythms captured within-person changes. These findings highlight the potential of everyday technologies for population-level tracking of brain health and of deviations from expected trajectories.
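The abstract does not detail the evaluation protocol beyond naming subject- and wave-dependent cross-validation. As an illustrative sketch only (not the study's code), one common way to make cross-validation subject-aware is to partition subjects, rather than individual samples, into folds, so a model is never evaluated on data from a subject it saw during training; all function and variable names below are assumptions:

```python
import numpy as np

def subject_wise_splits(subject_ids, n_splits=4, seed=0):
    """Yield (train_idx, test_idx) pairs where no subject appears in both sets.

    Minimal sketch of subject-aware cross-validation: subjects (not samples)
    are shuffled and partitioned into folds, so each model is evaluated only
    on people whose data were absent from its training fold.
    """
    rng = np.random.default_rng(seed)
    subject_ids = np.asarray(subject_ids)
    subjects = np.unique(subject_ids)
    rng.shuffle(subjects)
    for held_out in np.array_split(subjects, n_splits):
        test_mask = np.isin(subject_ids, held_out)
        yield np.where(~test_mask)[0], np.where(test_mask)[0]
```

A wave-dependent split would follow the same pattern with assessment-wave labels in place of subject identifiers.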

Abstract: Ego depletion refers to the idea that self-control and decision-making abilities become diminished after engaging in prolonged or demanding cognitive tasks. In research with human participants, states such as stress are usually assessed by asking participants directly, using ecological momentary assessment (EMA). However, because those answers are self-reported, they are prone to bias when participants lack the commitment or effort to assess their stress levels as accurately as they can. This paper investigates the relationship between ego depletion and stress-level reporting, focusing specifically on bias in self-reported stress levels. The hypothesis tested in this study is that higher levels of ego depletion lead to greater bias in self-reported stress levels. To examine this hypothesis, we analyzed data collected by Berrocal et al. using EMA. The dataset included self-reported stress levels from the individuals being assessed as well as stress-level reports from their peers. Data were collected using a mobile app that captured passive smartphone usage data alongside EMA responses. The hypothesis was tested using artificial intelligence algorithms. The results partially confirmed the initial hypothesis, revealing that lower ego depletion was associated with reduced bias in stress-level reporting. Notably, when analyzing data from working days, the morning period showed the least bias compared to the afternoon. These findings suggest that individuals have greater self-capacity and willingness to provide accurate stress-level assessments during the morning hours. Challenges encountered in the research included limitations related to the holidays considered and potential confounders such as flexible time schedules and post-lunch sleepiness. The data are released publicly, allowing further examination and replication of the findings. Future research is encouraged to expand upon these conclusions.
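The abstract does not state how reporting bias was operationalized. One simple, hypothetical operationalization, used here purely for illustration, is the mean absolute gap between self-reported and peer-reported stress on the same occasions; the paper's actual definition may differ:

```python
import statistics

def reporting_bias(self_reports, peer_reports):
    """Mean absolute disagreement between self- and peer-reported stress.

    Hypothetical bias measure for illustration: each pair (self, peer)
    corresponds to the same EMA prompt, and larger values indicate that
    self-reports drift further from the peer reference.
    """
    return statistics.mean(abs(s - p) for s, p in zip(self_reports, peer_reports))
```

With such a measure, morning and afternoon EMA responses could be pooled separately and their bias values compared directly.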

Abstract:
Background
Heart rate (HR), especially at nighttime, is an important biomarker for cardiovascular health. It is known to be influenced by overall physical fitness as well as by daily life physical or psychological stressors that cause physiological arousal, such as exercise, insufficient sleep, excess alcohol, certain foods, socialization, or air travel. However, the exact mechanisms by which these stressors affect nighttime HR are unclear and may be highly idiographic (i.e., individual-specific). A single-case or “n-of-1” observational study (N1OS) is useful for exploring such suggested effects by examining each subject’s exposure to both stressors and baseline conditions, thereby characterizing suggested effects specific to that individual.
Objective
Our objective was to test and generate individual-specific N1OS hypotheses about the suggested effects of daily life stressors on nighttime HR. As an N1OS, this study provides conclusions for each participant and thus does not require a representative population.
Methods
We studied three healthy, nonathlete individuals, collecting data for up to four years. Additionally, we evaluated model-twin randomization (MoTR), a novel Monte Carlo method that facilitates the discovery of personalized interventions on stressors in daily life.
Results
We found that physical activity can increase nighttime heart rate amplitude, whereas no strong conclusions could be drawn about its suggested effect on total sleep time. Self-reported exercise and yoga were associated with increased, and self-reported stress with decreased, average nighttime heart rate.
Conclusions
This study implemented the MoTR method to evaluate the suggested effects of daily stressors on nighttime heart rate, sleep time, and physical activity in an individualized way, via the n-of-1 approach. A Python implementation of MoTR is freely available.
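MoTR itself fits a "model twin" of the individual and randomizes interventions within that model; its details are in the paper and its public implementation. As a rough illustration of the Monte Carlo randomization logic behind such n-of-1 analyses, and explicitly not the MoTR algorithm, the sketch below runs a permutation test on one person's time series, comparing nighttime HR on stressor-exposed versus unexposed nights against a shuffled-label null distribution (all names are assumptions):

```python
import numpy as np

def n_of_1_permutation_test(hr, exposed, n_iter=10000, seed=0):
    """Monte Carlo permutation test for a single individual's time series.

    hr:      nightly average heart-rate values for one person
    exposed: boolean array, True on nights preceded by the stressor
    Returns the observed mean difference (exposed - unexposed) and a
    two-sided p-value obtained by shuffling the exposure labels.
    """
    rng = np.random.default_rng(seed)
    hr = np.asarray(hr, dtype=float)
    exposed = np.asarray(exposed, dtype=bool)
    obs = hr[exposed].mean() - hr[~exposed].mean()
    null = np.empty(n_iter)
    for i in range(n_iter):
        perm = rng.permutation(exposed)  # break any real exposure-HR link
        null[i] = hr[perm].mean() - hr[~perm].mean()
    p_value = (np.abs(null) >= abs(obs)).mean()
    return obs, p_value
```

Note that a plain permutation test ignores autocorrelation in daily time series, which is one reason model-based approaches such as MoTR are preferable for real n-of-1 data.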

Abstract: Atrial Fibrillation (AF) is a type of arrhythmia characterized by irregular heartbeats. It has four types, two of which are difficult to diagnose using standard techniques such as the electrocardiogram (ECG). However, because smart wearables are increasingly commodity equipment, there are several ways of detecting and predicting AF episodes using only an ECG exam, allowing physicians an easier diagnosis. By searching several databases, this study presents a review of the articles published in the last ten years, focusing on those that used Artificial Intelligence (AI) for the prediction of AF. Only twelve studies were selected for this systematic review: three applied deep learning techniques (25%), six used machine learning methods (50%), and three applied general artificial intelligence models (25%). In conclusion, this study revealed that the prediction of AF is still an underdeveloped field in the context of AI; deep learning techniques are increasing accuracy but are not applied as frequently as would be expected. Moreover, more than half of the selected studies were published since 2016, corroborating that this topic is recent and has high potential for additional research.