B. C. Gül, S. Nadig, S. Tziampazis, N. Jazdi, and M. Weyrich, “FedMultiEmo: Real-Time Emotion Recognition via Multimodal Federated Learning,” in 2025 5th International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), 2025, pp. 1–8.
Abstract
In-vehicle emotion recognition underpins adaptive driver-assistance systems and, ultimately, occupant safety. However, practical deployment is hindered by (i) modality fragility—poor lighting and occlusions degrade vision-based methods; (ii) physiological variability—heart-rate and skin-conductance patterns differ across individuals; and (iii) privacy risk—centralized training requires transmission of sensitive data. To address these challenges, we present FedMultiEmo, a privacy-preserving framework that fuses two complementary modalities at the decision level: visual features extracted by a Convolutional Neural Network from facial images, and physiological cues (heart rate, electrodermal activity, and skin temperature) classified by a Random Forest. FedMultiEmo builds on three key elements: (1) a multimodal federated learning pipeline with majority-vote fusion, (2) an end-to-end edge-to-cloud prototype on Raspberry Pi clients and a Flower server, and (3) a personalized Federated Averaging scheme that weights client updates by local data volume. Evaluated on FER2013 and a custom physiological dataset, the federated Convolutional Neural Network attains 77% accuracy, the Random Forest 74%, and their fusion 87%, matching a centralized baseline while keeping all raw data local. The developed system converges in 18 rounds, with an average round time of 120 s and a per-client memory footprint below 200 MB. These results indicate that FedMultiEmo offers a practical approach to real-time, privacy-aware emotion recognition in automotive settings.
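The two mechanisms named in the abstract — Federated Averaging weighted by local data volume, and decision-level majority-vote fusion of the two modality classifiers — can be sketched as below. This is a minimal illustration, not the paper's implementation; in particular, with only two voters a majority vote ties whenever the classifiers disagree, and the confidence-based tie-break used here is an assumption (the abstract does not state FedMultiEmo's tie-breaking rule).

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Federated Averaging: each client's parameters are weighted by its
    share of the total number of local training samples."""
    total = float(sum(client_sizes))
    agg = [np.zeros_like(p) for p in client_params[0]]
    for params, n in zip(client_params, client_sizes):
        for i, p in enumerate(params):
            agg[i] += (n / total) * p
    return agg

def fuse_decisions(cnn_probs, rf_probs):
    """Decision-level fusion of the vision (CNN) and physiological (RF)
    classifiers. If the two predicted labels agree, that label wins;
    otherwise we fall back to the more confident classifier
    (an illustrative assumption, not the paper's stated rule)."""
    cnn_label = int(np.argmax(cnn_probs))
    rf_label = int(np.argmax(rf_probs))
    if cnn_label == rf_label:
        return cnn_label
    return cnn_label if cnn_probs[cnn_label] >= rf_probs[rf_label] else rf_label
```

A client holding three times as much data as its peer thus contributes three times as strongly to each aggregated layer.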
S. Tziampazis, B. C. Gül, N. Jazdi, and M. Weyrich, “OracleFed: Latency-Aware Federated Learning via Dynamic Recovery and Causal Aggregation,” IEEE Access, vol. 13, pp. 188064–188083, 2025.
Abstract
As federated learning (FL) continues to expand across wide-area, heterogeneous networks, preserving true training chronology grows complex, particularly in time-sensitive applications where network delays can obscure causal ordering and degrade global-model fidelity. Current aggregation schemes seldom account for temporal dynamics: they either stall on stragglers, discard late updates, or rely on arrival-based heuristics that conflate communication latency with training staleness. In light of this limitation, we introduce OracleFed—Order-Respecting Aggregation with Causality and Latency Equalization, a framework in which training-time lag is decoupled from network-transport delay and elevated as the primary driver of aggregation. Leveraging a hybrid design that integrates dual timestamping with system-wide synchronization, the framework continuously recalibrates the server’s waiting window and re-indexes arrivals by their generation timestamps, thereby preserving causal order. Rather than treating late arrivals as inherently stale, OracleFed discounts only genuine training staleness, allowing high-latency clients to contribute proportionally to their computational freshness. We benchmark the proposed framework against existing synchronous and asynchronous schemes across eight evaluation metrics in a distributed, in-vehicle emotion-recognition setting. Paired $t$- and Wilcoxon signed-rank analyses of twenty Monte-Carlo replicates verify that OracleFed combines the coordination advantages of both paradigms and achieves higher predictive quality, lower staleness, and fairer client participation than the baselines.
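The core idea — dual timestamping so the server discounts only genuine training staleness (round start minus generation time) while ignoring network-transport delay (receive time minus generation time) — can be sketched as follows. The exponential discount and the decay constant `tau` are illustrative assumptions; the abstract does not specify OracleFed's exact weighting function.

```python
import math

def causal_weights(updates, t_round_start, tau=30.0):
    """Compute aggregation weights from generation timestamps.

    Each update is a tuple (n_samples, t_generated, t_received).
    Dual timestamping lets the server separate training staleness
    (t_round_start - t_generated) from transport delay
    (t_received - t_generated); only the former is discounted, so
    high-latency clients still contribute in proportion to their
    computational freshness. Exponential decay with constant tau
    is an assumed discount, not the paper's stated formula."""
    raw = []
    for n, t_gen, t_recv in updates:
        staleness = max(0.0, t_round_start - t_gen)  # genuine training lag
        # transport delay (t_recv - t_gen) deliberately plays no role here
        raw.append(n * math.exp(-staleness / tau))
    total = sum(raw)
    return [w / total for w in raw]
```

Two updates generated at the same moment receive equal weight even if one arrives much later, whereas an update generated an entire round earlier is discounted regardless of how quickly it arrives.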
B. C. Gül, S. Tziampazis, N. Jazdi, and M. Weyrich, “SyncFed: Time-Aware Federated Learning through Explicit Timestamping and Synchronization,” in 2025 IEEE 30th International Conference on Emerging Technologies and Factory Automation (ETFA), 2025, pp. 1–8.
Abstract
As Federated Learning (FL) expands to larger and more distributed environments, consistency in training is challenged by network-induced delays, clock unsynchronicity, and variability in client updates. This combination of factors may contribute to misaligned contributions that undermine model reliability and convergence. Existing methods like staleness-aware aggregation and model versioning address lagging updates heuristically, yet lack mechanisms to quantify staleness, especially in latency-sensitive and cross-regional deployments. In light of these considerations, we introduce SyncFed, a time-aware FL framework that employs explicit synchronization and timestamping to establish a common temporal reference across the system. Staleness is quantified numerically based on exchanged timestamps under the Network Time Protocol (NTP), enabling the server to reason about the relative freshness of client updates and apply temporally informed weighting during aggregation. Our empirical evaluation on a geographically distributed testbed shows that, under SyncFed, the global model evolves within a stable temporal context, resulting in improved accuracy and information freshness compared to round-based baselines devoid of temporal semantics.
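Quantifying staleness from exchanged, NTP-aligned timestamps can be sketched as below. The residual `clock_offset` (client-vs-server offset as estimated via NTP) and the linear freshness weight are illustrative assumptions; the abstract does not give SyncFed's exact weighting scheme.

```python
def staleness_seconds(t_client_generated, t_server_now, clock_offset):
    """Age of a client update in the server's time frame.

    clock_offset is the estimated client-to-server clock offset
    (server time = client time + offset), obtained via NTP; with
    this common temporal reference, staleness becomes a concrete
    number rather than a heuristic."""
    return max(0.0, t_server_now - (t_client_generated + clock_offset))

def freshness_weight(staleness, horizon=60.0):
    """Illustrative temporally informed weight: decays linearly to zero
    over an assumed freshness horizon (not the paper's stated rule)."""
    return max(0.0, 1.0 - staleness / horizon)
```

For example, an update stamped at client time 100 s, received when the server clock reads 130 s, with a 5 s client-behind-server offset, is 25 s stale in the server's frame.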