In contrast to the batch setting considered by most machine learning approaches, stream learning builds models from data that arrive one sample at a time. Such models must adapt flexibly to concept drift, i.e., changes in the underlying data distribution. Most approaches react to drift by discarding old information entirely. In some settings, however, not all data points are affected by the drift. The goal of this project is to explore whether a selective mechanism that retains non-drifting samples while rejecting drifting ones is beneficial.
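To make the idea concrete, the following is a minimal sketch of selective forgetting on a one-dimensional stream. All names (`selective_forget`, the mean-shift drift check, the 2-standard-deviation retention criterion) are illustrative assumptions, not a method from the cited literature: when a crude drift test fires, the buffer keeps only the old samples that remain consistent with the post-drift distribution instead of being cleared wholesale.

```python
import random
import statistics

def selective_forget(buffer, recent, k=2.0):
    """Keep only buffered samples consistent with the recent (post-drift)
    data: within k standard deviations of the recent mean.
    (Hypothetical retention criterion, for illustration only.)"""
    mu = statistics.mean(recent)
    sigma = statistics.stdev(recent)
    return [x for x in buffer if abs(x - mu) <= k * sigma]

# Simulated 1-D stream with an abrupt mean shift (concept drift) at t = 200.
random.seed(0)
stream = [random.gauss(0.0, 1.0) for _ in range(200)] + \
         [random.gauss(5.0, 1.0) for _ in range(200)]

buffer = []
for x in stream:
    buffer.append(x)
    # Crude drift check: compare the mean of the newest 30 samples
    # against the mean of everything older.
    if len(buffer) >= 60:
        old, new = buffer[:-30], buffer[-30:]
        if abs(statistics.mean(new) - statistics.mean(old)) > 2.0:
            # Selective, not full, forgetting.
            buffer = selective_forget(buffer, new)

print(len(buffer), round(statistics.mean(buffer), 1))
```

After the shift, the buffer ends up dominated by post-drift samples while any pre-drift samples that happen to agree with the new distribution may survive; a full-forgetting baseline would instead drop the entire buffer on detection.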
Literature
Lu, Jie, et al. “Learning under Concept Drift: A Review.” IEEE Transactions on Knowledge and Data Engineering, 2018. https://doi.org/10.1109/TKDE.2018.2876857.
Gama, João, et al. “A Survey on Concept Drift Adaptation.” ACM Computing Surveys, vol. 46, no. 4, Mar. 2014, pp. 44:1–44:37. https://doi.org/10.1145/2523813.
Hinder, Fabian, et al. “One or Two Things We Know about Concept Drift—a Survey on Monitoring in Evolving Environments. Part B: Locating and Explaining Concept Drift.” Frontiers in Artificial Intelligence, vol. 7, July 2024, p. 1330258. https://doi.org/10.3389/frai.2024.1330258.