[116] A. Gepperth, B. Hammer, Incremental learning algorithms and applications, in: European Symposium on Artificial Neural Networks, ESANN, 2016.
[117] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, C.H. Lampert, iCaRL: Incremental classifier and representation learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 2001–2010.
[118] J. Yoon, W. Jeong, G. Lee, E. Yang, S.J. Hwang, Federated continual learning with weighted inter-client transfer, in: International Conference on Machine Learning, PMLR, 2021, pp. 12073–12086.
[119] R. Kemker, M. McClure, A. Abitino, T. Hayes, C. Kanan, Measuring catastrophic forgetting in neural networks, in: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, 2018.
[120] I.J. Goodfellow, M. Mirza, D. Xiao, A. Courville, Y. Bengio, An empirical investigation of catastrophic forgetting in gradient-based neural networks, 2013, arXiv preprint arXiv:1312.6211.
[121] I. Khamassi, M. Sayed-Mouchaweh, M. Hammami, K. Ghédira, Discussion and review on evolving data streams and concept drift adapting, Evol. Syst. 9 (1) (2018) 1–23.
[122] G.I. Webb, R. Hyde, H. Cao, H.L. Nguyen, F. Petitjean, Characterizing concept drift, Data Min. Knowl. Discov. 30 (4) (2016) 964–994.
[123] Y. Zhang, Q. Yang, A survey on multi-task learning, IEEE Trans. Knowl. Data Eng. (2021).
[124] R. Caruana, Multitask learning, Mach. Learn. 28 (1) (1997) 41–75.
[125] M.S. Hammoodi, F. Stahl, A. Badii, Real-time feature selection technique with concept drift detection using adaptive micro-clusters for data stream mining, Knowl.-Based Syst. 161 (2018) 205–239.
[126] A. Dries, U. Rückert, Adaptive concept drift detection, Stat. Anal. Data Min. ASA Data Sci. J. 2 (5–6) (2009) 311–327.
[127] J. Shao, Z. Ahmadi, S. Kramer, Prototype-based learning on concept-drifting data streams, in: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2014, pp. 412–421.
[128] J. Gama, P. Medas, G. Castillo, P. Rodrigues, Learning with drift detection, in: Brazilian Symposium on Artificial Intelligence, Springer, 2004, pp. 286–295.
[129] M. Baena-García, J. del Campo-Ávila, R. Fidalgo, A. Bifet, R. Gavalda, R. Morales-Bueno, Early drift detection method, in: Fourth International Workshop on Knowledge Discovery from Data Streams, vol. 6, 2006, pp. 77–86.
[130] D.M. Manias, I. Shaer, L. Yang, A. Shami, Concept drift detection in federated networked systems, 2021, arXiv preprint arXiv:2109.06088.
[131] A. Bifet, R. Gavalda, Adaptive learning from evolving data streams, in: International Symposium on Intelligent Data Analysis, Springer, 2009, pp. 249–260.
[132] A. Bifet, G. Holmes, B. Pfahringer, R. Gavalda, Improving adaptive bagging methods for evolving data streams, in: Asian Conference on Machine Learning, Springer, 2009, pp. 23–37.
[133] I. Frias-Blanco, J. del Campo-Ávila, G. Ramos-Jimenez, R. Morales-Bueno, A. Ortiz-Diaz, Y. Caballero-Mota, Online and non-parametric drift detection methods based on Hoeffding's bounds, IEEE Trans. Knowl. Data Eng. 27 (3) (2014) 810–823.
[134] K. Nar, O. Ocal, S.S. Sastry, K. Ramchandran, Cross-entropy loss and low-rank features have responsibility for adversarial examples, 2019, arXiv preprint arXiv:1901.08360.
[135] L. Feng, S. Shu, Z. Lin, F. Lv, L. Li, B. An, Can cross entropy loss be robust to label noise? in: IJCAI, 2020, pp. 2206–2212.
[136] Y. Ho, S. Wookey, The real-world-weight cross-entropy loss function: Modeling the costs of mislabeling, IEEE Access 8 (2019) 4806–4813.
[137] Z. Zhang, M.R. Sabuncu, Generalized cross entropy loss for training deep neural networks with noisy labels, in: 32nd Conference on Neural Information Processing Systems, NeurIPS, 2018.
[138] X. Li, L. Yu, D. Chang, Z. Ma, J. Cao, Dual cross-entropy loss for small-sample fine-grained vehicle classification, IEEE Trans. Veh. Technol. 68 (5) (2019) 4204–4212.
[139] D. Rolnick, A. Ahuja, J. Schwarz, T.P. Lillicrap, G. Wayne, Experience replay for continual learning, 2018, arXiv preprint arXiv:1811.11682.
[140] A. Chaudhry, M. Rohrbach, M. Elhoseiny, T. Ajanthan, P.K. Dokania, P.H. Torr, M. Ranzato, Continual learning with tiny episodic memories, 2019.
[141] H. Shin, J.K. Lee, J. Kim, J. Kim, Continual learning with deep generative replay, 2017, arXiv preprint arXiv:1705.08690.
[142] G.M. Van de Ven, A.S. Tolias, Generative replay with feedback connections as a general strategy for continual learning, 2018, arXiv preprint arXiv:1809.10635.
[143] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A.A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al., Overcoming catastrophic forgetting in neural networks, Proc. Natl. Acad. Sci. 114 (13) (2017) 3521–3526.
[144] J. Schwarz, W. Czarnecki, J. Luketina, A. Grabska-Barwinska, Y.W. Teh, R. Pascanu, R. Hadsell, Progress & compress: A scalable framework for continual learning, in: International Conference on Machine Learning, PMLR, 2018, pp. 4528–4537.
[145] H. Ritter, A. Botev, D. Barber, Online structured Laplace approximations for overcoming catastrophic forgetting, 2018, arXiv preprint arXiv:1805.07810.
[146] J. Serra, D. Suris, M. Miron, A. Karatzoglou, Overcoming catastrophic forgetting with hard attention to the task, in: International Conference on Machine Learning, PMLR, 2018, pp. 4548–4557.
[147] J. Yoon, E. Yang, J. Lee, S.J. Hwang, Lifelong learning with dynamically expandable networks, 2017, arXiv preprint arXiv:1708.01547.
[148] X. He, J. Sygnowski, A. Galashov, A.A. Rusu, Y.W. Teh, R. Pascanu, Task agnostic continual learning via meta learning, 2019, arXiv preprint arXiv:1906.05201.
[149] A. Mallya, D. Davis, S. Lazebnik, Piggyback: Adapting a single network to multiple tasks by learning to mask weights, in: Proceedings of the European Conference on Computer Vision, ECCV, 2018, pp. 67–82.
[150] A. Mallya, S. Lazebnik, PackNet: Adding multiple tasks to a single network by iterative pruning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7765–7773.
[151] M. Masana, T. Tuytelaars, J. van de Weijer, Ternary feature masks: zero-forgetting for task-incremental learning, 2020, arXiv preprint arXiv:2001.08714.
[152] A.A. Rusu, N.C. Rabinowitz, G. Desjardins, H. Soyer, J. Kirkpatrick, K. Kavukcuoglu, R. Pascanu, R. Hadsell, Progressive neural networks, 2016, arXiv preprint arXiv:1606.04671.
[153] M.G. Bellemare, Y. Naddaf, J. Veness, M. Bowling, The arcade learning environment: An evaluation platform for general agents, J. Artificial Intelligence Res. 47 (2013) 253–279.
[154] Z. Li, D. Hoiem, Learning without forgetting, IEEE Trans. Pattern Anal. Mach. Intell. 40 (12) (2017) 2935–2947.
[155] C. Wah, S. Branson, P. Welinder, P. Perona, S. Belongie, The Caltech-UCSD Birds-200-2011 dataset, 2011.
[156] M.-E. Nilsback, A. Zisserman, Automated flower classification over a large number of classes, in: 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, IEEE, 2008, pp. 722–729.