Table 3
Summary of the datasets employed in the works presented in Section 3.3.1. Asterisks indicate that a dataset has been modified in a particular way, making a fair comparison between the works impossible. Some of the datasets mentioned have not been referenced so far: Bing-caltech256 [102], COREL5000 [103], ImageNet [104] and NYUD [105].
Article   Datasets used in experiments
[69]      MNIST + SVHN + USPS
[71]      Office-31; Bing-caltech256
[74]      COREL5000; Trecvid2005 (a)
[75]      MNIST*; Olivetti FR (b)
[79]      MNIST + SVHN
[80]      Office-31; ImageNet; VisDA2017
[82]      Office-31; Image CLEF-DA
[83]      Digit5; Office-31
[84]      MNIST + MNIST-M + USPS; VisDA2017
[89]      MNIST + SVHN + USPS; Office-31; NYUD
[90]      Office-31; Image CLEF-DA (c)
[91]      Office-Home; VisDA2017
[92]      MNIST + SVHN + USPS; Office-31

(a) Available at http://www-nlpir.nist.gov/projects/trecvid.
(b) Available at http://www.uk.research.att.com/facedatabase.html.
(c) Available at http://imageclef.org/2014/adaptation.
results of two of the works stand out [71,75]. They present a comprehensive variety of experiments and contrast their results with other well-known methods, obtaining significantly better error rates and accuracies. On the other hand, the most outstanding results achieved with Domain Adaptation methods are those of [84,91,92]. The first of them, [84], proposes the SimNet method and experimentally compares it with other methods such as DAN, RTN and a baseline over the MNIST, Office-31 and VisDA2017 datasets, improving on the accuracy obtained by every other method in all three cases. Concerning [91,92], both employ the Office-31 dataset and obtain impressive results compared to the other methods they test against.
Besides the strategies discussed so far, there are other methods for dealing with domain shifts. One of them is [101], which also relies on the notions of Source and Target Domains, but factorizes the input space in search of a Grassmann manifold that fits all of the data samples. Training is then performed only on that manifold, instead of on the whole feature space; a simplified subspace-based sketch of this idea is given below. Lastly, some of the federated personalization strategies explained in Section 3.2 can also deal with the kind of heterogeneity discussed in this section [40,41,45,46,52,63].
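As a rough illustration of training on a low-dimensional subspace instead of the full feature space, the sketch below uses PCA bases as stand-ins for points on a Grassmann manifold and aligns the source subspace with the target one before fitting a classifier. This is a generic subspace-alignment sketch, not the method of [101]; the function name, the subspace dimension and the stand-in data are assumptions made purely for illustration.

```python
# Simplified subspace-based domain adaptation sketch (illustrative only;
# NOT the Grassmann-manifold method of [101]). PCA bases approximate
# each domain's dominant subspace, the source subspace is aligned with
# the target one, and a classifier is trained on the projected data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression


def subspace_align_fit(Xs, ys, Xt, dim=20):
    """Fit a classifier on source data projected into a target-aligned subspace.

    Xs, ys: labelled source-domain samples; Xt: unlabelled target-domain samples.
    dim: assumed subspace dimension (hyperparameter chosen for illustration).
    """
    # Basis of each domain's dominant subspace (columns are principal directions).
    Ps = PCA(n_components=dim).fit(Xs).components_.T   # (features, dim)
    Pt = PCA(n_components=dim).fit(Xt).components_.T   # (features, dim)

    # Linear map that aligns the source basis with the target basis.
    M = Ps.T @ Pt                                       # (dim, dim)

    # Project source samples through the aligned source subspace,
    # and target samples through their own subspace.
    Xs_proj = Xs @ Ps @ M
    Xt_proj = Xt @ Pt

    clf = LogisticRegression(max_iter=1000).fit(Xs_proj, ys)
    return clf, Xt_proj


# Usage with random stand-in data (placeholders, not a real benchmark):
rng = np.random.default_rng(0)
Xs, ys = rng.normal(size=(200, 50)), rng.integers(0, 2, size=200)
Xt = rng.normal(loc=0.5, size=(150, 50))
clf, Xt_proj = subspace_align_fit(Xs, ys, Xt)
target_predictions = clf.predict(Xt_proj)
```

In this simplified view, the classifier never sees the full feature space: both domains are reduced to the same low-dimensional subspace, which is the intuition behind restricting training to a manifold shared by all data samples.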
3.3.2. Changes in the behaviour across clients
Differences in the behaviour of the clients refer to discrepancies in their conditional probabilities $P(y|x)$. A variation of this nature means that, at least for some data samples, the correct output is not the same for all of the clients. More formally:
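a plausible statement of this condition, sketched here under the assumption that each client $i$ owns data drawn from its own joint distribution $P_i(x, y)$, is that there exist two clients $i \neq j$ and at least one input $x$ for which

$$P_i(y \mid x) \neq P_j(y \mid x),$$

even when the marginal distributions $P_i(x)$ and $P_j(x)$ may coincide.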