a (e.g. artists), we use the similar artists for the artists in the set I_a as provided by the social website. For the lists of similar instances, we discard the artists that are not in I_a.
To create a ground truth for the ranked tags applicable to the artists in I_a, we download the top tags for each artist and compute the normalized tags as described in Section 6.3.2. The set of known tags I_g is constructed by collecting all normalized tags applied to the I_a artists.
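As an illustration of this construction, the sketch below builds the per-artist ranked tag lists and the set I_g; fetch_top_tags and normalize are placeholders for the actual data collection and for the normalization of Section 6.3.2, not the code used in the experiments.

    def build_ground_truth(artists, fetch_top_tags, normalize):
        # artists: the set I_a of instances (e.g. artists)
        # fetch_top_tags(artist): ranked list of raw tags from the social website (placeholder)
        # normalize(tag): normalized form of a tag, cf. Section 6.3.2 (placeholder)
        ranked_tags = {}
        known_tags = set()                      # the set of known tags I_g
        for artist in artists:
            seen, ordered = set(), []
            for tag in fetch_top_tags(artist):
                norm = normalize(tag)
                if norm not in seen:            # keep the first (highest ranked) occurrence
                    seen.add(norm)
                    ordered.append(norm)
            ranked_tags[artist] = ordered       # per-artist ground-truth ranking
            known_tags.update(ordered)
        return ranked_tags, known_tags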
Proposed Evaluation Measures
For a tag or an artist t_i given by the ground truth, g_a(t_i) denotes the rank of t_i with respect to artist a. Hence, g_a(t_i) − 1 tags or artists are considered to be more applicable (or similar) to a than t_i. In contrast, with r_a(t_i) we denote the rank of t_i for a as computed by the method to be evaluated.
We propose two evaluation measures. The first is based on the traditional information retrieval measures precision and recall; the second evaluates the quality of the ranking itself.
Precision and Recall. We select the set S_n of the top n tags for artist a in the ground truth and evaluate precision and recall of the computed ordered list L_m of the m most applicable tags according to the tagging approach to be evaluated.
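As a minimal sketch of this measure, assuming the per-artist ground-truth and computed tag lists are already ordered by rank:

    def precision_recall(ground_truth, computed, n, m):
        # S_n: top n tags for the artist in the ground truth
        # L_m: top m tags according to the evaluated tagging approach
        s_n = set(ground_truth[:n])
        l_m = computed[:m]
        hits = sum(1 for tag in l_m if tag in s_n)
        precision = hits / m if m else 0.0
        recall = hits / n if n else 0.0
        return precision, recall

    # hypothetical example for a single artist
    gt = ["rock", "classic rock", "pop", "60s", "british"]
    pred = ["rock", "pop", "jazz"]
    print(precision_recall(gt, pred, n=5, m=3))   # (0.666..., 0.4)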
Ranking. We not only consider the retrieval of the n top-ranked tags in the ground truth to be important; we also want to evaluate the ranking itself, i.e. the correlation between the ranking in the ground truth g_a(t_i) and the computed ranking r_a(t_i). We evaluate the ranking for each artist using a standard measure, Spearman's Rank Correlation Coefficient [Kendall, 1975], as follows.
ρ(a) = 1 − (6 Σ_i (g_a(t_i) − r_a(t_i))²) / (n(n² − 1))
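For illustration, ρ(a) can be computed directly from the two rank assignments; the sketch below assumes that both rankings cover the same n tags and contain no ties, which is the setting in which the formula applies.

    def spearman_rho(g, r):
        # g[t], r[t]: rank of tag t in the ground truth and in the computed ranking
        assert set(g) == set(r), "both rankings must cover the same tags"
        n = len(g)
        d_squared = sum((g[t] - r[t]) ** 2 for t in g)
        return 1 - 6 * d_squared / (n * (n ** 2 - 1))

    # hypothetical example
    g_a = {"rock": 1, "pop": 2, "jazz": 3, "blues": 4}
    r_a = {"pop": 1, "rock": 2, "jazz": 3, "blues": 4}
    print(spearman_rho(g_a, r_a))   # 0.8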
We have now proposed a test set of artists and an algorithm to dynamically
create a ground truth. Such ground truths are used in Section 6.4.3 to evaluate the
tagging of musical artists and books.
6.4 Experimental Results
In this section, we focus on experiments conducted on each of the three problems described at the beginning of this chapter.
6.4.1 Identifying Relatedness between Instances
We first focus on the extraction of the relatedness of instances from the web. We will apply the methods as discussed in Section 6.2.1 and return to the historical persons extracted (Chapter 4). Subsequently we discuss the identification of relatedness within a set of musical artists.

"like [person] and [person]"
"such as [person] and [person]"
"including [person] and [person]"
"for example [person] and [person]"
"namely [person] and [person]"
"[person] and [person]"
"[person] [person] and other"
Table 6.7. Patterns used to find co-occurrences within the search engine snippets.
Famous People
Having gathered a list of historical persons with biographical information (Section 4.5), we are interested in how the persons in the list are perceived to be related. Obviously, such information can be extracted from the biographies: persons can be considered related when they share a profession, have the same nationality or lived in the same period.
However, we are interested in the way people nowadays relate the extracted historical figures. For example, we want to identify the person who is considered to be most related to Winston Churchill. We therefore mine the web for a social network of the people extracted using the method in the previous section.
We assume that two persons are related when they are often mentioned in the
same context. Using the hypothesis that enumerated items are often related, we use
the pattern-based approach by selecting enumeration patterns (Table 6.7).
For each of the 3,000 best-ranked persons found in Section 4.5, we computed a ranked list, based on t(p, q), of the most related persons in the large set of the 10,000 persons with biographies.
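A sketch of this step is given below; the co-occurrence counts come from the enumeration patterns, and t(p, q) is taken here as the count for the pair normalized by the total count for p, which is an assumed simplification of the score defined in Section 6.2.1.

    from collections import defaultdict

    def rank_related(cooccurrence, query_persons, candidates, k=20):
        # cooccurrence[(p, q)]: number of snippets in which p and q match an enumeration pattern
        totals = defaultdict(int)
        for (p, q), count in cooccurrence.items():
            totals[p] += count

        neighbors = {}
        for p in query_persons:                       # e.g. the 3,000 best-ranked persons
            scores = []
            for q in candidates:                      # e.g. the 10,000 persons with biographies
                if q == p:
                    continue
                count = cooccurrence.get((p, q), 0)
                if count:
                    scores.append((count / totals[p], q))    # assumed t(p, q)
            scores.sort(reverse=True)
            neighbors[p] = [q for _, q in scores[:k]]        # ranked list of most related persons
        return neighbors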
As it aims to reflect the collective knowledge of web contributors on historical figures, the extracted social network of historical persons is not a verifiable collection of facts. We illustrate the extracted social network with two examples.
Figure 6.5 depicts the relatedness among the best ranked persons. An arrow from
person p to q is drawn if q is among the 20 nearest neighbors of p. Using the same
criterion, Figure 6.6 depicts the relatedness among the best ranked authors.
We are able to verify the precision of the relatedness between historical persons if we make the following assumptions. We consider the following "minimal criteria" for two persons to be related:
- either they lived in the same period, i.e. there is an overlap between the periods in which the two lived, or
- they shared a profession, or
- they shared a nationality, or
- they are both female.

Figure 6.3. Precision for the social network for the n highest ranked persons and their k nearest neighbors (k = 1, 5, 10, 15, 25).
Of course we cannot evaluate the recall of the algorithm on these criteria, as, for example, not all persons sharing a nationality need to be considered related. We therefore evaluate the precision of the social network on these minimal criteria. For the 3,000 best-ranked persons, we select the k most related persons. Per pair we evaluate whether at least one of the four criteria is met. The resulting precision is presented in Figure 6.3. In comparison, the probability that one of the 3,000 persons is related (by these criteria) to a randomly chosen person in the large list of 10,000 is 45%. The precision rates for the social network per criterion can be found in Figure 6.4. The probabilities for two randomly selected persons to share a period, profession and nationality are 38%, 7.5% and 6.5% respectively. The chance that two historical persons are both female is only 0.5% for the studied list of 10,000. We hence conclude that these results give good confidence in the quality of the extracted social network.
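A sketch of this evaluation is given below, assuming each biography record provides birth and death years, a set of professions, a nationality and a gender; the field names are illustrative, not those of the actual data set.

    def meets_minimal_criteria(p, q):
        # at least one of the four minimal criteria for relatedness
        same_period = p["born"] <= q["died"] and q["born"] <= p["died"]
        same_profession = bool(p["professions"] & q["professions"])
        same_nationality = p["nationality"] == q["nationality"]
        both_female = p["gender"] == "female" and q["gender"] == "female"
        return same_period or same_profession or same_nationality or both_female

    def social_network_precision(neighbors, biographies, k):
        # fraction of (person, neighbor) pairs that meet at least one criterion
        correct = total = 0
        for p, related in neighbors.items():
            for q in related[:k]:
                total += 1
                if meets_minimal_criteria(biographies[p], biographies[q]):
                    correct += 1
        return correct / total if total else 0.0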
Musical Artists
In the second case study on the extraction of a network of related instances, we focus on musical artists. To this end, we use two standard sets of artists: a set of 224 artists, equally divided over 14 genres [Knees et al., 2004], and a large set of 1995 artists divided over 9 genres [Schedl et al., 2006]. For both sets of artists, each artist is associated with exactly one genre. We consider two artists to be similar if they share a genre in the test set.

Figure 6.4. Precision for 10-NN per criterion (same nationality, same profession, same period, both female).

Figure 6.5. The extracted social network for the 15 highest ranked persons (among them Johann Sebastian Bach, Johann Wolfgang Goethe, William Shakespeare, Wolfgang Amadeus Mozart, Ludwig van Beethoven, Joseph Haydn, Frederic Chopin, Giuseppe Verdi, Johannes Brahms, Robert Schumann, Franz Schubert, Albert Einstein, Charles Darwin and Leonardo da Vinci).
We use the common test set I_224 of 224 artists, equally divided over 14 genres as defined by Knees et al. [2004] (www.cp.jku.at/people/knees/publications/artistlist224.html), to evaluate the computed artist similarities t(i, i′). As before, we consider two artists to be similar if they share a genre in the test set. In these experiments, we only evaluate precision. If for an artist i no mapping or related instance could be found, we consider the result to be incorrect.
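This evaluation rule can be summarized as follows, under the assumption that an artist for which no related instance is found contributes only incorrect results for its k slots; genre and similar are placeholders for the test-set labels and the computed ranked lists.

    def knn_genre_precision(artists, similar, genre, k):
        correct = total = 0
        for i in artists:
            candidates = similar(i)[:k]        # possibly empty if no mapping was found
            total += k                         # missing neighbors count as incorrect
            correct += sum(1 for j in candidates if genre[j] == genre[i])
        return correct / total if total else 0.0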
Figure 6.6. The extracted social network for the highest ranked authors (among them William Shakespeare, Charles Dickens, Francis Bacon, Mark Twain, Washington Irving, Johann Wolfgang Goethe, Friedrich Schiller, Heinrich Heine, James Joyce, Jorge Luis Borges, Prentice Mulford, Victor Hugo, Oscar Wilde, George Bernard Shaw, Ralph Waldo Emerson, Henry David Thoreau, William Blake, Robert Frost, Dante Alighieri, Robertson Davies and HL Mencken).

like [Artist] and [Artist]
such as [Artist] and [Artist]
including [Artist] and [Artist]
namely [Artist] and [Artist]
[Artist] and [Artist]
[Artist] [Artist] and other
Table 6.8. Patterns for the artist-artist relation.

In this case study, we compare the results of the three alternative methods to obtain the co-occurrence counts. For PCM we added the extra term music when searching for co-occurrences of the artists. For example, the terms Bush and Inner Circle co-occur frequently on the web due to American politics; by adding the term music we restrict ourselves to documents about music.
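To illustrate this restriction, a PCM-style page-count query for one artist pair could be composed as follows; hit_count is a placeholder for the search engine API call, and the exact query format used in the experiments may differ.

    def pcm_query(artist_a, artist_b, extra_term="music"):
        # quoted artist names plus the restricting term
        return f'"{artist_a}" "{artist_b}" {extra_term}'

    def pcm_cooccurrence(artist_a, artist_b, hit_count):
        # hit_count(query) -> number of pages reported by the search engine (placeholder)
        return hit_count(pcm_query(artist_a, artist_b))

    # e.g. pcm_query("Bush", "Inner Circle") -> '"Bush" "Inner Circle" music'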
Since we are not interested in the nature of the relatedness between artists, for PM we selected general enumeration patterns (Table 6.8) to obtain co-occurrences.
Figure 6.7 shows the average precision of the similarity of the artists and their k-NN for the set of 224 artists. Note that for each artist only 15 others are defined to be related. We can conclude that the pattern-based method PM gives good results and outperforms both DM and PCM. For smaller values of k, the method that is least efficient in the number of queries, PCM, is outperformed by both DM and PM. The performance of DM drops quickly due to the fact that only a few related artists are mentioned among the highest ranked pages for the queried instances.
We have also compared the results of the three methods with the data from Last.fm. For each of the 224 artists, we have extracted a ranked list of similarities with the other 223 artists (e.g. http://ws.audioscrobbler.com/1.0/artist/Madonna/similar.xml). For small values of k the Last.fm data is more precise than PCM, but for larger k all web-based methods outperform the Last.fm-based experiment. This can be explained by the fact that Last.fm provides only 100 similar artists for each artist. Hence, the average number of identified related artists within the set of 224 is lower than for the web-based methods.

Figure 6.7. Precision for the categorization of the 224 musical artists compared with the data extracted from Last.fm (PM, DM, PCM and Last.fm).

Figure 6.8. Precision for the sets of 224 and 1995 artists (PM and DM).

Instructors can add Voice Tool and Live Classroom links directly to the Vista Calendar.
... States W2, Canada B12, and Japan [H18, K9] indicate ...
If you work in a large population center, for example, Chicago, Boston, New York City, and ...
They go together like lamb and tuna fish,
Table 6.9. Not all composed queries lead to relevant texts.
Figure 6.8 shows the average precision of the similarity of the artists and their k-NN for the sets of 224 and 1995 artists. We can conclude that the pattern-based method gives good results and outperforms DM in both sets. For the set of 1995 artists we did not compute the co-occurrences using PCM, as this would take over a year using the Yahoo! API.
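The order of magnitude is easy to verify: PCM requires one query per artist pair, i.e. 1995 · 1994 / 2 = 1,989,015 queries. At an assumed limit of 5,000 queries per day (the actual limit of the Yahoo! API at the time may have been different), this already amounts to roughly 398 days of querying.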
Dealing with Ambiguity. As the use of PM leads to good results in the identification of artist similarities, we applied the method to a collection of 1732 artists, used in music recommender experiments [Tiemann & Pauws, 2007]. As this set contains a number of ambiguous artist names, we return to the work discussed in Section 3.2.1, and propose a method to identify related instances for a set I