Network Representation Learning
Background
Notes taken while studying at KTH. This is the first blog post on network representation learning, a project for the Machine Learning, Advanced Course.
LINE
Reproducing the paper “LINE: Large-scale Information Network Embedding”.
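For context, the paper trains embeddings to preserve two notions of proximity. With $\vec{u}_i$ the embedding of vertex $v_i$ and $\vec{u}'_j$ its “context” embedding, the first-order and second-order objectives are

$$
p_1(v_i, v_j) = \frac{1}{1 + \exp(-\vec{u}_i^{\top}\vec{u}_j)}, \qquad O_1 = -\sum_{(i,j)\in E} w_{ij}\,\log p_1(v_i, v_j)
$$

$$
p_2(v_j \mid v_i) = \frac{\exp\left(\vec{u}^{\prime\top}_j \vec{u}_i\right)}{\sum_{k=1}^{|V|} \exp\left(\vec{u}^{\prime\top}_k \vec{u}_i\right)}, \qquad O_2 = -\sum_{(i,j)\in E} w_{ij}\,\log p_2(v_j \mid v_i)
$$

The normalizer of $p_2$ is expensive on large graphs, which is why the paper optimizes it with negative sampling and draws edges with probability proportional to their weights $w_{ij}$; the alias table method below makes that edge sampling $O(1)$ per draw.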
Alias Table Method
It is a method for efficiently drawing samples from a discrete distribution: after $O(n)$ preprocessing, each sample takes $O(1)$ time. A code sketch follows the references below.
References:
https://www.keithschwarz.com/darts-dice-coins/
https://blog.csdn.net/haolexiao/article/details/65157026
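Here is a minimal sketch of the table construction and the $O(1)$ draw (Vose's variant, following the first reference above); the toy distribution at the bottom is only for illustration.

```python
import random

def build_alias_table(probs):
    """Build the probability and alias tables in O(n) (Vose's method)."""
    n = len(probs)
    scaled = [p * n for p in probs]           # scale so the average mass per slot is 1
    prob, alias = [0.0] * n, [0] * n
    small = [i for i, p in enumerate(scaled) if p < 1.0]
    large = [i for i, p in enumerate(scaled) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        prob[s] = scaled[s]
        alias[s] = l                          # the overflow of slot s is filled from l
        scaled[l] -= 1.0 - scaled[s]          # l donated part of its mass to s
        (small if scaled[l] < 1.0 else large).append(l)
    for i in small + large:                   # leftover slots have (numerically) mass 1
        prob[i] = 1.0
    return prob, alias

def alias_draw(prob, alias):
    """Draw one sample in O(1): pick a slot, then keep it or take its alias."""
    i = random.randrange(len(prob))
    return i if random.random() < prob[i] else alias[i]

# Example: sample indices from the distribution [0.5, 0.3, 0.2].
prob, alias = build_alias_table([0.5, 0.3, 0.2])
samples = [alias_draw(prob, alias) for _ in range(10)]
```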
Negative Sampling
word2vec
Original paper: Efficient Estimation of Word Representations in Vector Space.
Reference:
word2vec Explained: Deriving Mikolov et al.’s Negative-Sampling Word-Embedding Method
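Concretely, the objective derived in that reference replaces the full softmax with a handful of binary classification problems: for each observed input/output word pair $(w_I, w_O)$, maximize

$$
\log \sigma\left(v^{\prime\top}_{w_O} v_{w_I}\right) + \sum_{i=1}^{k} \mathbb{E}_{w_i \sim P_n(w)}\left[\log \sigma\left(-v^{\prime\top}_{w_i} v_{w_I}\right)\right]
$$

where $\sigma$ is the sigmoid, $v$ and $v'$ are the input and output embeddings, $P_n(w)$ is the noise distribution (the unigram distribution raised to the $3/4$ power in the paper), and $k$ is the number of negative samples.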
Skip-Gram Model
Original paper: Distributed Representations of Words and Phrases and their Compositionality.
The idea behind the word2vec models is that words appearing in the same context (near each other) should have similar word vectors, so the training objective should encode some notion of similarity. The dot product serves this role: the more similar two vectors are, the larger their dot product. A runnable sketch follows the reference below.
Reference:
https://www.baeldung.com/cs/nlps-word2vec-negative-sampling
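To make the dot-product intuition concrete, here is a minimal NumPy sketch of one skip-gram training step with negative sampling; the vocabulary size, dimension, learning rate, and word indices are illustrative assumptions, not values from the papers.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, lr = 10_000, 128, 0.025                 # hypothetical vocab size, dim, learning rate
W_in = rng.normal(scale=0.01, size=(V, d))    # "input" vectors (center words)
W_out = np.zeros((V, d))                      # "output" vectors (context words)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_pair(center, context, negatives):
    """One SGD step on a (center, context) pair plus k negative samples."""
    v = W_in[center]
    grad_v = np.zeros(d)
    # Positive sample: push the dot product up (label 1);
    # negative samples: push it down (label 0).
    for word, label in [(context, 1.0)] + [(n, 0.0) for n in negatives]:
        u = W_out[word]
        g = (sigmoid(np.dot(u, v)) - label) * lr
        grad_v += g * u                       # accumulate gradient w.r.t. the center vector
        W_out[word] -= g * v                  # update the context/negative vector
    W_in[center] -= grad_v

# Example: center word 5, context word 42, three negatives drawn from a noise distribution.
train_pair(5, 42, negatives=[17, 993, 4021])
```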
GraphSAGE