This is a TensorFlow (TF) implementation of the paper "Learning by Association: A versatile semi-supervised training method for neural networks" (https://arxiv.org/pdf/1706.00909.pdf).
"We feed a batch of labeled and a batch of unlabeled data through a network, producing embeddings for both batches. Then, an imaginary walker is sent from samples in the labeled batch to sampled in the unlabeled batch. The transition follows a probability distribution obtained from the similarity of the respective embeddings which we refer to as an association"
In other words, given a batch A of labeled data and a batch B of unlabeled data, we first use an arbitrary network to compute the embeddings emb(A) and emb(B). For each a in emb(A), we find a b in emb(B) that is "similar to" a; analogously, we find an a' in emb(A) that is "similar to" b. We penalize the round trip if class(a) and class(a') differ. The paper likens this to an "imaginary walker" that travels from emb(A) to emb(B) and back to emb(A), following the probability distribution obtained from the similarity matrix.
The red arrow shows the traveling path of an "imaginary walker".
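As a rough illustration of the round-trip idea, here is a minimal sketch of the "walker" loss in TensorFlow 2. The function name `walker_loss` and its arguments are hypothetical (not this repo's API): `emb_a`/`emb_b` stand for the embeddings of the labeled and unlabeled batches, and `labels_a` for the class labels of batch A.

```python
import tensorflow as tf

def walker_loss(emb_a, emb_b, labels_a):
    """Sketch of the round-trip 'walker' loss (hypothetical helper).

    emb_a:    [N_a, D] embeddings of the labeled batch A
    emb_b:    [N_b, D] embeddings of the unlabeled batch B
    labels_a: [N_a]    integer class labels for batch A
    """
    # Similarity matrix M_ij = <a_i, b_j>
    match = tf.matmul(emb_a, emb_b, transpose_b=True)           # [N_a, N_b]
    # Transition probabilities A -> B and B -> A via row-wise softmax
    p_ab = tf.nn.softmax(match, axis=1)                         # [N_a, N_b]
    p_ba = tf.nn.softmax(tf.transpose(match), axis=1)           # [N_b, N_a]
    # Probability of the round trip a_i -> b -> a_j
    p_aba = tf.matmul(p_ab, p_ba)                               # [N_a, N_a]
    # Target: uniform over pairs (i, j) with class(a_i) == class(a_j)
    same = tf.cast(tf.equal(labels_a[:, None], labels_a[None, :]), tf.float32)
    target = same / tf.reduce_sum(same, axis=1, keepdims=True)
    # Cross-entropy between the target and the round-trip distribution,
    # penalizing walks that end on a differently labeled sample
    return tf.reduce_mean(
        tf.reduce_sum(-target * tf.math.log(p_aba + 1e-8), axis=1))
```

In the paper, this walker loss is combined with a "visit" loss (encouraging the walker to visit all unlabeled samples) and the usual classification loss on batch A.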