
Word2Vec and GloVe Vectors
Last time, we saw how autoencoders can be used to learn a latent embedding space: an alternative, low-dimensional representation of a set of data with some appealing properties. For example, we saw that interpolating in the latent space is a way of generating new examples. In particular, interpolation in the latent space generates more compelling examples than, […]
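As a reminder of what latent-space interpolation looks like in practice, here is a minimal sketch: we linearly blend two latent codes and would then decode each intermediate point with the trained decoder (not shown). The vectors `z_a` and `z_b` below are hypothetical stand-ins for codes produced by an encoder.

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps=5):
    # Linear interpolation between two latent vectors.
    # Decoding each row with a trained decoder would yield
    # a smooth sequence of generated examples.
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - a) * z_a + a * z_b for a in alphas])

# Hypothetical latent codes for two inputs.
z_a = np.array([0.0, 1.0])
z_b = np.array([1.0, 0.0])
path = interpolate_latents(z_a, z_b, steps=3)
print(path)  # endpoints plus the midpoint [0.5, 0.5]
```

Because the autoencoder was trained so that nearby latent points decode to similar data, each step along this path tends to produce a plausible example, which is why interpolating in latent space is more compelling than interpolating raw inputs directly.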