tensorflow - Strange output of embedding_lookup_sparse -


Consider this code:

import tensorflow as tf

w = tf.Variable(tf.constant([[1., 2], [3, 4], [5, 6], [7, 8]]))
sess = tf.Session()
sess.run(tf.initialize_all_variables())

st = tf.SparseTensor([[0, 0, 0],
                      [0, 1, 1],
                      [1, 0, 0],
                      [1, 1, 0],
                      [2, 0, 0],
                      [2, 1, 0]],
                     [0, 2, 0, 1, 1, 3],
                     [3, 3, 2])

sess.run(tf.nn.embedding_lookup_sparse(w, st, None, combiner='sum'))

Output:

array([[  6.,   8.],
       [  4.,   6.],
       [ 10.,  12.]], dtype=float32)

According to the documentation, the shape of the output should be [3, 3, 3], since shape(sp_ids) = [3, 3, 2] and shape(w) = [4, 3], but it isn't :(

Can anyone explain why it works this way? I expected the behavior of a simple embedding_lookup with aggregation over the last axis.

Edit:

For each object I have 2 features, each representing 1 word. Since I want to represent each object by 2 concatenated embeddings, I can do that with:

sess.run(tf.reshape(tf.nn.embedding_lookup(w, [[0, 1], [2, 3]]), shape=(2, 4))) 

Output:

array([[ 1.,  2.,  3.,  4.],
       [ 5.,  6.,  7.,  8.]], dtype=float32)

So I still have 2 features, but each one can now represent several words, and I want to aggregate the embeddings of the words corresponding to one feature. It looks like embedding_lookup_sparse should work in exactly this way, but I don't understand why my code doesn't work. A sketch of what I mean follows below.
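To make it concrete, the aggregation I have in mind can be written by hand like this (just a sketch: I merge each (object, feature) pair into its own row of a rank-2 SparseTensor, with row = object * 2 + feature as my own layout choice, and reshape afterwards):

# each row of this rank-2 SparseTensor is one (object, feature) pair
st2 = tf.SparseTensor([[0, 0],   # object 0, feature 0 -> word id 0
                       [1, 1],   # object 0, feature 1 -> word id 2
                       [2, 0],   # object 1, feature 0 -> word id 0
                       [3, 0],   # object 1, feature 1 -> word id 1
                       [4, 0],   # object 2, feature 0 -> word id 1
                       [5, 0]],  # object 2, feature 1 -> word id 3
                      [0, 2, 0, 1, 1, 3],
                      [6, 2])
emb = tf.nn.embedding_lookup_sparse(w, st2, None, combiner='sum')  # shape [6, 2]
sess.run(tf.reshape(emb, (3, 2, 2)))  # [object, feature, embedding]

That gives one aggregated embedding per feature, but I would expect embedding_lookup_sparse to handle the rank-3 case directly.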

Thanks!

When you do the lookup, the resulting tensor takes the size of the vectors in the embedding, so the innermost dimension of your output has 2. For instance, if you used:

w = tf.Variable(tf.constant([[1., 2, 0.1, 0.2],
                             [3, 4, 0.3, 0.4],
                             [5, 6, 0.5, 0.6],
                             [7, 8, 0.7, 0.8]]))

the output would have 4 inner columns.
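To illustrate, a small sketch re-using st from the question (re-initializing after redefining w is my own addition):

sess.run(tf.initialize_all_variables())
# Each embedding vector now has 4 components, so the innermost
# dimension of the result becomes 4 while the grouping into 3 rows
# stays the same, i.e. the output should have shape [3, 4].
sess.run(tf.nn.embedding_lookup_sparse(w, st, None, combiner='sum'))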

