python - How can the read_cifar10() routine return anything other than the first object in the TensorFlow tutorial?


TensorFlow has a CIFAR-10 tutorial, which is discussed here. The source code in Python is here.

It has a read_cifar10() routine here, which is intended to read samples from a binary file.

I am failing to understand how it works. I suspect it is somehow related to TensorFlow's deferred nature, but I can't figure out how.

At one point the routine does the following:

# Read a record, getting filenames from the filename_queue.  No
# header or footer in the CIFAR-10 format, so we leave header_bytes
# and footer_bytes at their default of 0.
reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
result.key, value = reader.read(filename_queue)

I see here that a new reader is created from scratch, and the reader is pointed at the filename queue.

How many samples are returned by that read call?

Later, inside the distorted_inputs() method, the code does the following:

print ('Filling queue with %d CIFAR images before starting to train. '
       'This will take a few minutes.' % min_queue_examples)

# Generate a batch of images and labels by building up a queue of examples.
return _generate_image_and_label_batch(float_image, read_input.label,
                                       min_queue_examples)

Here print is a normal Python call, not a deferred one, so from the comment I assume the fetching of 20000 records must occur right here.

How can that happen? Everywhere I see only per-one-record logic. How does it get multiplied over many records?

TL;DR: reader.read adds a read operation to the computation graph; the actual execution happens during session.run, done by a separate thread in a while(True): session.run(...) kind of loop initiated by start_queue_runners.

Long version: this part of the "input pipeline" is complicated by the fact that reading/prefetching needs to happen asynchronously to avoid blocking. The official how-to describing input pipelines is here.

To be more specific, reader.read adds an operation to the computation graph that reads a single record. This operation feeds the shuffle_batch created inside _generate_image_and_label_batch. At this point no reading has taken place. The shuffle_batch operation creates a queue that decouples the input flow, in the sense that evaluation of the part of the graph before the queue and the part after the queue can be done asynchronously using different session.run calls, with the queue providing buffering in the middle. Additionally, the shuffle_batch operation registers the operations feeding the queue as part of the GraphKeys.QUEUE_RUNNERS collection.
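To make that concrete, here is a toy sketch of this producer side (not the tutorial code verbatim; the file name, record layout and queue sizes are illustrative assumptions). Every line here only adds nodes to the graph; no bytes are read from disk yet:

import tensorflow as tf

# Hypothetical CIFAR-like layout: one label byte plus a 32x32x3 image per record.
record_bytes = 1 + 32 * 32 * 3
filenames = ['data_batch_1.bin']   # assumed to exist on disk

# Building the graph: we only describe the pipeline, nothing is read here.
filename_queue = tf.train.string_input_producer(filenames)
reader = tf.FixedLengthRecordReader(record_bytes=record_bytes)
key, value = reader.read(filename_queue)        # op that reads ONE record per execution
record = tf.decode_raw(value, tf.uint8)
label = tf.cast(record[0], tf.int32)
image = tf.reshape(record[1:], [3, 32, 32])     # CIFAR stores images depth-major
image = tf.cast(tf.transpose(image, [1, 2, 0]), tf.float32)

# shuffle_batch creates a queue under the hood and registers a QueueRunner for it
# in the GraphKeys.QUEUE_RUNNERS collection; still no data has moved.
images, labels = tf.train.shuffle_batch(
    [image, label], batch_size=128,
    capacity=20000 + 3 * 128, min_after_dequeue=20000)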

Inside train(), tf.train.start_queue_runners creates several threads corresponding to the enqueue operations registered in the GraphKeys.QUEUE_RUNNERS collection and starts evaluating them in a loop. The results of reader.read flow through the other ops until they reach the shuffle_batch queue and get saved in its memory buffer.
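Conceptually, each of those threads does little more than the loop below. This is a hedged approximation, not the actual tf.train.QueueRunner source (which also handles queue closing and error propagation); sess, enqueue_op and coord (a tf.train.Coordinator) are assumed to be provided by the surrounding code:

import tensorflow as tf

def queue_runner_loop(sess, enqueue_op, coord):
    # Rough sketch of a single queue-runner thread.
    while not coord.should_stop():
        try:
            # Each iteration pulls one record through reader.read (and the ops
            # after it) and pushes the result into the shuffle_batch queue.
            sess.run(enqueue_op)
        except tf.errors.OutOfRangeError:
            # The input is exhausted: ask all threads to stop.
            coord.request_stop()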

The part of the graph after shuffle_batch is driven by the main Python thread, initiated by the sess.run([train_op, loss]) command. This thread collects a batch of examples saved on the shuffle_batch queue and propagates it forward.
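Putting the two sides together, the main thread ends up looking roughly like the sketch below. It assumes train_op and loss were built on top of the images/labels tensors returned by shuffle_batch; the tutorial's own train() is more elaborate:

import tensorflow as tf

def toy_train_loop(train_op, loss, max_steps=1000):
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        coord = tf.train.Coordinator()
        # Launch the background threads that keep the shuffle_batch queue full.
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
        try:
            for step in range(max_steps):
                # Each call dequeues one ready-made batch from the shuffle_batch
                # queue and runs the rest of the graph on it.
                _, loss_value = sess.run([train_op, loss])
        finally:
            coord.request_stop()
            coord.join(threads)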

Here's an example of feeding an input queue manually instead of using queue runners.

import numpy as np
import tensorflow as tf

queue_dtype = np.int32
queue_capacity = 2
values_queue = tf.FIFOQueue(capacity=queue_capacity, dtypes=queue_dtype)
size_op = values_queue.size()
value_placeholder = tf.placeholder(dtype=queue_dtype)
enqueue_op = values_queue.enqueue(value_placeholder)
dequeue_op = values_queue.dequeue()
close_op = values_queue.close()

sess = tf.InteractiveSession()
sess.run(tf.initialize_all_variables())

# Add 2 elements onto the queue
sess.run([enqueue_op], {value_placeholder: 2})
sess.run([enqueue_op], {value_placeholder: 3})
# If you uncomment the next line, you'll hang because the queue is full
# sess.run([enqueue_op], {value_placeholder: 4})

# Close the queue. This means the 3rd read will throw an OutOfRangeError instead of
# hanging until the queue is replenished
sess.run([close_op])
print('Queue has %d/%d entries' % (sess.run([size_op])[0], queue_capacity))

# Take 2 elements off the queue
fancy_computation = tf.square(dequeue_op)
print('Computation result %d' % (sess.run([fancy_computation])[0]))
print('Queue has %d/%d entries' % (sess.run([size_op])[0], queue_capacity))
print('Computation result %d' % (sess.run([fancy_computation])[0]))
print('Queue has %d/%d entries' % (sess.run([size_op])[0], queue_capacity))
print('Computation result %d' % (sess.run([fancy_computation])[0]))
print('Queue has %d/%d entries' % (sess.run([size_op])[0], queue_capacity))

Here is what you should see if you run it:

Queue has 2/2 entries
Computation result 4
Queue has 1/2 entries
Computation result 9
Queue has 0/2 entries
---------------------------------------------------------------------------
OutOfRangeError
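That final OutOfRangeError is the normal "end of input" signal from a closed, drained queue rather than a failure. Reusing the sess, fancy_computation names from the example above, a consumer would typically catch it like this (a small sketch, not part of the original example):

# Keep dequeuing until the closed queue runs dry; OutOfRangeError is the
# expected end-of-input signal here, not a failure.
while True:
    try:
        print('Computation result %d' % sess.run(fancy_computation))
    except tf.errors.OutOfRangeError:
        print('Queue is closed and empty, stopping')
        break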
