This repository has been archived by the owner on Jan 3, 2023. It is now read-only.

Python3 Support #192

Closed
meawoppl opened this issue Jan 23, 2016 · 1 comment

Comments

@meawoppl

This is mostly just a matter of converting Python 2 print statements:

build/lib/neon/models/rnn.py:140:                if self.step_print > 0 and mb_id % self.step_print == 0:
build/lib/neon/models/rnn.py:502:            For text prediction, reassemble the batches and print out a short
build/lib/neon/models/mlp.py:121:            assert self.step_print != 0
build/lib/neon/models/mlp.py:165:                if self.step_print > 0 and mb_id % self.step_print == 0:
build/lib/neon/models/model.py:416:        # print results
build/lib/neon/data/convert_imageset_batches.py:42:        print "Converting ", ifname
build/lib/neon/data/questionanswer.py:91:        print 'Preparing bAbI dataset or extracting from %s' % path
build/lib/neon/data/questionanswer.py:92:        print 'Task is %s/%s' % (subset, task)
build/lib/neon/data/imagecaption.py:77:        print "Vocab size: %d, Max sentence length: %d" % (self.vocab_size,
build/lib/neon/data/imagecaption.py:102:        print 'Reading train images and sentences from %s' % self.path
build/lib/neon/data/imagecaption.py:248:        print "Writing output and reference sents to dir %s" % self.path
build/lib/neon/data/imagecaption.py:268:        print "Executing bleu eval script: ", bleu_command
build/lib/neon/data/imagecaption.py:307:        print 'Reading test images and sentences from %s' % self.path
build/lib/neon/data/dataiterator.py:151:            print bidx, train_set.start
build/lib/neon/data/imageloader.py:258:            print '****', epoch, master.start, master.idx, master.ndata
build/lib/neon/data/imageloader.py:259:            print t.get().argmax(axis=0)[:17]
build/lib/neon/util/update_dataset_cache.py:62:    print '%s updated to new format' % cache_file
build/lib/neon/util/trace.py:112:        Tracer.logger.debug(print_str)  # print print_str
build/lib/neon/diagnostics/timing_plots.py:199:    print out soumith benchmark numers
build/lib/neon/callbacks/callbacks.py:663:        tag (string): Label to print before the bar (i.e. Train, Valid, Test )
build/lib/neon/callbacks/callbacks.py:715:            # print the new line
build/lib/neon/optimizers/optimizer.py:187:            name (str, optional): the optimizer's layer's pretty-print name.
build/lib/neon/backends/backend.py:2159:        Pretty print of the optree
build/lib/neon/backends/autodiff.py:314:            # print 'created grad_tree cache'
build/lib/neon/backends/tests/test_autodiff_extra.py:234:            # print count
build/lib/neon/backends/tests/test_backend_conv_layer.py:221:        print op
build/lib/neon/backends/tests/test_sr.py:20:    print C_host
build/lib/neon/backends/tests/test_gpu_pool_layer.py:164:                        # print idx
build/lib/neon/backends/tests/test_backend_pool_layer.py:204:        print opA
build/lib/neon/backends/util/check_gpu.py:62:        print "Found GPU(s) with compute capability:", full_version
build/lib/neon/backends/util/check_gpu.py:93:        print "Found %d GPU(s)", count
build/lib/neon/backends/util/check_gpu.py:98:    print get_compute_capability(verbose=False)
build/lib/neon/backends/layer_gpu.py:504:        # for k in self.fprop_kernels: print k
build/lib/neon/backends/layer_gpu.py:505:        # for k in self.bprop_kernels: print k
build/lib/neon/backends/layer_gpu.py:539:            # print grid_P, grid_Q
build/lib/neon/backends/layer_gpu.py:561:        # for k in self.updat_kernels: print k
build/lib/neon/backends/test_lrn.py:86:    print "== denom =="
build/lib/neon/backends/test_lrn.py:87:    print "CPU fprop"
build/lib/neon/backends/test_lrn.py:88:    print cccD.get().reshape(C*D*H*W, N)[0:4, 0:4]
build/lib/neon/backends/test_lrn.py:89:    print "GPU fprop"
build/lib/neon/backends/test_lrn.py:90:    print devD.get().reshape(C*D*H*W, N)[0:4, 0:4]
build/lib/neon/backends/test_lrn.py:92:    print "== output =="
build/lib/neon/backends/test_lrn.py:93:    print "CPU fprop"
build/lib/neon/backends/test_lrn.py:94:    print cccO.get().reshape(C*D*H*W, N)[0:4, 0:4]
build/lib/neon/backends/test_lrn.py:95:    print "GPU fprop"
build/lib/neon/backends/test_lrn.py:96:    print devO.get().reshape(C*D*H*W, N)[0:4, 0:4]
build/lib/neon/backends/test_lrn.py:102:    print "== bprop =="
build/lib/neon/backends/test_lrn.py:103:    print "CPU bprop"
build/lib/neon/backends/test_lrn.py:104:    print cccB.get().reshape(C*D*H*W, N)[0:4, 0:4]
build/lib/neon/backends/test_lrn.py:105:    print "GPU bprop"
build/lib/neon/backends/test_lrn.py:106:    print devB.get().reshape(C*D*H*W, N)[0:4, 0:4]
build/lib/neon/backends/kernels/cuda/pooling.py:842:    # print >>f, code
build/lib/neon/backends/nervanagpu.py:129:            # print "allocate!"
build/lib/neon/backends/nervanagpu.py:621:            # print ('created memoize stack')
build/lib/neon/backends/nervanagpu.py:640:        bench (bool, optional): set to True to print out performance data for
build/lib/neon/backends/nervanagpu.py:2066:#     print caller
build/lib/neon/backends/convnet-benchmarks.py:40:# print more stuff
build/lib/neon/backends/make_kernels.py:40:             help="print output of nvcc calls.")
build/lib/neon/backends/make_kernels.py:121:        print "Missing sass file: %s for kernel: %s" % (sass_file, kernel_name)
build/lib/neon/backends/make_kernels.py:165:                print cmdline
build/lib/neon/backends/make_kernels.py:167:                print proc.stderr.read()
build/lib/neon/backends/make_kernels.py:173:                print output
build/lib/neon/backends/make_kernels.py:176:        print "%d kernels compiled." % len(kernels_made)
build/lib/neon/backends/float_ew.py:112:    print tree with indentation
build/lib/neon/backends/float_ew.py:116:        print ("    " * level) + ", ".join(str(s) for s in node[0:3])
build/lib/neon/backends/float_ew.py:122:        print ("    " * level) + str(node)
build/lib/neon/backends/float_ew.py:288:    #     print stage_data[0], stage
build/lib/neon/backends/float_ew.py:289:    #     for s in stage_data[1]: print s
build/lib/neon/backends/float_ew.py:598:    # print "Compiling %s" % template_vals["name"]
build/lib/neon/backends/float_ew.py:601:    # print >>f, code
build/lib/neon/backends/float_ew.py:901:    # for s in argsprint:   print s
build/lib/neon/backends/float_ew.py:902:    # for s in kernel_args: print s
build/lib/neon/backends/float_ew.py:903:    # for s in type_args:   print s
build/lib/neon/backends/float_ew.py:921:        # for a in kernel_args: print a
build/lib/neon/backends/float_ew.py:940:        #print reduce_shape
build/lib/neon/backends/float_ew.py:1198:    # print >>f, code
doc/source/autodiff.rst:29:    print ad.get_grad_asnumpyarray([x0, x1]) # result is [2 * x0 + x1, x0]
examples/cifar10_conv.py:74:print 'Misclassification error = %.1f%%' % (mlp.eval(test, metric=Misclassification())*100)
examples/babi/demo.py:71:print "\nThe vocabulary set from this task has {} words:".format(babi.vocab_size)
examples/babi/demo.py:72:print stitch_sentence(babi.vocab)
examples/babi/demo.py:73:print "\nExample from test set:"
examples/babi/demo.py:74:print "\nStory"
examples/babi/demo.py:75:print stitch_sentence(ex_story)
examples/babi/demo.py:76:print "Question"
examples/babi/demo.py:77:print stitch_sentence(ex_question)
examples/babi/demo.py:78:print "\nAnswer"
examples/babi/demo.py:79:print ex_answer
examples/babi/demo.py:105:    print "\nAnswer:"
examples/babi/demo.py:108:        print babi.index_to_word[idx], float(probs[idx])
examples/char_lstm.py:94:print 'Misclassification error = %.1f%%' % ((1-fraction_correct)*100)
examples/timeseries_lstm.py:30:    print 'matplotlib needs to be installed manually to generate plots needed for this example'
examples/timeseries_lstm.py:303:    print 'terr = %g, verr = %g' % (terr, verr)
examples/imdb_lstm.py:58:print "Vocab size - ", vocab_size
examples/imdb_lstm.py:59:print "Sentence Length - ", sentence_length
examples/imdb_lstm.py:60:print "# of train sentences", X_train.shape[0]
examples/imdb_lstm.py:61:print "# of test sentence", X_test.shape[0]
examples/imdb_lstm.py:90:print "Test  Accuracy - ", 100 * model.eval(valid_set, metric=Accuracy())
examples/imdb_lstm.py:91:print "Train Accuracy - ", 100 * model.eval(train_set, metric=Accuracy())
examples/text_generation_lstm.py:130:print ''.join(seed_tokens + text)
examples/conv_autoencoder.py:66:print mlp.layers.layers[-1]
examples/conv_autoencoder.py:82:    print 'matplotlib needs to be manually installed to generate plots'
neon/models/model.py:416:        # print results
neon/data/convert_imageset_batches.py:42:        print "Converting ", ifname
neon/data/questionanswer.py:91:        print 'Preparing bAbI dataset or extracting from %s' % path
neon/data/questionanswer.py:92:        print 'Task is %s/%s' % (subset, task)
neon/data/imagecaption.py:77:        print "Vocab size: %d, Max sentence length: %d" % (self.vocab_size,
neon/data/imagecaption.py:102:        print 'Reading train images and sentences from %s' % self.path
neon/data/imagecaption.py:248:        print "Writing output and reference sents to dir %s" % self.path
neon/data/imagecaption.py:268:        print "Executing bleu eval script: ", bleu_command
neon/data/imagecaption.py:307:        print 'Reading test images and sentences from %s' % self.path
neon/data/dataiterator.py:151:            print bidx, train_set.start
neon/data/imageloader.py:258:            print '****', epoch, master.start, master.idx, master.ndata
neon/data/imageloader.py:259:            print t.get().argmax(axis=0)[:17]
neon/util/update_dataset_cache.py:62:    print '%s updated to new format' % cache_file
neon/callbacks/callbacks.py:663:        tag (string): Label to print before the bar (i.e. Train, Valid, Test )
neon/callbacks/callbacks.py:715:            # print the new line
neon/optimizers/optimizer.py:187:            name (str, optional): the optimizer's layer's pretty-print name.
neon/backends/backend.py:2159:        Pretty print of the optree
neon/backends/autodiff.py:314:            # print 'created grad_tree cache'
neon/backends/tests/test_autodiff_extra.py:234:            # print count
neon/backends/tests/test_backend_conv_layer.py:221:        print op
neon/backends/tests/test_sr.py:20:    print C_host
neon/backends/tests/test_gpu_pool_layer.py:164:                        # print idx
neon/backends/tests/test_backend_pool_layer.py:204:        print opA
neon/backends/util/check_gpu.py:62:        print "Found GPU(s) with compute capability:", full_version
neon/backends/util/check_gpu.py:93:        print "Found %d GPU(s)", count
neon/backends/util/check_gpu.py:98:    print get_compute_capability(verbose=False)
neon/backends/layer_gpu.py:504:        # for k in self.fprop_kernels: print k
neon/backends/layer_gpu.py:505:        # for k in self.bprop_kernels: print k
neon/backends/layer_gpu.py:539:            # print grid_P, grid_Q
neon/backends/layer_gpu.py:561:        # for k in self.updat_kernels: print k
neon/backends/test_lrn.py:86:    print "== denom =="
neon/backends/test_lrn.py:87:    print "CPU fprop"
neon/backends/test_lrn.py:88:    print cccD.get().reshape(C*D*H*W, N)[0:4, 0:4]
neon/backends/test_lrn.py:89:    print "GPU fprop"
neon/backends/test_lrn.py:90:    print devD.get().reshape(C*D*H*W, N)[0:4, 0:4]
neon/backends/test_lrn.py:92:    print "== output =="
neon/backends/test_lrn.py:93:    print "CPU fprop"
neon/backends/test_lrn.py:94:    print cccO.get().reshape(C*D*H*W, N)[0:4, 0:4]
neon/backends/test_lrn.py:95:    print "GPU fprop"
neon/backends/test_lrn.py:96:    print devO.get().reshape(C*D*H*W, N)[0:4, 0:4]
neon/backends/test_lrn.py:102:    print "== bprop =="
neon/backends/test_lrn.py:103:    print "CPU bprop"
neon/backends/test_lrn.py:104:    print cccB.get().reshape(C*D*H*W, N)[0:4, 0:4]
neon/backends/test_lrn.py:105:    print "GPU bprop"
neon/backends/test_lrn.py:106:    print devB.get().reshape(C*D*H*W, N)[0:4, 0:4]
neon/backends/kernels/cuda/pooling.py:842:    # print >>f, code
neon/backends/nervanagpu.py:129:            # print "allocate!"
neon/backends/nervanagpu.py:621:            # print ('created memoize stack')
neon/backends/nervanagpu.py:640:        bench (bool, optional): set to True to print out performance data for
neon/backends/nervanagpu.py:2066:#     print caller
neon/backends/convnet-benchmarks.py:40:# print more stuff
neon/backends/make_kernels.py:40:             help="print output of nvcc calls.")
neon/backends/make_kernels.py:121:        print "Missing sass file: %s for kernel: %s" % (sass_file, kernel_name)
neon/backends/make_kernels.py:165:                print cmdline
neon/backends/make_kernels.py:167:                print proc.stderr.read()
neon/backends/make_kernels.py:173:                print output
neon/backends/make_kernels.py:176:        print "%d kernels compiled." % len(kernels_made)
neon/backends/float_ew.py:112:    print tree with indentation
neon/backends/float_ew.py:116:        print ("    " * level) + ", ".join(str(s) for s in node[0:3])
neon/backends/float_ew.py:122:        print ("    " * level) + str(node)
neon/backends/float_ew.py:288:    #     print stage_data[0], stage
neon/backends/float_ew.py:289:    #     for s in stage_data[1]: print s
neon/backends/float_ew.py:598:    # print "Compiling %s" % template_vals["name"]
neon/backends/float_ew.py:601:    # print >>f, code
neon/backends/float_ew.py:901:    # for s in argsprint:   print s
neon/backends/float_ew.py:902:    # for s in kernel_args: print s
neon/backends/float_ew.py:903:    # for s in type_args:   print s
neon/backends/float_ew.py:921:        # for a in kernel_args: print a
neon/backends/float_ew.py:940:        #print reduce_shape
neon/backends/float_ew.py:1198:    # print >>f, code
tests/test_schedule.py:40:        # print epoch, lr, lr2
tests/grad_funcs.py:51:    print 'epsilon, max diff'
tests/grad_funcs.py:63:        print '%e %e %e' % (epsilon, max_abs, max_rel)
tests/grad_funcs.py:64:    print 'Min max diff : %e at Pert. Mag. %e' % (min_max_diff, min_max_pert)
tests/grad_funcs.py:147:    print 'Worst case diff %e, vals grad: %e, bprop: %e' % (max_abs_err,
tests/grad_funcs.py:150:    print 'Worst case diff %e, vals grad: %e, bprop: %e' % (max_rel_err,
tests/test_dataset.py:36:            print X_batch.shape, y_batch.shape
tests/serialization_check.py:38:    print 'comparing pickle files %s and %s' % (fn1, fn2)
tests/serialization_check.py:66:            print 'worst case abs diff = %e' % worst_case
tests/serialization_check.py:70:            print 'worst case rel diff = %e' % worst_case
tests/serialization_check.py:166:        print 'test failed....'
tests/test_lookuptable.py:43:        print fargs
tests/test_misc.py:33:    print d_array2.get()
tests/test_misc.py:37:    print d_array2.get()
tests/test_misc.py:40:    print d_error.get()
tests/test_misc.py:42:    print d_error.get()
tests/test_mergebroadcast_layer.py:108:    print neon_layer.nested_str()
tests/test_mergebroadcast_layer.py:163:    print np.max(np.abs(difference))
tests/test_mergebroadcast_layer.py:165:    print "Beginning Back prop"
tests/test_mergebroadcast_layer.py:183:    print np.max(np.abs(difference))
tests/test_mergebroadcast_layer.py:209:    print neon_layer.nested_str()
tests/test_mergebroadcast_layer.py:274:    print np.max(np.abs(difference))
tests/test_mergebroadcast_layer.py:280:    print np.max(np.abs(difference))
tests/test_mergebroadcast_layer.py:282:    print "Beginning Back prop"
tests/test_mergebroadcast_layer.py:310:    print np.max(np.abs(difference))
tests/test_mergebroadcast_layer.py:321:    print np.max(np.abs(difference))
tests/test_recurrent.py:153:    print '====Verifying hidden states===='
tests/test_recurrent.py:154:    print allclose_with_out(rnn.outputs.get(),
tests/test_recurrent.py:158:    print 'fprop is verified'
tests/test_recurrent.py:160:    print '====Verifying update on W and b ===='
tests/test_recurrent.py:161:    print 'dWxh'
tests/test_recurrent.py:166:    print 'dWhh'
tests/test_recurrent.py:172:    print '====Verifying update on bias===='
tests/test_recurrent.py:173:    print 'db'
tests/test_recurrent.py:179:    print 'bprop is verified'
tests/test_recurrent.py:214:    print 'Perturb mag, max grad diff'
tests/test_recurrent.py:235:        print '%e, %e' % (pert_mag, dd)
tests/test_recurrent.py:243:    print 'Worst case error %e with perturbation %e' % (min_max_err, pert_mag)
tests/test_recurrent.py:244:    print 'Threshold %e' % (threshold)
tests/utils.py:45:    # if it fails print some stats
tests/utils.py:50:        print 'abs errors: %e [%e, %e] Abs Thresh = %e' \
tests/utils.py:53:        print 'worst case: %e %e' % (x.flat[amax], y.flat[amax])
tests/utils.py:55:        print 'rel errors: %e [%e, %e] Rel Thresh = %e' \
tests/utils.py:58:        print 'worst case: %e %e' % (x.flat[amax], y.flat[amax])
tests/test_gru.py:156:    print '====Verifying hidden states===='
tests/test_gru.py:157:    print allclose_with_out(gru.outputs.get(),
tests/test_gru.py:162:    print 'fprop is verified'
tests/test_gru.py:165:    print 'Making sure neon GRU match numpy GRU in bprop'
tests/test_gru.py:196:    # print '====Verifying hidden deltas ===='
tests/test_gru.py:197:    print '====Verifying r deltas ===='
tests/test_gru.py:203:    print '====Verifying z deltas ===='
tests/test_gru.py:209:    print '====Verifying hcan deltas ===='
tests/test_gru.py:215:    print '====Verifying update on W_input===='
tests/test_gru.py:216:    print 'dWxr'
tests/test_gru.py:221:    print 'dWxz'
tests/test_gru.py:226:    print 'dWxc'
tests/test_gru.py:232:    print '====Verifying update on W_recur===='
tests/test_gru.py:234:    print 'dWrr'
tests/test_gru.py:239:    print 'dWrz'
tests/test_gru.py:244:    print 'dWrc'
tests/test_gru.py:250:    print '====Verifying update on bias===='
tests/test_gru.py:251:    print 'dbr'
tests/test_gru.py:256:    print 'dbz'
tests/test_gru.py:261:    print 'dbc'
tests/test_gru.py:267:    print 'bprop is verified'
tests/test_gru.py:302:    print 'Perturb mag, max grad diff'
tests/test_gru.py:323:        print '%e, %e' % (pert_mag, dd)
tests/test_gru.py:331:    print 'Worst case error %e with perturbation %e' % (min_max_err, pert_mag)
tests/test_gru.py:332:    print 'Threshold %e' % (threshold)
tests/lstm_ref.py:252:    print 'Making sure batched version agrees with sequential version: (should all be True)'
tests/lstm_ref.py:253:    print np.allclose(BdX, dX)
tests/lstm_ref.py:254:    print np.allclose(BdWLSTM, dWLSTM)
tests/lstm_ref.py:255:    print np.allclose(Bdc0, dc0)
tests/lstm_ref.py:256:    print np.allclose(Bdh0, dh0)
tests/lstm_ref.py:319:            # print stats
tests/lstm_ref.py:329:    print 'every line should start with OK. Have a nice day!'
tests/test_branch_layer.py:125:    print difference
tests/test_lstm.py:140:    print '====Verifying IFOG===='
tests/test_lstm.py:146:    print '====Verifying cell states===='
tests/test_lstm.py:152:    print '====Verifying hidden states===='
tests/test_lstm.py:158:    print 'fprop is verified'
tests/test_lstm.py:179:    print 'Making sure neon LSTM match numpy LSTM in bprop'
tests/test_lstm.py:180:    print '====Verifying update on W_recur===='
tests/test_lstm.py:187:    print '====Verifying update on W_input===='
tests/test_lstm.py:193:    print '====Verifying update on bias===='
tests/test_lstm.py:199:    print '====Verifying output delta===='
tests/test_lstm.py:205:    print 'bprop is verified'
tests/test_lstm.py:310:    print 'Perturb mag, max grad diff'
tests/test_lstm.py:331:        print '%e, %e' % (pert_mag, dd)
tests/test_lstm.py:339:    print 'Worst case error %e with perturbation %e' % (min_max_err, pert_mag)
tests/test_lstm.py:340:    print 'Threshold %e' % (threshold)
tests/test_conv_layer.py:262:    print 'atol on bprop dW = %e' % atol
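All of the statements above fail on Python 3 because `print` became a function. A minimal sketch of the fix (the function and values below are hypothetical, not taken from the neon codebase) that keeps the code working on both Python 2 and 3:

```python
# Importing print_function makes Python 2 treat print as a function,
# matching Python 3 semantics; on Python 3 the import is a no-op.
from __future__ import print_function


def format_vocab_msg(vocab_size, max_len):
    # Python 2 form:  print "Vocab size: %d, Max sentence length: %d" % (...)
    # Returning the formatted string keeps the message testable.
    return "Vocab size: %d, Max sentence length: %d" % (vocab_size, max_len)


# Python 3 form: print is an ordinary function call.
print(format_vocab_msg(20000, 30))
```

For a bulk conversion, the stdlib `2to3` tool has a dedicated fixer for this (e.g. `2to3 -f print -w neon/` rewrites the print statements in place), though the `print >>f, code` redirections in `float_ew.py` and `pooling.py` also need the `file=f` keyword form.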

Do you accept PRs toward Python 3 compatibility?

@scttl
Contributor

scttl commented Jan 23, 2016

Yes, please see our response in #191 (closing this as a duplicate).

@scttl scttl closed this as completed Jan 23, 2016