adam learning algorithm error #1040
Comments
Can you check your model config to see if sparse_update is enabled?
That error probably means you are using sparse_update training. If so, please add the --ports_num_for_sparse option.
I get another error after I add the sparse update options:

train_arg="--saving_period=1 --port=7164 --ports_num=1 --local=0 --comment=$comment --dot_period=1000 --log_period=1000 --num_passes=100 --trainer_count=10 --ports_num_for_sparse=1 --use_sparse_updater=1 --use_old_updater=1 --enable_grad_sparse_update=50000000 --grad_sparse_update_max_sparse_rate=0.50"

test_arg="--port=7164 --ports_num=1 --distribute_test=0 --job=test --test_pass=0 --test_wait=1 --dot_period=1 --log_period=1000 --saving_period=1 --num_passes=500 --start_pserver=0"

Layer config:

Layer(inputs = [Input("input1", parameter_name = "_layer1_1.w", sparse_remote_update = True)], name = "layer1_1", bias = Bias

train.log:

Fri Dec 30 12:59:00 2016[1,0]:+ ./paddle_trainer --num_gradient_servers=40 --trainer_id=0 --pservers=10.90.163.38,10.90.163.37,10.90.136.26,10.90.136.24,10.90.136.25,10.90.163.32,10.90.163.31,10.90.136.23,10.90.163.30,10.90.163.44,10.90.163.41,10.90.163.40,10.90.163.43,10.90.163.42,10.90.163.19,10.90.163.18,10.90.163.17,10.90.163.16,10.90.163.15,10.90.163.14,10.90.163.13,10.90.163.12,10.90.163.11,10.90.163.27,10.90.163.26,10.90.163.29,10.90.163.28,10.90.163.23,10.90.163.22,10.90.163.25,10.90.163.24,10.90.163.20,10.90.139.20,10.90.139.21,10.90.139.13,10.90.139.14,10.90.139.15,10.90.139.16,10.90.139.19,10.90.148.44 --rdma_tcp=tcp --nics=xgbe0 --saving_period=1 --port=7164 --ports_num=1 --local=0 --comment=_job.16646.instances --dot_period=1000 --log_period=1000 --num_passes=100 --trainer_count=10 --ports_num_for_sparse=1 --use_sparse_updater=1 --use_old_updater=1 --enable_grad_sparse_update=50000000 --grad_sparse_update_max_sparse_rate=0.50 --config=conf/trainer_config.conf --save_dir=./output --python_path=./python-gcc345 --python_bin=python2.7 --use_gpu=0

server.log
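For reference, here is a minimal sketch (not the poster's actual config) of how the quoted layer snippet would sit in a v1 Python trainer config. The Input(...) arguments are copied from the comment above; the layer type, size, and the Bias() call are illustrative assumptions, since the quoted snippet truncates after "bias = Bias".

```python
# Sketch only -- assumes the old PaddlePaddle v1 trainer_config API.
# Input(...) arguments come from the snippet quoted in this thread;
# type, size, and Bias() are placeholders.
Layer(
    name="layer1_1",
    type="fc",      # assumed layer type
    size=256,       # assumed layer size
    bias=Bias(),    # placeholder for the truncated bias spec
    inputs=[Input("input1",
                  parameter_name="_layer1_1.w",
                  sparse_remote_update=True)],  # sparse remote update for this parameter
)
```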
I use adamax:

Settings(
Which of the three algorithms below supports sparse update?
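For what it's worth, based only on what is reported in this thread (adam and adamax erroring while adagrad runs with sparse updates), switching the optimizer in a v1 config comes down to the learning_method field. A hedged sketch follows; the Settings field names are assumed from the old v1 config API, and all numeric values are placeholders.

```python
# Sketch only -- assumes the old v1 Settings(...) API with a learning_method field.
# All numeric values are placeholders.
Settings(
    algorithm='sgd',
    learning_rate=1e-3,
    batch_size=1000,
    learning_method='adagrad',  # reported in this thread to work with sparse updates
    # learning_method='adam',   # reported here to error when sparse_update is enabled
    # learning_method='adamax', # the poster also tried adamax
)
```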
The adam learning algorithm gets an error, but it works well when I set adagrad.

adam settings:

adagrad settings:

train.log: