1856 Commits

Author SHA1 Message Date
Glenn Jocher 93c348f353 updates 2019-11-30 18:52:37 -08:00
Glenn Jocher 6a05cf56c2 updates 2019-11-30 18:45:43 -08:00
Glenn Jocher 8be4b41b3d updates 2019-11-30 18:19:17 -08:00
Glenn Jocher 34155887bc updates 2019-11-30 17:48:21 -08:00
Glenn Jocher 6992c68e33 updates 2019-11-30 17:47:49 -08:00
Glenn Jocher 0f6954fa04 updates 2019-11-30 17:47:33 -08:00
Glenn Jocher a699c901d3 updates 2019-11-30 17:38:29 -08:00
Glenn Jocher f2ec1cb9ea updates 2019-11-30 17:19:44 -08:00
Glenn Jocher 4e0067cdc9 updates 2019-11-30 17:14:53 -08:00
Glenn Jocher e28a425384 updates 2019-11-30 17:13:21 -08:00
Glenn Jocher 3cdbf246c9 updates 2019-11-30 17:03:47 -08:00
Glenn Jocher 1ff01f0973 updates 2019-11-30 16:58:56 -08:00
Glenn Jocher ff41a15a2b updates 2019-11-30 15:34:57 -08:00
Glenn Jocher 8a13bf0f3f updates 2019-11-30 15:33:10 -08:00
Glenn Jocher 23ca2f2e7e updates 2019-11-30 15:32:39 -08:00
Glenn Jocher 9f0273a459 updates 2019-11-30 14:43:23 -08:00
Glenn Jocher 937d8fa53e updates 2019-11-30 14:17:32 -08:00
Glenn Jocher 8d4790349b updates 2019-11-30 14:16:01 -08:00
Glenn Jocher 5a1bc71406 updates 2019-11-30 13:20:22 -08:00
Glenn Jocher 8afc18e028 updates 2019-11-30 13:00:20 -08:00
Glenn Jocher f365946c2f updates 2019-11-30 12:43:41 -08:00
Glenn Jocher e613bbc88c updates 2019-11-29 19:10:01 -08:00
Glenn Jocher 77012f8f97 updates 2019-11-29 18:20:57 -08:00
Glenn Jocher 51d666a81a updates 2019-11-28 09:05:13 -10:00
Glenn Jocher 6258061a81 updates 2019-11-27 23:36:02 -10:00
Glenn Jocher bccff3bfc1 updates 2019-11-27 23:31:25 -10:00
Glenn Jocher 340e0371f8 updates 2019-11-27 22:36:01 -10:00
Glenn Jocher 9e9a6a1425 updates 2019-11-27 15:50:29 -10:00
Glenn Jocher 82b62c9855 updates 2019-11-27 15:50:00 -10:00
Glenn Jocher 4b251406e2 updates 2019-11-27 15:04:05 -10:00
Glenn Jocher 91fca0e17d updates 2019-11-27 15:03:05 -10:00
Glenn Jocher 9319ae8ff9 updates 2019-11-27 15:00:41 -10:00
Glenn Jocher 413afab11c updates 2019-11-27 14:59:46 -10:00
Glenn Jocher 9c1d7d5248 updates 2019-11-27 14:52:33 -10:00
Glenn Jocher ea19c33a87 updates 2019-11-27 14:35:18 -10:00
Glenn Jocher 3dec99b16c updates 2019-11-26 16:03:45 -10:00
Glenn Jocher 0417b3a527 updates 2019-11-26 13:53:05 -10:00
Glenn Jocher 78a2de52b5 updates 2019-11-26 13:23:47 -10:00
Glenn Jocher b04392e298 updates 2019-11-26 12:59:13 -10:00
Glenn Jocher 40ae87cb46 updates 2019-11-26 12:36:21 -10:00
Glenn Jocher 0fe40cb687 updates 2019-11-26 12:34:47 -10:00
Glenn Jocher 92f742618c updates 2019-11-26 10:26:14 -10:00
Glenn Jocher b269ed7b29 updates 2019-11-25 18:42:48 -10:00
Glenn Jocher 3c57ff7b1b updates 2019-11-25 17:24:05 -10:00
Glenn Jocher 90cfb91858 updates 2019-11-25 17:13:10 -10:00
Glenn Jocher 75e8ec323f updates 2019-11-25 11:45:28 -10:00
Glenn Jocher 0245ff9133 updates 2019-11-25 08:26:41 -10:00
Francisco Reveriano 26e3a28bee Update train.py for distributed training (#655) 2019-11-24 22:21:36 -10:00
When attempting to run train.py in a multi-GPU environment I kept getting a runtime error. Passing the find_unused_parameters=True keyword to DistributedDataParallel solved the problem. I first found the solution in https://github.com/pytorch/pytorch/issues/22436 and in the PyTorch tutorials; a minimal sketch of the resulting change follows this entry.

'RuntimeError: Expected to have finished reduction in the prior iteration before starting a new one. This error indicates that your module has parameters that were not used in producing loss. You can enable unused parameter detection by (1) passing the keyword argument find_unused_parameters=True to torch.nn.parallel.DistributedDataParallel; (2) making sure all forward function outputs participate in calculating loss. If you already have done the above two steps, then the distributed data parallel module wasn't able to locate the output tensors in the return value of your module's forward function. Please include the loss function and the structure of the return value of forward of your module when reporting this issue (e.g. list, dict, iterable). '
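The change described in this commit amounts to one extra keyword when wrapping the model for multi-GPU training. Below is a minimal sketch, assuming a standard PyTorch DistributedDataParallel setup; the helper name wrap_model_for_ddp and the local_rank argument are illustrative and not taken from train.py.

```python
# Minimal sketch of the fix described in commit 26e3a28bee: pass
# find_unused_parameters=True when wrapping the model with DistributedDataParallel.
# Assumes torch.distributed.init_process_group() has already been called.
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

def wrap_model_for_ddp(model: torch.nn.Module, local_rank: int) -> DDP:
    """Move the model to its GPU and wrap it for multi-GPU training."""
    model = model.to(local_rank)
    return DDP(
        model,
        device_ids=[local_rank],
        output_device=local_rank,
        # Tolerate parameters that do not contribute to the loss in a given
        # forward pass, avoiding the RuntimeError quoted in the commit message.
        find_unused_parameters=True,
    )
```

The trade-off is a small per-iteration cost, since DDP must traverse the autograd graph to detect the unused parameters, so the flag is worth enabling only when some model outputs genuinely do not feed the loss.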
Glenn Jocher a0ef217842 updates 2019-11-24 20:10:39 -10:00
Glenn Jocher 9b55bbf9e2 updates 2019-11-24 20:08:24 -10:00