I'm getting a max variable size error on a longish recording (~7 hours) that seems to come from a large value of `nBatches`:

```
time 1591.36, pre clustered 48001 / 48215 batches
Error using gpuArray.zeros
Maximum variable size allowed on the device is exceeded.
Error in clusterSingleBatches (line 111)
ccb = gpuArray.zeros(nBatches, 'single');
```

I am using `ops.ntbuff = 64; ops.NT = 32*1024 + ops.ntbuff;`, and if I increase `NT` I start getting out-of-memory errors instead. Does this impose a restriction on total recording duration unless I get a card with more RAM, or is there some way to circumvent this limit?
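For scale, a rough sketch (Python; the batch count is taken from the error message above, and the estimate assumes MATLAB's convention that `zeros(n)` with a scalar argument allocates an n-by-n matrix) of why `ccb` exceeds the device limit:

```python
# Rough estimate of the memory needed by ccb in clusterSingleBatches.
# In MATLAB, gpuArray.zeros(nBatches, 'single') with a scalar argument
# allocates an nBatches-by-nBatches matrix, not a length-nBatches vector.
n_batches = 48215            # batch count reported in the error message
bytes_per_single = 4         # MATLAB 'single' precision is 4 bytes
ccb_bytes = n_batches ** 2 * bytes_per_single
print(f"ccb would need ~{ccb_bytes / 1e9:.1f} GB")  # ~9.3 GB
```

So the batch-similarity matrix alone would need roughly 9.3 GB of GPU memory, and since `nBatches` scales linearly with recording length for a fixed `NT`, this cost grows quadratically with duration.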
This is with 64 channels, all tetrodes. As far as I can see, the ideal solution for us would be to sort just 4 channels at a time, which currently doesn't work; this seems related to #25.