In [1]: from agasc.supplement.magnitudes import star_obs_catalogs
...:
In [2]: stars_obs_v2 = star_obs_catalogs.get_star_observations()
...:
HASH: Out of overflow pages. Increase page size
---------------------------------------------------------------------------
error Traceback (most recent call last)
<ipython-input-2-b3fe61c90950> in <module>
----> 1 stars_obs_v2 = star_obs_catalogs.get_star_observations()
~/git/agasc/agasc/supplement/magnitudes/star_obs_catalogs.py in get_star_observations(start, stop, obsid)
23
24 with commands.conf.set_temp('commands_version', '2'):
---> 25 catalogs = commands.get_starcats(start=start, stop=stop, obsid=obsid)
26 observations = Table(commands.get_observations(start=start, stop=stop, obsid=obsid))
27 for cat in catalogs:
~/git/kadi/kadi/commands/observations.py in get_starcats(start, stop, obsid, scenario, cmds, as_dict)
365 starcat = starcat_dict
366
--> 367 starcats_db[db_key] = starcat_dict
368
369 starcats.append(starcat)
~/miniconda3/envs/ska3/lib/python3.8/shelve.py in __setitem__(self, key, value)
123 p = Pickler(f, self._protocol)
124 p.dump(value)
--> 125 self.dict[key.encode(self.keyencoding)] = f.getvalue()
126
127 def __delitem__(self, key):
error: cannot add item to database
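For context on where the error originates: `shelve` is a thin layer over whichever `dbm` backend Python finds, and the overflow-pages message comes from that backend rather than from `shelve` or the kadi code. A quick way to check which backend a given shelf file is using (the path here is a throwaway stand-in, not the real starcats.db):

```python
import dbm
import os
import shelve
import tempfile

# Create a small shelf, then ask the dbm module which backend
# implementation is actually backing the file.  The "HASH: Out of
# overflow pages" message in the traceback likely comes from the old
# BSD-style hash database behind dbm.ndbm; the gnu and dumb backends
# do not emit it.
path = os.path.join(tempfile.mkdtemp(), "demo_shelf")
with shelve.open(path) as db:
    db["obsid_1"] = {"idx": 1}

print(dbm.whichdb(path))  # e.g. 'dbm.ndbm', 'dbm.gnu', or 'dbm.dumb'
```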
I spent a good while doing a deep dive into this problem; debugging is slow because the error only shows up after a couple of minutes of churning.
It has nothing to do with memory per se: I was able to replicate the problem with the activity profiler running, and the process never used more than 1.2 GB before crashing.
Googling the error produces a number of hits but I couldn't find anything useful. Maybe I didn't click the right link or spend enough time digging into the result pages.
If you break the shelf creation process into smaller parts (say 5 years at a time) and exit Python after each one, then I haven't seen a problem.
This would suggest that breaking a large query (e.g. get_starcats()) into chunks, with the shelve file closed and re-opened between them, might help. MADDENINGLY, this didn't work and I got the same error at the same place. So it looks like there is some kind of leak in shelve that causes problems even when the shelf file is closed. Go figure.
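For reference, the close-and-reopen experiment looked roughly like this (a sketch with a throwaway path; the per-chunk write is a stand-in for storing starcat dicts from get_starcats()):

```python
import os
import shelve
import tempfile

path = os.path.join(tempfile.mkdtemp(), "starcats_cache")

# In-process chunking: close the shelf between chunks and re-open it.
# In practice this still hit the same dbm error at the same point,
# which is why the working workaround exits Python between chunks.
for chunk in range(3):  # stand-in for 5-year date ranges
    db = shelve.open(path)
    try:
        # real code would store one starcat dict per obsid here
        db[f"chunk_{chunk}"] = {"n": chunk}
    finally:
        db.close()  # flushed and closed, yet the leak persists

with shelve.open(path) as db:
    print(len(db))  # 3
```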
So this is in a somewhat unhappy place. The workaround is incrementally building up the starcats.db cache file in 5-year chunks. Eventually I'll do something better.
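A sketch of that workaround, assuming a worker that a fresh Python process runs per chunk (the worker body here is a stand-in for looping over get_starcats() for one 5-year range and storing each starcat):

```python
import os
import shelve
import subprocess
import sys
import tempfile

# Throwaway path standing in for the real starcats.db cache file.
db_path = os.path.join(tempfile.mkdtemp(), "starcats_cache")

# One (start, stop) pair per chunk; the real build uses 5-year
# date ranges.  These values are illustrative.
chunks = [("2000:001", "2005:001"), ("2005:001", "2010:001")]

# Worker run in a *fresh* Python process per chunk.  The real worker
# would loop over commands.get_starcats(start=start, stop=stop) and
# store one entry per starcat; here it just writes a marker entry.
worker = """
import shelve, sys
db_path, start, stop = sys.argv[1:4]
with shelve.open(db_path) as db:
    db[f"{start}-{stop}"] = {"start": start, "stop": stop}
"""

for start, stop in chunks:
    # Exiting Python between chunks is what avoids the dbm error;
    # closing/re-opening the shelf in-process was not enough.
    subprocess.run(
        [sys.executable, "-c", worker, db_path, start, stop],
        check=True,
    )

with shelve.open(db_path) as db:
    print(sorted(db))  # ['2000:001-2005:001', '2005:001-2010:001']
```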
As noted in sot/agasc#135, the same writeup and workaround are cross-posted there.