Replies: 2 comments 2 replies
Probably not related, but I had a similar (infinite recursion) error a few years back when I had a shortcode that automatically generated an xref to another page, and in a corner case it failed because two pages referred to each other and each was waiting for the other to finish building. The way I'd try to triangulate the error is:
Alternatively, you can try to remove stuff from the templates (like head.html) to see if anything there is causing the build to fail.
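Roughly what I mean (an illustrative sketch only, not your actual templates): comment out one suspect partial at a time in the base template and rebuild, then move on to the next one:

```go-html-template
{{/* layouts/_default/baseof.html (sketch): disable one suspect partial at a time */}}
<head>
  {{/* partial "head.html" . */}}
  {{/* ^ temporarily commented out; if the build now succeeds, the problem is somewhere in head.html */}}
</head>
```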
Probably also not related, but I had a similar (infinite recursion) error a while back when I used the resources.GetRemote function inside a shortcode and the remote resource didn't respond in a timely fashion.
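If it helps, this is roughly the pattern I'd use now so a slow or unreachable remote fails with a clear error instead of stalling the build (a sketch only; the shortcode name and URL handling are made up):

```go-html-template
{{/* layouts/shortcodes/remote-include.html (hypothetical shortcode, for illustration) */}}
{{ $url := .Get 0 }}
{{ with resources.GetRemote $url }}
  {{ with .Err }}
    {{/* Surface the fetch failure instead of letting the build hang on it. */}}
    {{ errorf "remote-include: unable to fetch %q: %s" $url . }}
  {{ else }}
    {{ .Content | safeHTML }}
  {{ end }}
{{ else }}
  {{ errorf "remote-include: no resource returned for %q" $url }}
{{ end }}
```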
A plea for help, as I'm getting exasperated by my current project site (a script-built auto-documentation system that pulls configuration data from the platform's core DB and writes it out as Markdown, along with some analysis, linking, and grouping of related data, etc.).
I last had something close to this number of files building successfully a year ago, using a lightly modified Docsy theme (https://github.com/ouhft/docsy/tree/ouhft) with Hugo 0.120.x. Significantly (and I've posted about this on the Hugo forum before), the memory requirements for such a build meant most machines couldn't handle it: the only two that could had 128GB of RAM and needed an additional 200GB of swap on top. So I'd been really looking forward to testing the memory improvements in Hugo 0.124 onwards... only those caused other complications that broke Docsy, and with the 0.10.0 release (well done Chalin et al!) we're now at a stage where I hoped I'd get a working build.
But no.
Having finally removed all the bad link references that I'd managed to auto-mis-generate, the build is still failing, mostly complaining about:
I've tried increasing the timeout from the default of 30s to 180s, and then to 360s... this hasn't changed the outcome, only increased the overall time to failure.
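For clarity, the setting I've been bumping is the site-wide timeout in the Hugo configuration, along these lines (value shown is the 360s run):

```toml
# hugo.toml — site-wide timeout for template execution / page rendering.
# Default is "30s"; the runs described here used 180s and then 360s.
timeout = "360s"
```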
To give you some more detail - this is the output from the 360s timeout run:
For context, there are a handful of Markdown files that are > 100MB in size (typically index pages).
And, for contrast in time to failure, with the default 30s timeout we get this output:
... which to me reads the same, just with smaller delays.
Can anyone recommend some strategies to help me diagnose where the root issue lies, please?