Serve robots.txt to prevent crawlers from accessing UI #97
Here, add a `robots.txt` that denies all crawler access to UI installations. If a River UI is exposed publicly without authentication, that's already bad and a security problem, but we don't have to make it worse by allowing crawlers to find it.
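For reference, a blanket deny-all `robots.txt` amounts to the standard two-line rule (shown here for illustration, not necessarily byte-for-byte what ships in this PR):

```
User-agent: *
Disallow: /
```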
We'll add a separate `robots.txt` in the demo that allows basic access to top-level pages, while denying crawlers the ability to walk through the potentially tens of thousands of jobs in `/jobs/`.
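A demo variant along these lines (the exact rule set is an assumption for illustration) leaves top-level pages crawlable by default and only blocks the job listings:

```
User-agent: *
Disallow: /jobs/
```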
There were a number of plausible ways to go about this. I ended up putting in an embedded file system that can pull static files into the Go program easily, and which could be reused for future static files in case they're needed. It may be a little more than we need right now, but it wasn't much harder to do than serving a one-off static file.
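As a rough sketch of what an embedded file system for this looks like (the package name, file layout, and `Handler` function below are hypothetical, not the PR's actual code), Go's `embed` package compiles the static files straight into the binary:

```go
package public

import (
	"embed"
	"net/http"
)

// staticFiles embeds static assets (currently just robots.txt) into the
// compiled binary, so nothing needs to exist on disk at runtime.
//
//go:embed robots.txt
var staticFiles embed.FS

// Handler serves the embedded files from the root path, so a request for
// /robots.txt returns the embedded robots.txt.
func Handler() http.Handler {
	return http.FileServer(http.FS(staticFiles))
}
```

Mounting it on whatever mux the UI already uses would then be a one-liner, e.g. `mux.Handle("/robots.txt", public.Handler())`, and adding future static files is just another pattern in the `//go:embed` directive.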