
S3 doesn't work if AWS metadata is blocked on kubernetes. #4122

Closed
empath-nirvana opened this issue Apr 19, 2024 · 1 comment · Fixed by #4151

Comments

@empath-nirvana

Expected Behavior

This is the issue I'm experiencing; it was closed, but the problem still exists in the latest version.

#3492

It should be able to pull the region and anything else it needs from well-known environment variables, but it doesn't (for example, I have AWS_REGION set as an environment variable and it still doesn't work).

Current Behavior

Steps to reproduce

Configure as described in the linked issue, but with the AWS metadata endpoint blocked on the node.

Specifications

  • Version:
  • Platform: EKS
  • Subsystem:

Possible Solution

Initialize the S3 client so that it falls back to well-known AWS environment variables such as AWS_REGION when the metadata endpoint isn't available.
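
As a rough illustration of that idea (this is a hypothetical sketch, not Feast's actual code; the class, method, and parameter names are made up), with the AWS SDK for Java v2 you can simply omit the explicit region when none is configured and let the SDK's default region provider chain resolve it from AWS_REGION / AWS_DEFAULT_REGION, shared config, and only then the instance metadata service:

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.S3ClientBuilder;

// Hypothetical helper: builds the registry S3 client, honoring an explicit
// awsRegion when one is configured and otherwise letting the SDK resolve the
// region on its own (env vars, shared config, then instance metadata).
public class RegistryS3ClientFactory {
    public static S3Client build(String configuredAwsRegion) {
        S3ClientBuilder builder = S3Client.builder();
        if (configuredAwsRegion != null && !configuredAwsRegion.isBlank()) {
            builder.region(Region.of(configuredAwsRegion));
        }
        // No explicit region set: the builder falls back to the default region
        // provider chain, so AWS_REGION on the pod is enough even when the
        // metadata endpoint is blocked.
        return builder.build();
    }
}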

@tokoko
Collaborator

tokoko commented Apr 20, 2024

Thanks for the issue. Actually, the problem here is that Feast treats the awsRegion config as mandatory; this shouldn't affect any other AWS environment variable. Just a note: until an actual fix lands, you can provide the region in the configuration like this:

registry: s3://example_feast_registry
awsRegion: eu-central-1
