
Unable to reach HA hdfs nameservice with the elasticsearch-repository-hdfs-2.3.0 plugin #8

Open
surekhabalaji opened this issue Mar 7, 2017 · 2 comments

Comments

@surekhabalaji

We have installed the elasticsearch-repository-hdfs plugin on an Elasticsearch 2.3.1 node and are trying to create an HDFS repository with the following command.

PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://namenode:8020/",
    "path": "elasticsearch/respositories/my_hdfs_repository",
    "conf.dfs.client.read.shortcircuit": "true"
  }
}

This command works well when using namenode:port. Our environment has High Availability enabled with an HDFS nameservice.

When we try to create the HDFS repository using the nameservice ("uri": "hdfs://nameservice/"), we get an UnknownHostException: Cannot create Hdfs file-system for uri [hdfs://SA20HDPA]; nested: IllegalArgumentException[java.net.UnknownHostException: SA20HDPA]; nested: UnknownHostException[SA20HDPA].
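For reference, this is roughly what we would expect an HA-aware repository definition to look like, assuming the plugin passes the conf.* settings through to the Hadoop client the same way it does for dfs.client.read.shortcircuit. SA20HDPA is our nameservice (taken from the error above); the nn1/nn2 labels and the namenode1/namenode2 hostnames are placeholders for our actual namenodes.

PUT _snapshot/my_hdfs_repository
{
  "type": "hdfs",
  "settings": {
    "uri": "hdfs://SA20HDPA/",
    "path": "elasticsearch/respositories/my_hdfs_repository",
    "conf.dfs.nameservices": "SA20HDPA",
    "conf.dfs.ha.namenodes.SA20HDPA": "nn1,nn2",
    "conf.dfs.namenode.rpc-address.SA20HDPA.nn1": "namenode1:8020",
    "conf.dfs.namenode.rpc-address.SA20HDPA.nn2": "namenode2:8020",
    "conf.dfs.client.failover.proxy.provider.SA20HDPA": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
  }
}

Whether the 2.3.0 plugin actually uses these settings to resolve the nameservice is exactly what we are unsure about.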

Any help is appreciated.

@n0mik0s

n0mik0s commented Mar 16, 2018

Same question here. Is there any possibility to use the nameservice instead of the namenode in the snapshot configuration?

UPD: We use Python to connect to the Hadoop cluster. Python has a brilliant try-except-else statement, which suits our needs in this particular case...

@n0mik0s
Copy link

n0mik0s commented Jun 4, 2018 via email
