
ARROW-243 Use libhdfs3 #108

Closed
wants to merge 1 commit into from

Conversation

@rhl- commented Jul 26, 2016

This allows you to swap out libhdfs for libhdfs3. It seems to work locally. There are some issues with the existing HDFS IO unit tests: they don't necessarily configure themselves properly based on the configuration of the Hadoop cluster (for example, disabling short-circuit reads). The previous PR appears to have hit some network hiccups; I'm retrying to see whether this attempt makes the Travis CI build pass.
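Not part of the PR description, but for context: both libraries expose the same libhdfs-compatible C API, so a compile/link-time switch mostly amounts to picking which header and library the build uses. A minimal sketch follows; ARROW_USE_LIBHDFS3 is a hypothetical define, not the actual CMake option added in this PR.

```cpp
// Hypothetical compile-time switch between the two HDFS clients.
// ARROW_USE_LIBHDFS3 is an assumed define set by the build system.
#ifdef ARROW_USE_LIBHDFS3
#include <hdfs/hdfs.h>  // libhdfs3: native C++ client, no JVM required
#else
#include <hdfs.h>       // libhdfs: JNI-based client shipped with Hadoop
#endif

// Both libraries implement the same C API, so downstream calls
// (hdfsConnect, hdfsOpenFile, hdfsRead, ...) are unchanged either way.
```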

…libhdfs3.

added an alternate implementation of ReadAt
@wesm (Member) commented Jul 26, 2016

You can git commit --amend (with no changes) to bump the commit hash and force-push to trigger a Travis build; you do not need to open a new PR. Moving my comment from #106 here:

Can you create a JIRA (on issues.apache.org) about this and add it (ARROW-XXX: ...) to the PR title?

I'm more in favor of adding a runtime (vs. compile/link-time) option to switch between libhdfs and libhdfs3. If we can avoid requiring any of these libraries at link-time that would be ideal. Note also that libhdfs3 features a transitive LGPL dependency (GNU SASL).
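For context only (not code from this PR): one way a runtime option can avoid any link-time requirement is to dlopen whichever client library was requested and resolve the hdfs* symbols from the handle. The library names and preference flag below are assumptions.

```cpp
// Sketch of runtime selection: load libhdfs or libhdfs3 with dlopen so
// that neither library has to be present at link time.
#include <dlfcn.h>
#include <cstdio>
#include <utility>

void* OpenHdfsDriver(bool prefer_libhdfs3) {
  const char* candidates[2] = {"libhdfs.so", "libhdfs3.so"};
  if (prefer_libhdfs3) {
    std::swap(candidates[0], candidates[1]);
  }
  for (const char* name : candidates) {
    if (void* handle = dlopen(name, RTLD_NOW | RTLD_GLOBAL)) {
      // hdfsConnect, hdfsOpenFile, hdfsPread, ... would then be looked up
      // from this handle via dlsym().
      return handle;
    }
  }
  std::fprintf(stderr, "could not load libhdfs or libhdfs3\n");
  return nullptr;
}
```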

@rhl- changed the title from "added auto download code, and cmake support for switching libhdfs to …" to "ARROW-243 Use libhdfs3" on Jul 26, 2016
@rhl- (Author) commented Jul 26, 2016

I've gone ahead and created the JIRA issue.

I'm not sure what is wrong with the Google Test libraries on two of the builders. I'll have to take a closer look.

I'll look into making this change for libhdfs3.

Do you have any idea whether the implementation of ReadAt is what you expected? (A sketch of the fallback approach follows this comment.)

Do the Travis builds test the HDFS code?
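As a sketch of the kind of "alternate ReadAt" the commit message mentions (not the actual diff): when positioned reads via hdfsPread are unavailable, ReadAt can be emulated by a locked seek followed by a read, so concurrent callers do not race on the shared file offset. The helper name and lock parameter below are illustrative.

```cpp
#include <cstdint>
#include <mutex>
#include <hdfs.h>  // libhdfs / libhdfs3 compatible C API

// Illustrative fallback only: emulate a positioned read with seek + read.
// Returns the number of bytes read, or -1 on error, mirroring hdfsRead.
int64_t ReadAtFallback(std::mutex& offset_lock, hdfsFS fs, hdfsFile file,
                       int64_t position, int32_t nbytes, void* buffer) {
  std::lock_guard<std::mutex> guard(offset_lock);
  if (hdfsSeek(fs, file, position) != 0) {
    return -1;  // seek failed
  }
  return hdfsRead(fs, file, buffer, nbytes);
}
```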

@asfgit closed this in cfde460 on Dec 19, 2016
wesm pushed a commit to wesm/arrow that referenced this pull request on Sep 2, 2018
Author: Uwe L. Korn <uwelk@xhochy.com>

Closes apache#108 from xhochy/parquet-620 and squashes the following commits:

da122ad [Uwe L. Korn] Ensure metadata is written only once

Change-Id: I7653597fdf69c961545d6c978fdc1367267adee7