hmm, I guess travis configuration needs to be changed to build with jdk 8
Codecov Report
@@            Coverage Diff             @@
##           master     #242      +/-   ##
==========================================
+ Coverage    90.4%   90.46%   +0.05%
==========================================
  Files           5        5
  Lines         323      325       +2
  Branches       49       51       +2
==========================================
+ Hits          292      294       +2
  Misses         31       31
Would be wonderful if this could be merged
For now I'm running this with jitpack :(
It is currently failing codecov ...
Is there any timeline by which this will be merged and released? We started facing this issue in production, because of
Code coverage is now passing. The added test is pretty lame; it's just there to get coverage. Probably the right thing to do is to start testing against Spark 2.2, but that requires reconfiguring Travis to use Java 1.8, and I don't have permissions for that.
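For reference, the Spark side of that change is small in sbt terms. Here is a sketch with placeholder version numbers, not the repo's actual build file (Spark 2.2 itself dropped Java 7 support, which is why the Travis JDK has to change too):

```scala
// Hypothetical build.sbt fragment: build and test against Spark 2.2.0.
// Spark 2.2 requires Java 8, so the CI JDK must be switched as well.
val sparkVersion = "2.2.0"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-sql"  % sparkVersion % "provided"
)
```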
Isn't all the Travis configuration in .travis.yml?
(Force-pushed 541093a to d62d006.)
Oh, good point. I actually meant the code coverage config, but taking a closer look at the build process, I guess that is automatic from Travis. Let's see what coverage I get from the latest changes.
Alright, thanks.
Just checking if this is expected to be released as 3.3.0 some time today? Thanks
Just curious, when is 3.3 expected to be released?
Hi, I have a naive question about the `InternalRow` to `Row` conversion. Since this conversion happens for each row, can it cause performance degradation with millions of records, compared to spark-avro 3.2.1 (with Spark 2.1.0)?
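For background on what that conversion involves, here is a minimal sketch of the pattern using Spark's `CatalystTypeConverters` (the class name and structure below are illustrative, not the PR's actual code). The converter is built once per writer, but every record still pays for one converter call and one `Row` allocation, which is the per-row overhead the question is about:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.catalyst.{CatalystTypeConverters, InternalRow}
import org.apache.spark.sql.types.StructType

// Illustrative writer that accepts Spark 2.2's InternalRow by converting it
// back to the external Row representation before serializing.
class RowConvertingWriter(schema: StructType) {

  // Built once per writer, not once per row.
  private val toScala = CatalystTypeConverters.createToScalaConverter(schema)

  // Spark 2.2+ code path (SPARK-19085): the framework hands us an InternalRow.
  def write(internalRow: InternalRow): Unit = {
    // Per-record cost: one converter call plus one Row allocation.
    write(toScala(internalRow).asInstanceOf[Row])
  }

  // Pre-2.2 code path: the framework hands us an external Row directly.
  def write(row: Row): Unit = {
    // ... serialize `row` to Avro here (elided in this sketch) ...
  }
}
```

The conversion adds an allocation per record rather than an extra pass over the data, so at millions of rows the overhead is real but usually small next to the cost of the Avro serialization itself.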
When will 3.3 be released? Thank you.
@squito @marcintustin @ritesh-dineout @febinsathar spark-avro 4.0.0 is released :)
This adds support for Spark 2.2.0. Primarily, this addresses the API change introduced by SPARK-19085 (apache/spark@b3d3962). It fixes the issue in the simplest way: it copies the old conversion from `InternalRow` to `Row`. A better implementation would do something more efficient with `InternalRow`.

This keeps both `write()` methods, so it should be compatible with all Spark 2+ versions.

It also fixes some dependency conflicts when running tests -- it seems curator pulls in a version of guava that conflicts with hadoop's, but we don't actually need curator for tests.
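In sbt terms, the curator exclusion described above would look roughly like the following. This is a sketch with placeholder coordinates and version, not the project's actual build settings:

```scala
// Hypothetical build.sbt fragment: keep hadoop-client for tests, but exclude
// curator so its guava version cannot conflict with hadoop's.
libraryDependencies += ("org.apache.hadoop" % "hadoop-client" % "2.6.5" % "test")
  .excludeAll(ExclusionRule(organization = "org.apache.curator"))
```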
Tested by running unit tests locally.
Fixes #240