Get Microsoft.Spark and Microsoft.Spark.Worker assembly version information #715
Just curious, why have we chosen `numPartitions` as 10 here?
Any number is fine; it's just a safe default for local runs and small clusters. Do you have a recommendation for what value we should use?
No, 10 is fine; I was just wondering if there was a specific reason.
Do we want to add a test covering this since it is a public API?
Also, I am wondering whether asking for `numPartitions` is really necessary for finding the version from the user's perspective. Would having it as a constant inside the function pose a problem?
What would be the pros and cons of hardcoding the number instead?
The pro, I think, would be usability: it might be hard for a user to intuitively understand what number of partitions to enter and what it has to do with getting version information. The con, I guess, is not being able to trigger every worker node, so if some nodes have a different worker version on them, that would go uncaught. But that could happen even when taking `numPartitions` as an argument, so I'm not sure what the benefit there is, unless we document it in more depth to advise the user of the importance of estimating a 'correct' `numPartitions` value, maybe explain in detail what "best effort" means, etc. Thoughts @imback82?
Maybe we should name this `AssemblyVersion` / `AssemblyVersionInfo` / `VersionInfo`, or something unique to .NET, just in case Spark ever adds a method called `Version`?
How about making this an extension method under an `Experimental` namespace?
Keep it in the same `Microsoft.Spark` project, just in a different namespace, right?
Moved to `Microsoft.Spark.Experimental.Sql`.
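For context, the opt-in extension-method approach discussed above could look roughly like this. This is only a sketch: the namespace matches the comment, but the class and method names (`SparkSessionExtensions`, `GetAssemblyVersion`) are illustrative, not necessarily what the PR uses.

```csharp
using System.Reflection;
using Microsoft.Spark.Sql;

namespace Microsoft.Spark.Experimental.Sql
{
    public static class SparkSessionExtensions
    {
        // Reports the version of the Microsoft.Spark assembly loaded
        // in the driver process, via the assembly that defines SparkSession.
        public static string GetAssemblyVersion(this SparkSession session) =>
            typeof(SparkSession).Assembly.GetName().Version.ToString();
    }
}
```

Because it lives in a separate namespace, callers opt in explicitly with `using Microsoft.Spark.Experimental.Sql;` before calling `spark.GetAssemblyVersion()`, which keeps the experimental surface out of the default API.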
How about the assembly version on the driver?
Which assembly version on the driver? We are getting the `Microsoft.Spark` assembly version on the driver, and the `Microsoft.Spark.Worker` version on whichever machines the Spark executors spin up on.
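The worker-side half of that can be sketched as follows, under stated assumptions: `spark` is an existing `SparkSession`, and the PR's actual collection code may differ. The idea is to spread a tiny job over `numPartitions` so tasks land on the executors, and have each task report the entry assembly (the `Microsoft.Spark.Worker` executable) version it runs under — "best effort", since nothing guarantees every node gets a task.

```csharp
using System;
using System.Linq;
using System.Reflection;
using Microsoft.Spark.Sql;

// Illustrative sketch, not the PR's code.
int numPartitions = 10;

// The UDF body executes inside Microsoft.Spark.Worker on the executors,
// so GetEntryAssembly() resolves to the worker executable there.
Func<Column, Column> workerVersion = Functions.Udf<long, string>(
    _ => Assembly.GetEntryAssembly()?.GetName().Version?.ToString());

string[] versions = spark
    .Range(0, numPartitions, 1, numPartitions)
    .Select(workerVersion(Functions.Col("id")))
    .Distinct()
    .Collect()
    .Select(row => row.GetAs<string>(0))
    .ToArray();
```

If `versions` ever contains more than one entry, some executors are running a different worker build than others, which is exactly the mismatch this API helps surface.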
We can add a BuildDate (by getting the assembly file's creation date) if we think it would be useful/nice to have.
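One way to derive that BuildDate, as suggested, is from the assembly file's timestamp on disk. A minimal sketch, with the caveat that the file creation time reflects when the file was written on that machine (e.g., when the package was restored), not necessarily when it was compiled:

```csharp
using System;
using System.IO;
using System.Reflection;
using Microsoft.Spark.Sql;

// Illustrative: pair the assembly version with the file's creation time
// as an approximate build date.
Assembly assembly = typeof(SparkSession).Assembly;
string version = assembly.GetName().Version.ToString();
DateTime buildDate = File.GetCreationTimeUtc(assembly.Location);

Console.WriteLine($"Microsoft.Spark {version}, built (approx.) {buildDate:u}");
```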
We can iterate on this as we get feedback.