You seem to be right to me, and the conf seems like the way to go, though it sounds like there was some good reason to avoid it. @imatiach-msft, what did you mean by "more reliable"?
When spark.executor.cores is explicitly set, multiple executors from the same application may be launched on the same worker, provided the worker has enough cores and memory. Otherwise, each executor grabs all the cores available on the worker by default.
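For illustration, a hypothetical submission against a standalone cluster with 16-core workers: setting spark.executor.cores=4 lets the scheduler pack up to four executors of this application onto one worker (memory permitting), whereas leaving it unset gives each executor all 16 cores.

```shell
# Sketch only: example resource settings, not taken from the issue.
# With 16-core workers, this can place up to 4 executors per worker.
spark-submit \
  --master spark://master:7077 \
  --conf spark.executor.cores=4 \
  --conf spark.executor.memory=4g \
  my-app.jar
```

In that packed case, a per-executor core count read from the physical host would be 16, four times the 4 cores the executor actually owns.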
When calculating the number of cores for each executor, LightGBMUtils currently uses Java's Runtime API, which I think returns all CPU cores on the physical host (https://github.com/Azure/mmlspark/blob/master/src/lightgbm/src/main/scala/LightGBMUtils.scala#L127). The function comment says "this is more reliable than getting value from conf", but shouldn't we be using the value from the conf here? @imatiach-msft
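To make the discrepancy concrete, here is a minimal sketch of the two approaches. The helper name and the fallback behavior are assumptions for illustration, not the actual LightGBMUtils code; only the conf key spark.executor.cores and Runtime.getRuntime.availableProcessors() come from the discussion above.

```scala
// Sketch: prefer spark.executor.cores from the Spark conf, and only fall
// back to the JVM Runtime API (which reports cores visible to the JVM on
// the physical host) when the conf is unset.
object CoresPerExecutor {
  // `conf` stands in for the Spark conf as a plain key-value map
  // (hypothetical signature, for illustration only).
  def numCoresPerExecutor(conf: Map[String, String]): Int =
    conf.get("spark.executor.cores") match {
      case Some(n) => n.toInt
      case None    => Runtime.getRuntime.availableProcessors()
    }
}
```

With spark.executor.cores=4 on a 16-core worker running several executors, the conf-based value (4) matches what each executor actually owns, while the Runtime API would report 16 for every executor on that host.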