
Log warnings for numIterations * miniBatchFraction < 1.0 #13265

Closed
wants to merge 2 commits

Conversation

Hydrotoast

What changes were proposed in this pull request?

Add a warning log for the case that `numIterations * miniBatchFraction < 1.0` during gradient descent. If the product of those two numbers is less than 1.0, then not all training examples can be used during optimization. To put this concretely, suppose that `numExamples = 100`, `miniBatchFraction = 0.2` and `numIterations = 3`. Then each of the 3 iterations samples approximately 20 examples. In the best case, every sampled example is unique across iterations, so at most 60 of the 100 examples are ever used.
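For concreteness, the coverage arithmetic can be sketched in plain Python (a standalone illustration, not Spark code; the variable names merely mirror the GradientDescent parameters):

```python
# Standalone illustration of how much of the training set mini-batch
# gradient descent can touch when numIterations * miniBatchFraction < 1.0.
num_examples = 100
mini_batch_fraction = 0.2
num_iterations = 3

# Each iteration samples roughly this many examples.
per_iteration = mini_batch_fraction * num_examples  # 20.0

# Best case: every sampled example is distinct across iterations,
# so coverage is capped at numIterations * miniBatchFraction of the data.
best_case = min(num_iterations * per_iteration, num_examples)  # 60.0

# Expected number of distinct examples touched, assuming each iteration
# includes each example independently with probability miniBatchFraction.
expected_unique = (1 - (1 - mini_batch_fraction) ** num_iterations) * num_examples

print(per_iteration)               # 20.0
print(best_case)                   # 60.0
print(round(expected_unique, 1))   # 48.8
```

So even in the best case only 60% of the data is seen, and in expectation under independent sampling it is closer to 49%, which is what the new warning is meant to surface.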

This may be counter-intuitive to most users, and it led to an issue during the development of another Spark ML model: zhengruifeng/spark-libFM#11. If a user actually does not require the full training data set, it would be easier and more intuitive to use `RDD.sample` instead.

How was this patch tested?

`build/mvn -DskipTests clean package` build succeeds

@srowen
Member

srowen commented May 24, 2016

Jenkins test this please

@@ -197,6 +197,11 @@ object GradientDescent extends Logging {
"< 1.0 can be unstable because of the stochasticity in sampling.")
}

if (numIterations * miniBatchFraction < 1.0) {
logWarning("Not all examples will be used if numIterations * miniBatchFraction " +
Member

Use string interpolation to add the actual values into this warning message

Author
Done

@SparkQA

SparkQA commented May 24, 2016

Test build #59233 has finished for PR 13265 at commit c7dd4c6.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@srowen
Member

srowen commented May 25, 2016

OK merged to master/2.0

asfgit pushed a commit that referenced this pull request May 25, 2016

Author: Gio Borje <[email protected]>

Closes #13265 from Hydrotoast/master.

(cherry picked from commit 589cce9)
Signed-off-by: Sean Owen <[email protected]>
@asfgit asfgit closed this in 589cce9 May 25, 2016
3 participants