
[ Init ] Verify optimizer during initialization according to execution mode. #2853

Merged
merged 1 commit into nnstreamer:main on Jan 3, 2025

Conversation

EunjuYang
Contributor

Abstract

Currently, our code requires an optimizer to be set up even when developers intend to execute the network in INFERENCE mode. To address this issue, this PR updates the initialize method to validate the presence of an appropriate optimizer based on the specified execution mode.

/// @todo: initialize should take a mode and check if mode is train but
/// optimizer is not given, make it as a hard error

Additionally, earlier unit tests did not account for varying execution modes, leading to gaps in checking optimizer configurations. Consequently, this PR includes modifications to ensure comprehensive coverage through updated unit test cases.

Self evaluation:

Build test: [x] Passed [ ] Failed [ ] Skipped
Run test: [x] Passed [ ] Failed [ ] Skipped

…tion mode.

- Updates the testing scope of optimizer configuration from train()
  to initialize()
- Adds conditions to verify optimizer settings and modifies several
  unit tests running under ExecutionMode::TRAIN without prior
  optimizer setup.

Signed-off-by: Eunju Yang <[email protected]>
Member

@DonghakPark DonghakPark left a comment


Nice Work!! LGTM!


if (!opt) {
  ml_loge("Optimizer should be set before initialization for training.");
  return ML_ERROR_INVALID_PARAMETER;
}
Member


Isn't ML_ERROR_TRY_AGAIN better than ML_ERROR_INVALID_PARAMETER?

typedef enum {
  ML_ERROR_NONE = 0,                    /**< Success! */
  ML_ERROR_INVALID_PARAMETER = -EINVAL, /**< Invalid parameter */
  ML_ERROR_TRY_AGAIN = -EAGAIN, /**< The pipeline is not ready, yet (not negotiated, yet) */
  ML_ERROR_UNKNOWN = _ERROR_UNKNOWN,         /**< Unknown error */
  ML_ERROR_TIMED_OUT = (_ERROR_UNKNOWN + 1), /**< Time out */
  ML_ERROR_NOT_SUPPORTED = (_ERROR_UNKNOWN + 2), /**< The feature is not supported */
  ML_ERROR_PERMISSION_DENIED = -EACCES, /**< Permission denied */
  ML_ERROR_OUT_OF_MEMORY = -ENOMEM,     /**< Out of memory (Since 6.0) */
} ml_error_e;

Contributor Author


Thank you for the comment. I found that ML_ERROR_TRY_AGAIN indicates a runtime error, which does not fit this optimizer-setup case. While I agree that ML_ERROR_INVALID_PARAMETER may not be a perfect fit for this specific error message, the error code was chosen to match our previous optimizer validation. FYI, I have attached the snippet below:

if (!opt) {
  ml_loge("Cannot train network without optimizer.");
  return ML_ERROR_INVALID_PARAMETER;
}

Collaborator

@jijoongmoon jijoongmoon left a comment


LGTM

@jijoongmoon jijoongmoon merged commit 252b2df into nnstreamer:main Jan 3, 2025
23 checks passed