Expose underlying model of campaign #184
Comments
Hi @brandon-holt! This exact functionality is already in the works and has also been requested before, e.g. in #78. Would you mind registering your interest in this feature in the other mentioned issue? I would then close this one to avoid duplicates.
Got it, will do! Thanks for the reply, looking forward to this feature!
@AdrianSosic @Scienfitz Will the update allowing for model exposure also include some kind of feature importance analysis, to give us insight into which features/feature values were most useful for the model and how they influence the predictions?
yes |
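Until the feature lands, one model-agnostic way to get this kind of insight is permutation importance applied to any fitted surrogate. The sketch below uses a plain scikit-learn Gaussian process as a stand-in for the campaign's surrogate; the eventual BayBE API may look entirely different.

```python
# Hypothetical sketch: feature importance for a fitted surrogate via
# permutation importance. The GP here is a stand-in, NOT BayBE's model.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.uniform(size=(50, 3))  # 3 features; feature 0 dominates the target
y = 2.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(scale=0.01, size=50)

model = GaussianProcessRegressor(alpha=1e-6).fit(X, y)

# Shuffle each feature column in turn and measure the drop in model score:
# features whose permutation hurts the score most matter most.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

This says nothing about *how* a feature influences predictions (direction, interactions), only how much the model relies on it; partial-dependence or SHAP-style analyses would be needed for the former.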
Hi, is it (or would it be) possible to expose the underlying model of a campaign in order to calculate predicted means and variances for a set of new measurements (not necessarily those recommended by the campaign, but any user-specified measurement that exists within the search space)?
Ideally I'd like to be able to quantify the performance of the model on a set of known measurements as well.
Thanks!
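For illustration, here is a minimal sketch of what exposing the model could enable, using a plain scikit-learn Gaussian process as a stand-in for the campaign's surrogate (the actual BayBE interface for retrieving the model is an assumption here and may differ): predicted means and standard deviations at arbitrary user-specified points, plus an error metric on known measurements.

```python
# Hypothetical sketch: querying an exposed surrogate at user-specified
# points. The GP below stands in for whatever model the campaign exposes.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(42)
X_train = rng.uniform(size=(30, 2))              # known measurements
y_train = np.sin(X_train[:, 0]) + X_train[:, 1]

surrogate = GaussianProcessRegressor(normalize_y=True).fit(X_train, y_train)

# Predicted mean and uncertainty for arbitrary points in the search space,
# not just the campaign's own recommendations.
X_new = rng.uniform(size=(5, 2))
mean, std = surrogate.predict(X_new, return_std=True)

# Quantify model performance on a held-out set of known measurements.
X_test = rng.uniform(size=(10, 2))
y_test = np.sin(X_test[:, 0]) + X_test[:, 1]
rmse = np.sqrt(np.mean((surrogate.predict(X_test) - y_test) ** 2))
print("mean:", mean)
print("std:", std)
print("rmse:", rmse)
```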