Aibolit API. Use Cases. ML pipeline. Overview. Discussion. #556
First attempt, let me know if it's not the right format: In general, I'd like feature extraction to be customizable. Right now it is set globally in config.py. If I may propose a way of implementing it: have the text -> feature_vector extraction included in the model training and inference pipeline. A model object would store information about its features, and once a text is passed in, it extracts them automatically and passes the vector to the ML model.
Some use cases where this would be useful/necessary:
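A minimal sketch of what such a model object could look like. All names here (`Model`, `line_count`, `char_count`) are illustrative, not aibolit's actual API, and the "training" step is a placeholder; the point is only that the text -> feature_vector step lives inside `fit` and `predict` instead of a global config.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

# Illustrative feature extractors (stand-ins for real code metrics).
def line_count(text: str) -> float:
    return float(len(text.splitlines()))

def char_count(text: str) -> float:
    return float(len(text))

@dataclass
class Model:
    # The model stores its own feature set instead of reading config.py.
    features: Dict[str, Callable[[str], float]]
    bias: float = 0.0

    def extract(self, text: str) -> List[float]:
        # text -> feature_vector happens inside the model
        return [fn(text) for fn in self.features.values()]

    def fit(self, texts: List[str], labels: List[float]) -> "Model":
        _ = [self.extract(t) for t in texts]  # extraction is part of fit
        # Placeholder "training": just remember the mean label.
        self.bias = sum(labels) / len(labels)
        return self

    def predict(self, text: str) -> float:
        _ = self.extract(text)  # the same extraction runs at inference time
        return self.bias
```

With this shape, swapping the feature set is just constructing a model with a different `features` dict; no global state is touched.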
The problem is that we have certain functionality tied to the current interface, so we cannot pass a text or a list of texts. We split this function before because we have lots of additional actions, and our model is closely tied to it. @acheshkov, can we just write another function that does what @KatGarmash needs? It will duplicate functionality, but it will have the additional features that @KatGarmash needs.
@lyriccoder yes, you can create a new function with the desired interface.
@lyriccoder which "certain type of dictionary"?
I have looked through the available methods. Actually, now that I have looked at the main.py code, I guess I can use it.
@acheshkov @lyriccoder updated the specification of the desired functions (see the first comment). Is it better?
Otherwise, you can save the calculated dataset to a variable and fit as many times as you need with different features. I will just create two different functions for you, @KatGarmash.
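The "extract once, fit many times" workflow suggested here could look like the following. The feature names and the `select` helper are hypothetical; the cached `dataset` stands in for the result of one expensive extraction pass.

```python
from typing import Dict, List, Tuple

# Pretend this came from a single (expensive) feature-extraction pass.
# Column names are illustrative metric names, not aibolit's real ones.
dataset: List[Dict[str, float]] = [
    {"ncss": 10.0, "nesting_depth": 2.0, "method_count": 3.0, "label": 1.0},
    {"ncss": 40.0, "nesting_depth": 5.0, "method_count": 9.0, "label": 0.0},
]

def select(rows: List[Dict[str, float]],
           feature_names: List[str]) -> Tuple[List[List[float]], List[float]]:
    # Project the cached dataset onto a chosen feature subset;
    # no re-extraction of the source texts is needed.
    X = [[row[f] for f in feature_names] for row in rows]
    y = [row["label"] for row in rows]
    return X, y

# Fit as many times as needed with different feature subsets:
X1, y1 = select(dataset, ["ncss"])
X2, y2 = select(dataset, ["ncss", "nesting_depth"])
```

Each `select` call feeds a separate `fit`, so trying feature combinations costs only a column projection, not another extraction run.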
yes, it is |
Here we want to collect scenarios of using the aibolit package and discuss the API for end users.
Leave your comments, diagrams and text here.