Classification using LDA #26
amrit... is the paper all done? like, do that before moving on. t
I am on it, prof.
@timm Here is the result of using LDA to automatically label the documents and then using a learner. From the paper, we can't reproduce the results, due to:

Experiment:

Conclusion:

Results:
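For readers following the thread, the pipeline under discussion (LDA topic features feeding a learner) can be sketched roughly as below. The documents, labels, and parameter values are toy placeholders, not the experiment's actual setup.

```python
# Hypothetical sketch of the LDA -> learner pipeline discussed above
# (illustrative data and settings, not the project's real code).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.svm import LinearSVC
from sklearn.pipeline import Pipeline

docs = ["bug in parser", "crash on startup", "feature request: dark mode",
        "parser fails on unicode", "please add export feature", "startup hang"]
labels = [0, 0, 1, 0, 1, 1]  # 0 = defect report, 1 = feature request (toy labels)

pipe = Pipeline([
    ("counts", CountVectorizer()),                                       # raw term counts
    ("lda", LatentDirichletAllocation(n_components=2, random_state=0)),  # topic weights as features
    ("svm", LinearSVC()),                                                # learner on topic weights
])
pipe.fit(docs, labels)
preds = pipe.predict(docs)
```

Tuning step (1) in timm's later comment would correspond to the `lda` stage here; step (2) to the `svm` stage.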
am now lost in the details. please bust fscore into precision and recall. this looks like no win with tuning... right? please write this up as a 2-4 page pdf doc. define all your terms. dont worry about the start-up sections (motivation, background), but what is your justification for "baseline"? what papers use "baseline"? t
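For reference, the decomposition being asked for: F1 is the harmonic mean of precision and recall, so reporting the two parts shows where a score comes from. A toy computation (made-up confusion counts, not the experiment's numbers):

```python
# Splitting an F-score into precision and recall (toy counts).
tp, fp, fn = 40, 10, 20                 # true positives, false positives, false negatives

precision = tp / (tp + fp)              # of everything flagged, how much was right
recall = tp / (tp + fn)                 # of everything that should be flagged, how much was found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean; equals 2*tp/(2*tp+fp+fn)

print(precision, recall, f1)
```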
Yes, no win with tuning, but our result numbers shown to LN might change; the conclusion might remain the same or not. My baseline results are from our BIGDSE paper, where we just used the hashing trick with SVM as the baseline. I will compile all these terms and my thoughts into a white paper soon.
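The "hashing trick with SVM" baseline mentioned above could look roughly like this in scikit-learn; the data and feature count are illustrative, and the BIGDSE paper's actual configuration is not reproduced here.

```python
# Sketch of a hashing-trick + SVM baseline (illustrative, not the paper's setup).
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.svm import LinearSVC

docs = ["null pointer crash", "add dark mode", "crash on save",
        "export to csv please", "segfault in parser", "new theme option"]
labels = [0, 1, 0, 1, 0, 1]

# Hashing trick: map terms to a fixed-size feature space, no vocabulary kept.
X = HashingVectorizer(n_features=2**10).fit_transform(docs)
clf = LinearSVC().fit(X, labels)
score = clf.score(X, labels)            # training accuracy on the toy data
```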
fyi - you may need to tune (1) the feature extraction (of the topics) AND (2) the learner to get improved performance. right now you're just tuning (1), right? without doing (2), what you could do is show conclusion instability (a venn diagram of documents classified XYZ via untuned feature extraction, repeated 10 times on 10 different data orderings). with (2) you might get the kinds of improvements wei reported
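The conclusion-instability check described above can be sketched as follows: rerun the untuned pipeline on shuffled data orderings and compare which documents get the same predicted class every time. The classifier below is a made-up stand-in whose output depends slightly on the ordering seed.

```python
# Sketch of conclusion instability across data orderings (toy stand-in model).
import random

def noisy_classifier(doc_id, seed):
    # stand-in for "untuned feature extraction + learner": the prediction
    # wobbles with the data ordering (seed), as timm suggests it might
    rng = random.Random(seed * 1000 + doc_id)
    return doc_id % 2 if rng.random() > 0.2 else 1 - doc_id % 2

doc_ids = range(20)
# one run per data ordering: which docs get labeled class 1
runs = [{d for d in doc_ids if noisy_classifier(d, s) == 1} for s in range(10)]

stable = set.intersection(*runs)   # labeled 1 in every ordering
ever = set.union(*runs)            # labeled 1 in at least one ordering
# the gap between `stable` and `ever` is the venn-diagram instability
```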
Experiment Setup
We have the baseline results for SVM without SMOTE and for SMOTE + SVM.
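For context on the SMOTE + SVM baseline: SMOTE oversamples the minority class by interpolating between existing minority points. A hand-rolled sketch of that interpolation step is below (not the imbalanced-learn implementation, which also restricts neighbors to the k nearest).

```python
# Minimal SMOTE-style oversampling sketch (hand-rolled, illustrative only).
import numpy as np

rng = np.random.default_rng(0)
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # toy minority class

def smote_like(X, n_new, rng):
    # each synthetic point: sample + u * (other_sample - sample), u in [0, 1)
    out = []
    for _ in range(n_new):
        i, j = rng.choice(len(X), size=2, replace=False)
        u = rng.random()
        out.append(X[i] + u * (X[j] - X[i]))
    return np.array(out)

synth = smote_like(minority, 5, rng)   # 5 synthetic minority samples
```

The synthetic points then join the training set before fitting the SVM, rebalancing the classes.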