
The Secrets of Machine Learning: Ten Things You Wish You Had Known Earlier to be More Effective at Data Analysis #206

Open
agitter opened this issue Jan 12, 2020 · 1 comment
Labels
paper Papers we should cite

Comments

@agitter (Collaborator)

agitter commented Jan 12, 2020

https://arxiv.org/abs/1906.01998

Despite the widespread use of machine learning throughout organizations, some key principles are commonly missed. In particular:

1. There are at least four main families of supervised learning methods: logical modeling, linear combination, case-based reasoning, and iterative summarization.
2. For many application domains, almost all machine learning methods perform similarly (with some caveats). Deep learning methods, the leading technique for computer vision problems, do not maintain an edge over other methods for most problems (and there are reasons why).
3. Neural networks are hard to train, and weird stuff often happens when you try to train them.
4. If you don't use an interpretable model, you can make bad mistakes.
5. Explanations can be misleading, and you can't trust them.
6. You can pretty much always find an accurate-yet-interpretable model, even for deep neural networks.
7. Special properties such as decision making or robustness must be built in; they don't happen on their own.
8. Causal inference is different from prediction (correlation is not causation).
9. There is a method to the madness of deep neural architectures, but not always.
10. It is a myth that artificial intelligence can do anything.

Could be relevant to this project. I haven't read it carefully enough to see what I agree or disagree with.

@Benjamin-Lee (Owner)

> If you don't use an interpretable model, you can make bad mistakes... Explanations can be misleading and you can't trust them

I love this

@Benjamin-Lee Benjamin-Lee added the paper Papers we should cite label Oct 4, 2020