New Tip 1 #241
Conversation
I think this is going in a good direction. What are you thinking re: how to decide? For me it comes down to:
It might come down to more things, but this is an off-the-cuff sketch of how I'd think about it.
Converting this from draft status to be ready for review.
BTW, this closes #228
New lines 3 & 4 still needs some work.
What do you have in mind? I'm trying to get at the misleading conception that since NNs are universal function approximators, they should be used universally.
I think the sentiment works; it just needs some tweaking of the language. Maybe something like "In recent years, the number of publications implementing DL in biology has risen tremendously." I still don't love the "it may appear that it is capable of anything" bit. Any suggestions?
What about something like "a panacea for modeling problems"?
Yes, I think that works well.
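To make the "not a panacea" point concrete, here is an illustrative sketch of my own (not from the manuscript; assumes scikit-learn, and the dataset is synthetic): benchmark a simple baseline against a small neural network before committing to DL.

```python
# Sketch: compare a simple baseline with a small neural network via cross-validation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for a real biological dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "small MLP": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                               random_state=0),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```

If the baseline matches the network, the simpler model is usually the better choice.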
Looks good in general! Mostly small comments here and there, a few of which may require a couple of extra sentences.
@SiminaB thanks for the feedback. Going to bed now but will work on it tomorrow.
content/03.when-to-use-dl.md (outdated)
Projects such as Keras [@https://keras.io], a high-level interface for TensorFlow, make it relatively straightforward to design and test custom DL architectures.
In the future, projects such as these are likely to bring DL experimentation within reach of even more researchers.
There are some types of problems in which using DL is strongly indicated over ML.
I think this part is missing the "why" needed to synthesize the examples.
Basically, what I'm getting at is that you don't need to start out with the most high-powered tools available now that there are more beginner-friendly ones that are stable. If you need an NN architecture that doesn't exist out of the box in one of the higher-level tools but isn't so exotic that you need TensorFlow primitives, a tool like Keras might be a good middle ground.
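As a concrete sketch of that middle ground (my own illustration, not from the manuscript; assumes TensorFlow 2.x, whose bundled `tf.keras` is the Keras interface mentioned above), a modest custom architecture takes only a few lines:

```python
# Sketch: a small custom architecture in Keras (tf.keras).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(100,)),              # 100 input features
    tf.keras.layers.Dense(64, activation="relu"),     # one hidden layer
    tf.keras.layers.Dropout(0.5),                     # regularization
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary output
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```

Anything much more exotic than stacking layers like this is what starts to justify dropping down to TensorFlow primitives.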
Of the presented options, I think I prefer "Decide whether deep learning is appropriate for your problem".
I also like "Decide whether deep learning is appropriate for your problem". It suggests that you have a goal (problem) and want to consider appropriate tools, which may or may not be deep learning.
I like the new tip. I've just made a few minor inline changes based on people's comments.
[ci skip] This build is based on 43ceab7. This commit was created by the following CI build and job: https://github.com/Benjamin-Lee/deep-rules/commit/43ceab79f9126d053cf68269988ac648ecdf85f3/checks https://github.com/Benjamin-Lee/deep-rules/runs/304996701
There's a quality exposition from Daniela Witten on how regularization/penalization and stochastic gradient descent can be viewed as two sides of the same coin: https://threadreaderapp.com/thread/1292293102103748609.html

The same phenomenon is observed with gradient boosting converging to an L1-penalized regression fit, etc. As Hastie has said a nearly infinite number of times: if the true model is simple, penalization will find it; if the true model is complex, only more data will find it. (With the caveat that truth is subjective and not necessarily important :-).)

--t
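A toy illustration of that "two sides of the same coin" point (my own sketch, not from the thread; assumes scikit-learn ≥ 1.2 for `penalty=None`): fit ridge regression with an explicit L2 penalty next to plain SGD stopped early, whose implicit regularization also shrinks the fit.

```python
# Sketch: explicit penalization vs. early-stopped SGD (implicit regularization).
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge, SGDRegressor

# Synthetic regression problem standing in for real data.
X, y = make_regression(n_samples=200, n_features=10, noise=5.0, random_state=0)

# Explicit L2 penalty.
ridge = Ridge(alpha=1.0).fit(X, y)

# No explicit penalty; regularization comes from stopping SGD early.
sgd = SGDRegressor(penalty=None, max_iter=20, tol=None,
                   learning_rate="constant", eta0=1e-3,
                   random_state=0).fit(X, y)

print("ridge coefficients :", np.round(ridge.coef_, 2))
print("early-stopped SGD  :", np.round(sgd.coef_, 2))
```

How close the two coefficient vectors land depends on the learning rate, the number of epochs, and the penalty strength; the point is only that both knobs shrink the fit.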
…in-tip-2 Remove a paragraph that was moved to tip 1 in #241
Did you add yourself as a contributor if this is your first contribution?
Any more details?
Per the discussion in #228, this PR moves the content in the current tip 1 around (only removing a few lines) and creates space for a new first tip focusing on whether you should use DL in the first place.