What is the "standard" for DeepEnsembles in the literature, and what is randomized (initial network parameter values, training set shuffling, etc.)? #7
beckynevin started this conversation in General
Replies: 3 comments · 5 replies
-
@bnord I'm thinking it would be interesting to test the randomization of the training set separately from the randomization of the weights.
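To make that split concrete, here is a toy sketch (all names here are hypothetical, and a linear model stands in for a network) where the weight-initialization seed and the data-shuffling seed are controlled independently:

```python
import numpy as np

def train_toy_model(X, y, init_seed, shuffle_seed, epochs=50, lr=0.05):
    # Toy stand-in for network training: a linear model fit by per-sample SGD.
    rng_init = np.random.default_rng(init_seed)        # randomness source 1: weight init
    rng_shuffle = np.random.default_rng(shuffle_seed)  # randomness source 2: data order
    w = rng_init.normal(size=X.shape[1])
    for _ in range(epochs):
        for i in rng_shuffle.permutation(len(X)):
            w -= lr * (X[i] @ w - y[i]) * X[i]
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

# Vary only the init seed (shuffling fixed) vs. only the shuffle seed (init fixed):
ws_init = [train_toy_model(X, y, init_seed=s, shuffle_seed=0) for s in range(5)]
ws_shuf = [train_toy_model(X, y, init_seed=0, shuffle_seed=s) for s in range(5)]
print("spread from init only:   ", np.std(ws_init, axis=0))
print("spread from shuffle only:", np.std(ws_shuf, axis=0))
```

Comparing the two spreads coordinate-by-coordinate would show how much ensemble diversity each randomization source contributes on its own, which is the comparison proposed above.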
-
Also, following up on my comment above: I did the math to show that the uncertainty in Lakshminarayanan+2017 is the same as the total uncertainty (aleatoric plus epistemic).
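That identity can be checked numerically. Assuming the uniform Gaussian mixture from Lakshminarayanan+2017, where member m predicts (mu_m, sigma_m^2), the mixture variance sigma_*^2 = (1/M) sum(sigma_m^2 + mu_m^2) - mu_*^2 splits exactly into the mean member variance (aleatoric) plus the spread of the member means (epistemic):

```python
import numpy as np

rng = np.random.default_rng(42)
M = 5
mu = rng.normal(size=M)            # per-member predicted means
sigma2 = rng.uniform(0.1, 1.0, M)  # per-member predicted variances

# Mixture variance as written in Lakshminarayanan+2017:
mu_star = mu.mean()
var_total = (sigma2 + mu**2).mean() - mu_star**2

# Decomposition: aleatoric (average member variance) + epistemic (spread of means)
aleatoric = sigma2.mean()
epistemic = np.mean((mu - mu_star)**2)

print(np.isclose(var_total, aleatoric + epistemic))  # the identity is exact algebra
```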
-
Lol, I don't even know what that is, so I doubt we did it.
-
I'm starting this discussion to brainstorm our DE approach; basically: what have past papers done, is there a standard, and is there something beyond the standard that we might be interested in implementing?
In the original DeepEnsembles paper (Lakshminarayanan+2017), they do the following:
In DeeplyUncertain (aka Caldeira & Nord 2020), the DeepEnsemble has the following properties:
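For the aggregation step at least, Lakshminarayanan+2017 combine the M members as a uniform mixture of Gaussians, each member outputting a predictive mean and variance. A minimal sketch of that step (`ensemble_predict` is a hypothetical helper name, and the member outputs below are made-up numbers):

```python
import numpy as np

def ensemble_predict(member_outputs):
    """Combine M members' (mean, variance) predictions into the
    uniform-mixture mean and variance of Lakshminarayanan+2017."""
    mus = np.array([m for m, _ in member_outputs])    # shape (M, n_points)
    sig2s = np.array([s for _, s in member_outputs])  # shape (M, n_points)
    mu_star = mus.mean(axis=0)
    var_star = (sig2s + mus**2).mean(axis=0) - mu_star**2
    return mu_star, var_star

# Made-up outputs from M=3 members on 2 test points:
outputs = [(np.array([1.0, 2.0]), np.array([0.20, 0.30])),
           (np.array([1.2, 1.8]), np.array([0.25, 0.35])),
           (np.array([0.9, 2.1]), np.array([0.15, 0.40]))]
mu_star, var_star = ensemble_predict(outputs)
print(mu_star, var_star)
```

The disagreement between member means inflates `var_star` above the average member variance, which is exactly the epistemic term discussed in the replies above.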