
About Loss Function #13

Open
ssdutHB opened this issue Feb 4, 2018 · 1 comment

ssdutHB commented Feb 4, 2018

Hi, everyone. I read the code in data_parser.py and found that there is an option for the input labels called "target regression". If we choose this option, the loaded ground truth is a matrix of real numbers between 0 and 1 rather than a binary matrix. I then checked losses.py and found these two lines:

count_neg = tf.reduce_sum(1. - y)
count_pos = tf.reduce_sum(y)

These two lines seem to work well for a binary ground truth, but do they also work for a ground truth consisting of real numbers between 0 and 1? I am looking forward to your answers.
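
For context, here is a minimal sketch of how those two counts are typically used in an HED-style class-balanced sigmoid cross-entropy, assuming the TF 2.x API; the function name and the exact weighting below are a reconstruction of the common formulation, not necessarily the code in this repository's losses.py. With "target regression" labels the counts simply become soft counts (each pixel contributes its fractional label value), so beta is still defined, it just no longer equals the hard positive/negative pixel ratio.

import tensorflow as tf

def class_balanced_sigmoid_cross_entropy(logits, y):
    # y: ground truth in [0, 1]; binary or "target regression" labels.
    y = tf.cast(y, tf.float32)

    # With binary labels these are exact pixel counts; with soft labels
    # every pixel contributes its fractional value instead.
    count_neg = tf.reduce_sum(1. - y)
    count_pos = tf.reduce_sum(y)
    beta = count_neg / (count_neg + count_pos)

    # Weight positive pixels by beta / (1 - beta) so the two classes
    # contribute roughly equally to the loss.
    pos_weight = beta / (1. - beta)
    cost = tf.nn.weighted_cross_entropy_with_logits(
        labels=y, logits=logits, pos_weight=pos_weight)
    return tf.reduce_mean(cost * (1. - beta))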


CangHaiQingYue commented Jul 18, 2018

I had the same question. I used

y_0 = tf.zeros_like(y)
beta = tf.reduce_mean(tf.cast(tf.equal(y, y_0), tf.float32))

to work around it. But during evaluation, the original

count_neg = tf.reduce_sum(1. - y)
count_pos = tf.reduce_sum(y)

works better; I don't know why.
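
A quick way to see why the two choices differ on soft labels (the tensor values below are made up for illustration, assuming TF 2.x eager execution):

import tensorflow as tf

y = tf.constant([[0.0, 0.2], [0.7, 1.0]])  # toy soft labels in [0, 1]

# (a) beta from tf.equal: only pixels that are exactly zero count as negative.
beta_hard = tf.reduce_mean(tf.cast(tf.equal(y, tf.zeros_like(y)), tf.float32))

# (b) beta from the original soft counts in losses.py.
count_neg = tf.reduce_sum(1. - y)
count_pos = tf.reduce_sum(y)
beta_soft = count_neg / (count_neg + count_pos)

print(beta_hard.numpy())  # 0.25  -> one of four pixels is exactly zero
print(beta_soft.numpy())  # 0.525 -> fractional labels shift the balance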
