
TypeError: 'float' object is not iterable #43

Open
mehdidc opened this issue Oct 12, 2017 · 9 comments
@mehdidc commented Oct 12, 2017

@jorisvandenbossche your submission "keras_ssd7_basic" trained successfully but broke during scoring:

test map 0.0523166823678
test prec(0) nan
test prec(0.5) nan
test prec(0.9) nan
test rec(0) 0.0
test rec(0.5) 0.0
test rec(0.9) 0.0
test madc nan
test madr nan
Traceback (most recent call last):
  File "aws_one.py", line 371, in <module>
    train_one(**config)
  File "aws_one.py", line 227, in train_one
    score_submission(submission)
  File "/mnt/ramp/ramp-board/databoard/db_tools.py", line 1075, in score_submission
    submission.compute_valid_score_cv_bag()
  File "/mnt/ramp/ramp-board/databoard/model.py", line 1260, in compute_valid_score_cv_bag
    ground_truths_train, test_is_list)
  File "/mnt/ramp/ramp-board/databoard/model.py", line 940, in _get_score_cv_bags
    ground_truths, combined_predictions, valid_indexes))
  File "/mnt/ramp/ramp-workflow/rampwf/score_types/base.py", line 21, in score_function
    return self.__call__(y_true, y_pred)
  File "/mnt/ramp/ramp-workflow/rampwf/score_types/detection.py", line 25, in __call__
    for single_detection in y_pred]
TypeError: 'float' object is not iterable

Any idea?

@jorisvandenbossche (Contributor)

Hmm, not directly sure; I didn't see this locally.
But the fact that there are many NaNs in the other scores is also a bit suspicious. Is this with the full data?

@jorisvandenbossche (Contributor)

@mehdidc do you have some more information here? When exactly did this error happen? Was it on the public data or on the private backend data?

@kegl (Contributor) commented Oct 17, 2017

I'm using NaNs to say that there is no prediction for that particular instance. E.g. when we do CV bagging, some points are never in the test set. It must of course be different from an empty list. This should be handled in the score_function, using the optional valid_indexes parameter, sent by the caller in case some of the points are invalid. See the way it is handled in

https://github.com/paris-saclay-cds/ramp-workflow/blob/detection_error/rampwf/score_types/base.py

We just need to add it to DetectionBaseScoreType.__call__.
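For reference, a minimal sketch of the filtering pattern kegl describes; score_function and valid_indexes appear in the tracebacks above, but this helper and its signature are hypothetical simplifications of rampwf/score_types/base.py:

```python
import numpy as np

def score_function(metric, y_true, y_pred, valid_indexes=None):
    """Apply `metric` only to the points that received a prediction.

    `y_true` and `y_pred` are 1-D object arrays; entries that were never
    in a CV test fold hold np.nan instead of a list of detections.
    """
    if valid_indexes is not None:
        # Mask out the NaN placeholders so `metric` never sees a bare float.
        y_true = y_true[valid_indexes]
        y_pred = y_pred[valid_indexes]
    return metric(y_true, y_pred)
```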

@jorisvandenbossche (Contributor)

@kegl sorry, I don't understand. This checking of valid_indexes is already in master, and DetectionBaseScoreType uses the score_function (which handles the valid_indexes) from BaseScoreType. So I don't fully see what needs to be changed in DetectionBaseScoreType.

@jorisvandenbossche (Contributor)

Is there anything I can do to help debug this?

Could the submission maybe be run again with the latest ramp-workflow to see if it still happens?

@aboucaud (Contributor) commented Nov 3, 2017

@mehdidc can you still reproduce the error?

@mehdidc (Author) commented Nov 3, 2017

@aboucaud @jorisvandenbossche Yes, still the same error with the new ramp-workflow. Again, it trains successfully but breaks during scoring:

DEBUG: lzma module is not available
DEBUG: Registered VCS backend: git
DEBUG: Registered VCS backend: hg
DEBUG: Registered VCS backend: svn
DEBUG: Registered VCS backend: bzr
Traceback (most recent call last):
  File "aws_one.py", line 375, in <module>
    train_one(**config)
  File "aws_one.py", line 90, in train_one
    score_submission(new_submission)
  File "/mnt/ramp/ramp-board/databoard/db_tools.py", line 1086, in score_submission
    submission.compute_valid_score_cv_bag()
  File "/mnt/ramp/ramp-board/databoard/model.py", line 1262, in compute_valid_score_cv_bag
    ground_truths_train, test_is_list)
  File "/mnt/ramp/ramp-board/databoard/model.py", line 942, in _get_score_cv_bags
    ground_truths, combined_predictions, valid_indexes))
  File "/mnt/ramp/ramp-workflow/rampwf/score_types/base.py", line 21, in score_function
    return self.__call__(y_true, y_pred)
  File "/mnt/ramp/ramp-workflow/rampwf/score_types/detection/base.py", line 20, in __call__
    y_pred_above_confidence = _filter_y_pred(y_pred, conf_threshold)
  File "/mnt/ramp/ramp-workflow/rampwf/score_types/detection/util.py", line 188, in _filter_y_pred
    for y_pred_patch in y_pred]
TypeError: 'float' object is not iterable

@aboucaud (Contributor) commented Nov 3, 2017

According to your log, y_pred is a float. Weird, no?
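A hypothetical minimal reproduction of that failure mode, assuming the NaN-placeholder convention kegl described above (the tuple layout of a detection is illustrative only):

```python
import numpy as np

# Hypothetical repro: a CV-bagged object array in which instances that were
# never in a test fold hold a NaN float instead of a list of detections.
y_pred = np.empty(3, dtype=object)
y_pred[0] = [(0.9, 10.0, 10.0, 5.0)]  # (confidence, x, y, radius) -- illustrative
y_pred[1] = np.nan                    # placeholder: no prediction for this point
y_pred[2] = [(0.4, 3.0, 7.0, 2.0)]

conf_threshold = 0.5
# Mirrors the comprehension in _filter_y_pred: iterating the NaN patch
# raises TypeError: 'float' object is not iterable. The same error occurs
# if y_pred as a whole is a bare NaN rather than an array.
filtered = [[det for det in patch if det[0] > conf_threshold]
            for patch in y_pred]
```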

@kegl (Contributor) commented Nov 3, 2017

There was a problem with the submission that failed even on the starting kit once cv bagging was introduced, namely that predict should return an np.array of objects, not a multi-d np.array. I fixed it in this PR on mars_craters: #53. Yesterday I retrained this new submission using ramp_test_submission, both on the starting kit data and the backend data. The scores sucked :) but it went through, including cv_bagging. That may not have solved this crash, but we should try.

Now, these errors should be caught in the __init__ of the detection prediction type, to enforce early that the predict function returns the right format (an np.array of objects).
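A hypothetical sketch of such an early check; the class name and error message are illustrative, not the actual ramp-workflow code:

```python
import numpy as np

class DetectionPredictions(object):
    """Illustrative prediction type that enforces the expected format early."""

    def __init__(self, y_pred):
        y_pred = np.asarray(y_pred, dtype=object)
        # Fail fast if predict returned a multi-d array instead of a 1-D
        # np.array of objects (one detection list, or NaN placeholder,
        # per instance).
        if y_pred.ndim != 1:
            raise ValueError(
                'predict must return a 1-D np.array of objects, '
                'got an array with ndim={}'.format(y_pred.ndim))
        self.y_pred = y_pred
```

Failing in the constructor surfaces the format problem at training time, instead of letting a malformed y_pred propagate into the scorers and crash later during cv bagging.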
