InvalidArgumentError: Name: , Context feature 'video_id' is required but could not be found. #23
Comments
It's because the devs changed their code for GDPR compliance, so there is a version mismatch between the code and your data.

Hi, what can be done to resolve this issue?
Still does not work for me. I copied the readers.py code from the starter code into readers.py in WILLOW's directory, but it still fails.
@punitagrawal32 You just need the dataset and starter code from the previous year, i.e. an older version. But even then, if you haven't downloaded the complete frame-level and video-level features and put them in the folders the code expects, you will run into another problem that has hit lots of people, including me.
@estathop I have not been able to download the version 1 data using the download.py script (with curl). I am perfectly able to download version 2, though (changing partition=2 in the code). Do you know why this might be happening? I have manually downloaded the older-version data and am trying to run the NetVLAD model on the 'audio' features only. I have been advised to modify the code in frame_level_models.py so that it accepts only audio features, not both audio and video. Could you suggest the changes?

@estathop Also, I realize I need to run the models on the newer data. Is there a way I can run @antoine77340's code on the newer data (say, the GRU model) such that I do not get the error?
You need to modify his code heavily; personally, I gave up trying to make it work.
@punitagrawal32 About the original "Context feature 'video_id' is required but could not be found" problem: I solved it by changing "video_id" in readers.py to "id". As for changing the input data channels, you have to modify frame_level_models.py. In my example, I only want video input, not audio.
@wenching33 Thank you for the reply. I need the audio features only, so I commented out lines 664 and 665 and replaced vlad_video with vlad_audio in line 671. This generates the following error:
Looking it up, it seems to have worked for you because the video inputs occupy indices 0:1024, whereas the audio input ranges over 1024: (to the end). But since I do not have video features, this throws the error. I tried changing the code to:
and to:
but now it throws the error:
Do you know what might be causing the issue? Thanks for your help. Regards,
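For reference, here is a minimal sketch of why that slicing fails, assuming the standard YouTube-8M layout where each frame's feature vector is the 1024-d video features concatenated with the 128-d audio features (plain Python, illustrative only):

```python
# Hypothetical stand-in for one frame's concatenated feature vector:
# indices 0:1024 are video features, 1024:1152 are audio features.
frame = [0.0] * 1024 + [1.0] * 128

video_part = frame[0:1024]     # video slice
audio_part = frame[1024:1152]  # audio slice

assert len(video_part) == 1024
assert len(audio_part) == 128

# If only 128-d audio features are loaded, slicing from index 1024
# yields an empty list -- hence the shape/type errors downstream.
audio_only = [1.0] * 128
assert audio_only[1024:1152] == []
```

So when feeding audio-only records, the slice boundaries in the model code must become 0:128 rather than 1024:1152.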
@wenching33 Awaiting your response... Thanks in advance.
@estathop Did you get it to work? Thanks!
@chenboheng If you just want a state-of-the-art classifier on YouTube-8M, check this year's winner, which has everything working with version 2 of the dataset. None of last year's top-5 entries worked for me, so when I needed just a classifier I used Google's source code to train a Mixture of Experts on video-level features.
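For context, a Mixture of Experts over video-level features is conceptually simple: for each class, several logistic "experts" each emit a probability and a softmax gate weighs them. A minimal sketch in plain Python, purely illustrative and not Google's actual starter code:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def moe_predict(expert_logits, gate_logits):
    """Gate-weighted average of per-expert probabilities for one class."""
    gates = softmax(gate_logits)
    probs = [sigmoid(l) for l in expert_logits]
    return sum(g * p for g, p in zip(gates, probs))

# With neutral logits, every expert says 0.5 and the gates are uniform,
# so the mixture also predicts 0.5.
print(moe_predict([0.0, 0.0], [0.0, 0.0]))
```

In the real model the expert and gate logits are linear functions of the video-level feature vector, learned jointly.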
@punitagrawal32 Sorry, I didn't check my messages for a long time. Your error message looks like a type error. I actually modified WILLOW's code heavily to make it work for my own purposes. There will be lots of problems you have to solve along the way; I'm afraid I can't help you with each one you will meet.
You can change 'video_id' to 'id' in readers.py. It works because, in extract_tfrecords_main.py of youtube-8m, the feature key is defined as flags.DEFINE_string('video_file_feature_key', 'id', ...
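Concretely, the change looks roughly like the following in readers.py (the surrounding code may differ slightly between versions, and any later references to contexts["video_id"] need the same rename):

```diff
--- a/readers.py
+++ b/readers.py
     contexts, features = tf.parse_single_sequence_example(
         serialized_example,
-        context_features={"video_id": tf.FixedLenFeature([], tf.string),
+        context_features={"id": tf.FixedLenFeature([], tf.string),
                           "labels": tf.VarLenFeature(tf.int64)},
```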
If you use Python 3 instead of Python 2, then you can try the two changes below:
```diff
--- a/frame_level_models.py
+++ b/frame_level_models.py
@@ -644,13 +644,13 @@ class NetVLADModelLF(models.BaseModel):
     if lightvlad:
       video_NetVLAD = LightVLAD(1024,max_frames,cluster_size, add_batch_norm, is_training)
-      audio_NetVLAD = LightVLAD(128,max_frames,cluster_size/2, add_batch_norm, is_training)
+      audio_NetVLAD = LightVLAD(128,max_frames,cluster_size//2, add_batch_norm, is_training)
     elif vlagd:
       video_NetVLAD = NetVLAGD(1024,max_frames,cluster_size, add_batch_norm, is_training)
-      audio_NetVLAD = NetVLAGD(128,max_frames,cluster_size/2, add_batch_norm, is_training)
+      audio_NetVLAD = NetVLAGD(128,max_frames,cluster_size//2, add_batch_norm, is_training)
     else:
       video_NetVLAD = NetVLAD(1024,max_frames,cluster_size, add_batch_norm, is_training)
-      audio_NetVLAD = NetVLAD(128,max_frames,cluster_size/2, add_batch_norm, is_training)
+      audio_NetVLAD = NetVLAD(128,max_frames,cluster_size//2, add_batch_norm, is_training)

     if add_batch_norm:  # and not lightvlad:
```

I simply figured this issue out by testing as shown above. Hope this helps you too.
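The reason the `//` change matters: in Python 2, `/` between two ints floor-divides, but in Python 3 it returns a float, and a float cluster size then breaks integer shape arithmetic downstream. A quick check:

```python
cluster_size = 64

# Python 3: true division always returns a float...
assert cluster_size / 2 == 32.0
assert isinstance(cluster_size / 2, float)

# ...while floor division keeps an int, matching Python 2's old behavior.
assert cluster_size // 2 == 32
assert isinstance(cluster_size // 2, int)
```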
I downloaded the 1/100 frame-level features and ran the train.py code. However, I get the following error:
```
INFO:tensorflow:Error reported to Coordinator: <class 'tensorflow.python.framework.errors_impl.InvalidArgumentError'>, Name: , Context feature 'video_id' is required but could not be found.
	 [[Node: train_input/ParseSingleSequenceExample_2/ParseSingleSequenceExample = ParseSingleSequenceExample[Ncontext_dense=1, Ncontext_sparse=1, Nfeature_list_dense=1, Nfeature_list_sparse=0, Tcontext_dense=[DT_STRING], context_dense_shapes=[[]], context_sparse_types=[DT_INT64], feature_list_dense_shapes=[[]], feature_list_dense_types=[DT_STRING], feature_list_sparse_types=[], _device="/job:localhost/replica:0/task:0/cpu:0"](train_input/ReaderReadV2_2:1, train_input/ParseSingleSequenceExample_2/ParseSingleSequenceExample/feature_list_dense_missing_assumed_empty, train_input/ParseSingleSequenceExample_2/ParseSingleSequenceExample/context_sparse_keys_0, train_input/ParseSingleSequenceExample_2/ParseSingleSequenceExample/context_dense_keys_0, train_input/ParseSingleSequenceExample_2/ParseSingleSequenceExample/feature_list_dense_keys_0, train_input/ParseSingleSequenceExample_2/Const, train_input/ParseSingleSequenceExample_2/ParseSingleSequenceExample/debug_name)]]
	 [[Node: train_input/shuffle_batch_join/cond_2/random_shuffle_queue_EnqueueMany/_98 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/gpu:0", send_device="/job:localhost/replica:0/task:0/cpu:0", send_device_incarnation=1, tensor_name="edge_90_train_input/shuffle_batch_join/cond_2/random_shuffle_queue_EnqueueMany", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/gpu:0"]]]

Caused by op u'train_input/ParseSingleSequenceExample_2/ParseSingleSequenceExample', defined at:
  File "train.py", line 638, in <module>
    app.run()
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 44, in run
    _sys.exit(main(_sys.argv[:1] + flags_passthrough))
  File "train.py", line 626, in main
    FLAGS.export_model_steps).run(start_new_model=FLAGS.start_new_model)
  File "train.py", line 353, in run
    saver = self.build_model(self.model, self.reader)
  File "train.py", line 524, in build_model
    num_epochs=FLAGS.num_epochs)
  File "train.py", line 236, in build_graph
    num_epochs=num_epochs))
  File "train.py", line 164, in get_input_data_tensors
    reader.prepare_reader(filename_queue) for _ in range(num_readers)
  File "/media/ResearchProject/deeplearning/code/Youtube-8M-WILLOW/readers.py", line 212, in prepare_reader
    max_quantized_value, min_quantized_value)
  File "/media/ResearchProject/deeplearning/code/Youtube-8M-WILLOW/readers.py", line 224, in prepare_serialized_examples
    for feature_name in self.feature_names
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/parsing_ops.py", line 780, in parse_single_sequence_example
    feature_list_dense_defaults, example_name, name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/parsing_ops.py", line 977, in _parse_single_sequence_example_raw
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/ops/gen_parsing_ops.py", line 287, in _parse_single_sequence_example
    name=name)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/op_def_library.py", line 763, in apply_op
    op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 2395, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/framework/ops.py", line 1264, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Name: , Context feature 'video_id' is required but could not be found.
```
How can I solve this problem?