============ Training LBS model ============
[Train] LRS dataloader
[PointCloud] Found 5046 samples in data/shadow_hand
  0%|          | 0/1 [00:00<?, ?it/s]
[Cache] get_canonical_pose:   0%|          | 0/630 [00:00<?, ?it/s]
0%| | 0/1 [00:24<?, ?it/s]
Traceback (most recent call last):
File "/home/rusrakhimov/projects/drrobot/train.py", line 266, in <module>
three_stage_training(lp.extract(args), None, op.extract(args), pp.extract(args), args.test_every, args.save_every, args.checkpoint_every, args.start_checkpoint, args.debug_from, args.experiment_name)
File "/home/rusrakhimov/projects/drrobot/train.py", line 228, in three_stage_training
train_lrs(gaussians)
File "/home/rusrakhimov/projects/drrobot/lbs/nn.py", line 79, in train_lrs
for i, (pcd, pose) in enumerate(dataloader):
File "/home/rusrakhimov/micromamba/envs/dr3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 630, in __next__
data = self._next_data()
^^^^^^^^^^^^^^^^^
File "/home/rusrakhimov/micromamba/envs/dr3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1345, in _next_data
return self._process_data(data)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rusrakhimov/micromamba/envs/dr3/lib/python3.11/site-packages/torch/utils/data/dataloader.py", line 1371, in _process_data
data.reraise()
File "/home/rusrakhimov/micromamba/envs/dr3/lib/python3.11/site-packages/torch/_utils.py", line 694, in reraise
raise exception
ValueError: Caught ValueError in DataLoader worker process 3.
Original Traceback (most recent call last):
File "/home/rusrakhimov/micromamba/envs/dr3/lib/python3.11/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
data = fetcher.fetch(index)
^^^^^^^^^^^^^^^^^^^^
File "/home/rusrakhimov/micromamba/envs/dr3/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
data = [self.dataset[idx] for idx in possibly_batched_index]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rusrakhimov/micromamba/envs/dr3/lib/python3.11/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
data = [self.dataset[idx] for idx in possibly_batched_index]
~~~~~~~~~~~~^^^^^
File "/home/rusrakhimov/projects/drrobot/utils/lbs_utils.py", line 162, in __getitem__
pose = np.load(os.path.join(sample_path, 'joint_positions.npy'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rusrakhimov/micromamba/envs/dr3/lib/python3.11/site-packages/numpy/lib/npyio.py", line 456, in load
return format.read_array(fid, allow_pickle=allow_pickle,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rusrakhimov/micromamba/envs/dr3/lib/python3.11/site-packages/numpy/lib/format.py", line 839, in read_array
array.shape = shape
^^^^^^^^^^^
ValueError: cannot reshape array of size 0 into shape (24,)
Loss: 0.0001564089034218341, Running Average: 0.0004029735955761829: 63%|█████████████████████████████████████████████████████████████████████████▍ | 399/630 [00:24<00:14, 16.31it/s]
I have the gcc compiler; the problem is of a different nature.
I did a small investigation in the PointCloudDataset class:
self.dirs = []
for d in sorted(glob.glob(os.path.join(self.path, 'sample_*'))):
    try:
        # Probe both files; only keep the sample if both load cleanly.
        o3d.io.read_point_cloud(os.path.join(d, 'pc.ply'))
        np.load(os.path.join(d, 'joint_positions.npy'))
        self.dirs.append(d)
    except Exception as e:
        print(f"{d} is damaged due to {e}")
Exception example:
[Open3D WARNING] Read PLY failed: unable to open file: data/shadow_hand/sample_4979.lock/pc.ply
data/shadow_hand/sample_4979.lock is damaged due to [Errno 20] Not a directory: 'data/shadow_hand/sample_4979.lock/joint_positions.npy'
RPly: Unable to open file
I found out that glob also captures garbage entries such as the .lock files above. After filtering out those entries, training works.
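For reference, a minimal sketch of the filtering that resolves this, without opening each file. It assumes the dataset layout `data/shadow_hand/sample_*/joint_positions.npy` seen in the traceback; `collect_sample_dirs` is a hypothetical helper name, not part of the project:

```python
import glob
import os

def collect_sample_dirs(path):
    """Collect only real, complete sample directories.

    glob.glob('sample_*') also matches leftover '*.lock' entries, which
    are plain files, and samples with truncated (zero-byte) pose files,
    both of which crash the DataLoader workers.
    """
    dirs = []
    for d in sorted(glob.glob(os.path.join(path, 'sample_*'))):
        # Skip lock files and anything else that is not a directory.
        if not os.path.isdir(d):
            continue
        npy = os.path.join(d, 'joint_positions.npy')
        # Skip samples whose pose file is missing or empty; an empty
        # file is exactly what triggers "cannot reshape array of size 0".
        if not os.path.isfile(npy) or os.path.getsize(npy) == 0:
            continue
        dirs.append(d)
    return dirs
```

Checking `os.path.getsize` avoids the cost of actually parsing every `.ply` and `.npy` at startup, though it will not catch files that are non-empty but corrupted mid-stream; the try/except probe above is the more thorough (and slower) variant.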
Hello! Thank you for an amazing project!
Could you please help resolve the issue I hit when running:
python train.py --dataset_path data/shadow_hand --experiment_name shadow_hand_experiment
The output is shown above.