A question about tracking on Vesicles dataset #22

Open · 1595813520 opened this issue Dec 21, 2024 · 4 comments

@1595813520 commented Dec 21, 2024

Dear author:
Thank you for your outstanding contributions to the open-source community and for your exceptional work in the field of cell tracking! I greatly appreciate your research and the efforts you’ve made to advance this field.
I have a small question I’d like to ask: How did you test and evaluate on the Vesicles dataset? The test code you provided requires both img.tif and mask.tif, but the Vesicles dataset only contains raw TIFF image files without corresponding segmentation mask data. I am unsure how to proceed with the tracking in this case. I would be extremely grateful if you could provide some guidance or suggestions. Thank you very much!

@1595813520 (Author)

I would also like to inquire about the comparison experiment with MOTT mentioned in your paper. Specifically, did you train MOTT on the Vesicles dataset, or did you use the pre-trained weights provided by MOTT to perform tracking and evaluation on the Vesicles dataset? I would greatly appreciate your response. Thank you!

@bentaculum (Member) commented Jan 7, 2025

Hi @1595813520,
good to hear that you find Trackastra useful.
To answer your questions:

  1. For simplicity, we converted the ground truth xml files of the Particle Tracking Challenge to the Cell Tracking Challenge format, and were able to use our pipeline as is (a rough sketch of one possible conversion is below).
  2. We compare our particle tracking results to the results reported in the MOTT paper in Table 2.
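
For illustration, such a conversion might look like the following sketch. This is hypothetical code, not the authors' actual script: the XML tags follow the ISBI Particle Tracking Challenge ground-truth format, while the output file names, image shape, and stamped disk radius are assumptions.

# Sketch: convert ISBI Particle Tracking Challenge ground-truth XML into
# CTC-style annotations (man_track.txt + one labeled mask per frame).
# Hypothetical code: output names, image shape, and disk radius are assumptions.
import xml.etree.ElementTree as ET
from pathlib import Path

import numpy as np
import tifffile

def ptc_xml_to_ctc(xml_path, outdir, shape=(512, 512), radius=2):
    outdir = Path(outdir)
    outdir.mkdir(parents=True, exist_ok=True)

    tracks = []      # rows of man_track.txt: label, begin, end, parent
    detections = {}  # frame index -> list of (label, y, x)
    root = ET.parse(xml_path).getroot()
    for label, particle in enumerate(root.iter("particle"), start=1):
        frames = []
        for det in particle.iter("detection"):
            t = int(det.get("t"))
            frames.append(t)
            detections.setdefault(t, []).append(
                (label, float(det.get("y")), float(det.get("x")))
            )
        # particles neither divide nor merge, so the parent label is 0
        tracks.append((label, min(frames), max(frames), 0))

    # man_track.txt: one line per track, "label begin end parent"
    with open(outdir / "man_track.txt", "w") as f:
        for row in tracks:
            f.write(" ".join(map(str, row)) + "\n")

    # stamp a small disk for each detection into a labeled 16-bit mask
    yy, xx = np.mgrid[: shape[0], : shape[1]]
    for t, dets in detections.items():
        mask = np.zeros(shape, dtype=np.uint16)
        for label, y, x in dets:
            mask[(yy - y) ** 2 + (xx - x) ** 2 <= radius**2] = label
        tifffile.imwrite(outdir / f"man_track{t:03d}.tif", mask)

With masks and a track table in this layout, a standard CTC-style pipeline can then be pointed at the converted folder.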

@1595813520 (Author) commented Jan 8, 2025

Hello, thank you for your reply! I understand now. (As an aside, were you able to run the MOTT training code successfully, or is there some issue with it?)

I have another question: After training the model on the DeepCell dataset, I tested and evaluated it, but the prediction results seem abnormal:

[screenshot: evaluation output showing AOGM = 0]

The AOGM score is always 0, which would mean the predicted tracking is identical to the ground truth. However, the two trajectory files test/001_RES/man_track.txt and test/001_GT/TRA/man_track.txt differ. Do you have any ideas on what could be the cause? Here is my prediction code:

import os
from pathlib import Path

import numpy as np
import tifffile
import torch
from trackastra.model import Trackastra
from trackastra.tracking import graph_to_ctc

device = "cuda" if torch.cuda.is_available() else "cpu"

root = Path("/data/trackastra/data/deepcell/test")
idx = "001"

image_dir = root / idx
seg_dir = root / f"{idx}_GT" / "SEG"
output_dir = root / f"{idx}_RES"
os.makedirs(output_dir, exist_ok=True)

# frames and masks are paired by sorted filename order
image_files = sorted(image_dir.glob("*.tif"))    # t000.tif, t001.tif, ...
seg_files = sorted(seg_dir.glob("*.tif"))        # man_seg000.tif, man_seg001.tif, ...

imgs = np.array([tifffile.imread(f) for f in image_files])
masks = np.array([tifffile.imread(f) for f in seg_files])

# load the trained model from its run folder and link the masks greedily
model_path = "/data/trackastra/scripts/runs/2024-12-28_14-01-30_example/"
model = Trackastra.from_folder(model_path, device=device)

track_graph = model.track(imgs, masks, mode="greedy")

# write CTC-format results (relabeled masks + man_track.txt) to 001_RES
ctc_tracks, masks_tracked = graph_to_ctc(
    track_graph,
    masks,
    outdir=output_dir,
)

Here is my evaluation code:

import pprint

from traccuracy import run_metrics
from traccuracy.loaders import load_ctc_data
from traccuracy.matchers import CTCMatcher
from traccuracy.metrics import CTCMetrics, DivisionMetrics

pp = pprint.PrettyPrinter(indent=4)

gt_data = load_ctc_data(
    '/data/trackastra/data/deepcell/test/001_GT/TRA',
    '/data/trackastra/data/deepcell/test/001_GT/TRA/man_track.txt',
)

pred_data = load_ctc_data(
    '/data/trackastra/data/deepcell/test/001_RES',
    '/data/trackastra/data/deepcell/test/001_RES/man_track.txt',
)

ctc_results = run_metrics(
    gt_data=gt_data,
    pred_data=pred_data,
    matcher=CTCMatcher(),
    metrics=[
        CTCMetrics(),
        DivisionMetrics(),
    ],
)

pp.pprint(ctc_results)

Apart from the training process, could you suggest any potential reasons for this issue? I would greatly appreciate any insights you can provide. Looking forward to your reply, and best wishes!

@1595813520 (Author)

I have one more question: could you please explain how your Association Accuracy metric is calculated? Is it obtained by dividing true positives by false positives? Also, is it computed from node scores or edge scores? I apologize for asking so many questions, but a response would be a great help to me. Thank you very much!
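
For context, one common edge-based reading of association accuracy (an assumption on my part, not confirmed by the authors in this thread) is the fraction of ground-truth links between consecutive frames that the tracker reproduces, rather than a true-positive to false-positive ratio:

# Hypothetical sketch of an edge-based association accuracy.
# Edges are ((t, id), (t + 1, id)) pairs in a common, matched labeling.
def association_accuracy(gt_edges: set, pred_edges: set) -> float:
    if not gt_edges:
        return 1.0
    # correctly recovered associations over all ground-truth associations
    return len(gt_edges & pred_edges) / len(gt_edges)

Whether Trackastra's reported numbers use exactly this definition is for the authors to confirm.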
