
TAO YoloV4 .etlt model with Triton server #12

Open
rsicak opened this issue Mar 15, 2022 · 9 comments

Comments

@rsicak

rsicak commented Mar 15, 2022

Hi, is there a guide on how to deploy a YOLOv4 TAO model on Triton Inference Server? I have trained a YOLOv4 model on custom data with the TAO Toolkit and am looking for a guide on serving that model with Triton Inference Server. Thank you.

@imSrbh

imSrbh commented May 6, 2022

@rsicak Any update on how you got this working?

@imSrbh

imSrbh commented May 6, 2022

@sujitbiswas
@morganh-nv

I have a YOLOv4 .etlt model, generated trt.engine via nvinfer (deepstream-app), and built libnvds_infercustomparser_tao.so.

I want to use the same model with Triton Inference Server.
model-repository:
(screenshot of the model repository layout, not captured)

I then wrote a deepstream-app for this with the nvinferserver plugin.

With one model and this configuration, I am able to run inference.
But I am unable to get any meta info or object count (obj_count) for another model with different classes.

Thanks...

Here is my config_infer.txt

infer_config {
  unique_id: 1
  gpu_ids: [0]
  max_batch_size: 16
  
  backend {
    inputs: [ {
      name: "Input"
    }]
    outputs: [
      {name: "BatchedNMS"},
      {name: "BatchedNMS_1"},
      {name: "BatchedNMS_2"},
      {name: "BatchedNMS_3"}
    ]
    triton {
      model_name: "Helmet"
      version: 1
      grpc {
        url: "172.17.0.2:8001"
      }
    }
  }

  preprocess {
    network_format: MEDIA_FORMAT_NONE
    tensor_order: TENSOR_ORDER_NONE
    tensor_name: "Input"
    maintain_aspect_ratio: 0
    frame_scaling_hw: FRAME_SCALING_HW_DEFAULT
    frame_scaling_filter: 1
    normalize {
      scale_factor: 1.0
      channel_offsets: [0, 0, 0]
    }
  }

  postprocess {
    labelfile_path: "../../model_repository/Helmet_model/labels.txt"
    detection {
      num_detected_classes: 2
      custom_parse_bbox_func: "NvDsInferParseCustomBatchedNMSTLT"
      per_class_params {
        key: 0
        value { pre_threshold: 0.4 }
      }
      nms {
        confidence_threshold: 0.2
        topk: 20
        iou_threshold: 0.5
      }
    }
  }

  custom_lib {
    path: "../../.../customLib/libnvds_infercustomparser_tao.so"
  }
}
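For the nvinferserver config above to work, the Triton model it names must expose matching tensor names. Here is a minimal config.pbtxt sketch for the "Helmet" model, assuming a TensorRT engine (tensorrt_plan) with a 3x384x1248 input and keepTopK of 200; both the input dims and the 200 are assumptions, so substitute the values your engine was actually built with:

```
name: "Helmet"
platform: "tensorrt_plan"
max_batch_size: 16
input [
  {
    name: "Input"
    data_type: TYPE_FP32
    format: FORMAT_NCHW
    dims: [ 3, 384, 1248 ]
  }
]
output [
  {
    name: "BatchedNMS"
    data_type: TYPE_INT32
    dims: [ 1 ]
  },
  {
    name: "BatchedNMS_1"
    data_type: TYPE_FP32
    dims: [ 200, 4 ]
  },
  {
    name: "BatchedNMS_2"
    data_type: TYPE_FP32
    dims: [ 200 ]
  },
  {
    name: "BatchedNMS_3"
    data_type: TYPE_FP32
    dims: [ 200 ]
  }
]
```

A mismatch between these names/dims and what the engine actually produces is a common reason for getting no detections back from one model while an otherwise identical one works.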

@Wesley-E

Hi @imSrbh, I believe you need to write a client for your model to get the outputs you are looking for, and I may be able to assist you with this if necessary. Let me know if you still need help.

@monjha

monjha commented Jun 13, 2022

Hi @imSrbh

Could you share your analytics config file where you define the object count for a particular class ID? You probably need to update the class-id if you see counts with one model and none with the other.

@morganh-nv
Collaborator

You can first run the default steps in this repo.
Then, to serve your own .etlt model in Triton, replace the original .etlt model with your own and set the correct input shapes in the config file (https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/model_repository/yolov3_tao/config.pbtxt#L9).
The Triton server will then generate a model.plan from your .etlt model.
For example, for YOLOv3: https://github.com/NVIDIA-AI-IOT/tao-toolkit-triton-apps/blob/main/scripts/download_and_convert.sh#L51
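That conversion step boils down to a tao-converter call. A hedged sketch of what it looks like for a custom YOLOv4 .etlt; every value below (key, dims, paths) is a placeholder, so take the exact flags from download_and_convert.sh:

```
# Placeholders throughout; copy the real invocation from
# scripts/download_and_convert.sh in this repo.
#   -k  encryption key the .etlt was exported with
#   -d  input dims (C,H,W) used at training/export time
#   -o  output node name of the NMS-enabled model
#   -t  engine precision
#   -m  max batch size
#   -e  where to write the serialized engine (model.plan)
tao-converter /tao_models/yolov4_model/yolov4_custom.etlt \
    -k <your_encryption_key> \
    -d 3,384,1248 \
    -o BatchedNMS \
    -t fp16 \
    -m 16 \
    -e /model_repository/yolov4_tao/1/model.plan
```

The engine must be rebuilt on the machine (and GPU) that will serve it, since TensorRT engines are not portable across devices.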

@h9945394143

@morganh-nv @monjha Could you help us with the post-processor parser?

@Wesley-E

> @morganh-nv @monjha Could you help us with the post-processor parser?

I could help you with that
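In the meantime, here is a minimal sketch of client-side parsing for the four BatchedNMS output tensors (pure NumPy; the function name and thresholds are illustrative, and the normalized [x1, y1, x2, y2] box layout follows the TensorRT BatchedNMS plugin convention used by TAO detection models):

```python
import numpy as np

def parse_batched_nms(num_dets, boxes, scores, classes,
                      img_w, img_h, conf_thresh=0.4):
    """Parse the four BatchedNMS outputs for a single image.

    num_dets: int32 [1]          -- number of valid detections
    boxes:    float32 [keepTopK, 4] -- normalized [x1, y1, x2, y2]
    scores:   float32 [keepTopK]
    classes:  float32 [keepTopK] -- class indices into labels.txt
    """
    detections = []
    for i in range(int(num_dets[0])):
        if scores[i] < conf_thresh:
            continue  # drop detections below the confidence threshold
        x1, y1, x2, y2 = boxes[i]
        detections.append({
            "class_id": int(classes[i]),
            "score": float(scores[i]),
            # scale normalized corners back to pixel coordinates
            "bbox": (x1 * img_w, y1 * img_h, x2 * img_w, y2 * img_h),
        })
    return detections

# Example with dummy tensors shaped like a Triton response
num_dets = np.array([2], dtype=np.int32)
boxes = np.array([[0.1, 0.1, 0.5, 0.5],
                  [0.2, 0.2, 0.3, 0.3]], dtype=np.float32)
scores = np.array([0.9, 0.3], dtype=np.float32)
classes = np.array([0.0, 1.0], dtype=np.float32)
dets = parse_batched_nms(num_dets, boxes, scores, classes, 1248, 384)
```

If you see zero detections for one model only, printing `num_dets` and the raw `scores` here quickly shows whether the engine itself returns nothing or the threshold/class mapping is filtering everything out.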

@Wesley-E

> @morganh-nv @monjha Could you help us with the post-processor parser?

Do you have a branch that you're currently working on that you could share?
