
Analyst Feedback on Model Result #68

Open · 14 of 15 tasks
MaxenceGui opened this issue Mar 20, 2024 · 1 comment

Labels: epic (Issue that can be achieved by dividing the work into smaller steps.)
Milestone: M4 (2024 May)

MaxenceGui commented Mar 20, 2024

Issue Description

When seed analysts use Nachet, they should be able to give feedback on the model's results. A pipeline of actions needs to be integrated from the Frontend to the database to register this user feedback. The possible feedback types are:

  • No seed: the user indicates that the object detected by the model is not a seed (soil peds, for example).
  • Wrong classification: the user indicates that the seed is wrongly classified, and selects the right seed or guess.
  • Perfect Inference: the model's guess was correct and the box hasn't been changed.
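As a rough illustration of how the Backend might distinguish these three cases, here is a minimal sketch. The payload keys `is_seed` and `corrected_label`, and the enum member names, are invented here for illustration and are not part of the existing API:

```python
from enum import Enum

class FeedbackType(Enum):
    """The three feedback types described above (member names are assumptions)."""
    NO_SEED = "no_seed"
    WRONG_CLASSIFICATION = "wrong_classification"
    PERFECT_INFERENCE = "perfect_inference"

def classify_feedback(box):
    """Derive the feedback type from a hypothetical annotated-box payload:
    `is_seed` is False when the analyst says the detection is not a seed;
    `corrected_label` is set when the analyst picked a different seed."""
    if not box.get("is_seed", True):
        return FeedbackType.NO_SEED
    if box.get("corrected_label"):
        return FeedbackType.WRONG_CLASSIFICATION
    return FeedbackType.PERFECT_INFERENCE
```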

Architecture:

```mermaid
---
title: Nachet Architecture for Inference
---
erDiagram
    seed {
        uuid id
        text name
    }
    inference {
        uuid id PK
        json inference
        uuid picture_id FK
        uuid user_id FK
        timestamp upload_date
    }
    object {
        uuid id PK
        json box_metadata
        uuid inference_id FK
        integer type_id
        boolean verified
        boolean modified
        uuid top_guess FK
        timestamp upload_date
        timestamp updated_at
    }
    seed_object {
        uuid id PK
        uuid seed_id FK
        uuid object_id FK
        float score
    }

    user ||--o{ inference: requests
    inference ||--|| picture: infers
    inference }o--|| pipeline: creates
    inference ||--o{ object: detects
    object ||--o{ seed_object: is
    seed_object }o--|| seed: is
```
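For reference, the `object` and `seed_object` entities above could be mirrored in application code roughly as follows. This is a sketch only: the field defaults are assumptions, and the authoritative definitions live in the PostgreSQL schema:

```python
from dataclasses import dataclass, field
from typing import Optional
from uuid import UUID, uuid4

@dataclass
class InferenceObject:
    """One detected box in an inference (the `object` entity above)."""
    inference_id: UUID                 # FK to inference
    box_metadata: dict                 # box coordinates and related metadata
    id: UUID = field(default_factory=uuid4)
    top_guess: Optional[UUID] = None   # FK to the best seed_object guess
    verified: bool = False             # True once an analyst confirms the box
    modified: bool = False             # True when the analyst changed label/box

@dataclass
class SeedObject:
    """One ranked guess for a box (the `seed_object` entity above)."""
    seed_id: UUID                      # FK to seed
    object_id: UUID                    # FK to object
    score: float                       # model confidence for this guess
    id: UUID = field(default_factory=uuid4)
```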

Work to do

1. Tweak the current classification/inference process in the Backend to use the Datastore.

Sequence of saving the inference:

Note: the picture must already have been uploaded and registered in the DB.

```mermaid
sequenceDiagram;
  actor User
  box grey Ai-Lab services
  participant Frontend
  participant Backend
  participant Datastore
  participant ML
  end
  box grey Storage services
  participant PostgreSQL as PostgreSQL Database
  participant Azure Storage
  end

  User ->> Frontend: Classify picture
  Frontend -) Backend: Classify_picture(user_id,picture_id,pipeline_id)
  Backend -) Datastore: get_picture_url(picture_id)
  Datastore ->> Backend: picture_url
  Backend -) ML: inference_request(pipeline,picture)
  ML ->> Backend: inference.json
  Backend -) Datastore: register_inference_result(inference,user_id,picture_id,pipeline_id)
  Datastore ->> Datastore: trim_inference
  Datastore -) PostgreSQL: new_inference(trimmed_inference)
  Datastore ->> Datastore: Add {inference_id: uuid}
  loop each box
    Datastore ->> Datastore: build_box_metadata(box_metadata)
    Datastore ->> PostgreSQL: new_inference_object(box_metadata)
    Datastore ->> Datastore: Add {box_id: uuid}
    loop each guess
      Datastore -) PostgreSQL: get_seed_id(seed_name)
      Datastore ->> PostgreSQL: new_seed_object(box_id,seed_id)
      Datastore ->> Datastore: Add {object_id: uuid}
    end
    Datastore ->> PostgreSQL: set_inference_object_top_id(object_id, top_seed_object_id)
  end
  Datastore ->> Backend: inference_with_id.json
  Backend ->> Frontend: Display picture with inference results
```
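The `register_inference_result` step in the sequence above could look roughly like the following sketch, using an in-memory dict in place of PostgreSQL. The assumed shape of `inference.json` (a `boxes` list, each box carrying `guesses` with `label` and `score`) is an illustration, not the actual contract:

```python
from uuid import uuid4

def register_inference_result(db, inference, user_id, picture_id, pipeline_id):
    """Sketch of the Datastore side of the sequence: store the trimmed
    inference, then each box, then each guess, then flag the top guess.
    `db` is an in-memory stand-in for the PostgreSQL database."""
    inference_id = str(uuid4())
    db["inference"][inference_id] = {            # new_inference(trimmed_inference)
        "user_id": user_id,
        "picture_id": picture_id,
        "pipeline_id": pipeline_id,
        "verified": False,
    }
    inference["inference_id"] = inference_id     # Add {inference_id: uuid}
    for box in inference.get("boxes", []):       # loop each box
        object_id = str(uuid4())
        db["object"][object_id] = {              # new_inference_object(box_metadata)
            "inference_id": inference_id,
            "box_metadata": box.get("box"),
            "verified": False,
            "modified": False,
            "top_guess": None,
        }
        box["box_id"] = object_id                # Add {box_id: uuid}
        top_seed_object_id, best_score = None, -1.0
        for guess in box.get("guesses", []):     # loop each guess
            seed_object_id = str(uuid4())
            db["seed_object"][seed_object_id] = {  # new_seed_object(box_id,seed_id)
                "object_id": object_id,
                "seed_name": guess["label"],
                "score": guess["score"],
            }
            guess["object_id"] = seed_object_id  # Add {object_id: uuid}
            if guess["score"] > best_score:
                best_score, top_seed_object_id = guess["score"], seed_object_id
        # set_inference_object_top_id(object_id, top_seed_object_id)
        db["object"][object_id]["top_guess"] = top_seed_object_id
    return inference                             # inference_with_id.json
```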

2. Implement a process to enable users to submit their inference feedback/validation

Sequence of saving the inference feedback

```mermaid
sequenceDiagram;
  actor User
  participant Frontend
  participant Backend
  participant Datastore
  participant Database

  User ->> Frontend: Validate inference
  alt Perfect Inference
    Frontend -) Backend: Inference result positive (user_id,inference_id)
    Backend -) Datastore: Inference result positive (user_id,inference_id)
    Datastore ->> Database: Set each object.verified = True & object.modified = False
  else Annotated Inference
    Frontend -) Backend: Inference feedback (inference_feedback.json,user_id,inference_id)
    Backend ->> Datastore: Inference feedback (inference_feedback.json,user_id,inference_id)
    Datastore ->> Database: Get inference_result(inference_id)
    loop each box
      alt box has an id value
        alt inference_feedback.box.verified = False
          Datastore -->> Datastore: Next box & flag_all_box_verified = False
        else
          Datastore -) Database: Set object.verified = True & object.verified_by = user_id
          Datastore -) Datastore: Compare label & box coordinates
          alt label value empty
            Datastore -) Database: Set object.top_inference = Null
            Datastore -) Database: Set object.modified = False
          else label or box coordinates changed & not empty
            Datastore -) Database: Update object.top_inference & object.box_metadata
            Note over Datastore,Database: If the top label is not part of the seed_object guesses,<br>a new seed_object instance must be created.
            Datastore -) Database: Set object.modified = True
          else label and box haven't changed
            Datastore -) Database: Set object.modified = False
          end
        end
      else box has no id value
        Datastore -) Database: Create new object and seed_object
      end
    end
    alt flag_all_box_verified = True
      Datastore -) Database: Set inference.verified = True
    end
  end
```
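A rough Python translation of the branching above, again against an in-memory stand-in for the database. The feedback payload key names (`box_id`, `verified`, `label`, `label_changed`, `box_changed`) are assumptions for illustration:

```python
from uuid import uuid4

def apply_feedback(db, inference_id, user_id, feedback=None):
    """Sketch of the validation flow: feedback=None means 'Perfect Inference';
    otherwise each box of the feedback payload is examined in turn."""
    objects = {oid: o for oid, o in db["object"].items()
               if o["inference_id"] == inference_id}
    if feedback is None:                         # Perfect Inference branch
        for obj in objects.values():
            obj["verified"], obj["modified"] = True, False
        db["inference"][inference_id]["verified"] = True
        return
    all_verified = True                          # flag_all_box_verified
    for box in feedback["boxes"]:                # loop each box
        obj = db["object"].get(box.get("box_id"))
        if obj is None:                          # box has no id: analyst drew it
            db["object"][str(uuid4())] = {       # create new object (guesses omitted)
                "inference_id": inference_id,
                "box_metadata": box.get("box"),
                "verified": True, "modified": True,
                "top_guess": None, "verified_by": user_id,
            }
            continue
        if not box.get("verified", False):       # analyst rejected this box
            all_verified = False
            continue
        obj["verified"], obj["verified_by"] = True, user_id
        if not box.get("label"):                 # label value empty
            obj["top_guess"], obj["modified"] = None, False
        elif box.get("label_changed") or box.get("box_changed"):
            obj["modified"] = True               # label/box corrected
        else:
            obj["modified"] = False              # nothing changed
    if all_verified:
        db["inference"][inference_id]["verified"] = True
```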

Acceptance Criteria

  • Users can give feedback on the model results
  • Feedback is saved into the database and helps data scientists with model training
  • Data from the feedback is recorded and saved in the database
  • An endpoint in the backend is set up and allows the data to transit from the frontend to the database and the model

Tasks

Frontend:

  • Implement the ability for users to give feedback in the frontend #129
    • Implement the ability for users to give feedback in the frontend
      • Related to add ability for user to identify classification error and document it for continuous training nachet-frontend#40
      • Provide positive feedback to accept an inference as good data
      • Provide negative feedback to reject an inference box completely
      • Provide negative feedback with a correction to the inference box position and dimensions
      • Provide negative feedback with a correction to the species label (species in training set)
      • Provide negative feedback with a correction to the species label (species not in training set)
      • Provide negative feedback: no seed in the inference box
      • Provide negative feedback: seed not detected
      • Add API calls for uuid and seed list
      • New Component Tests
      • Update Documentation

Backend:

Database:

  • Create inference structure for the Classification and feedback #23
@MaxenceGui MaxenceGui added the epic Issue that can be achieved by dividing the work into smaller steps. label Mar 20, 2024
@MaxenceGui MaxenceGui added this to the M4 (2024 May) milestone Mar 20, 2024
@github-project-automation github-project-automation bot moved this to Todo in Nachet Mar 20, 2024
@MaxenceGui MaxenceGui pinned this issue Apr 5, 2024
ChromaticPanic (Collaborator) commented

Some ideas for a potential feedback mechanism:

  • click a prediction box, show a callout with more information
    • could be model explanation, if our models do that
    • ranked predictions / or other info users want to see
    • the callout can have check or X for feedback
    • selecting X would give the option to specify if:
      • bounding box issue
        • Let them redraw
        • or let them say "there are multiple seeds in this box" (If that's useful for negative reinforcement)
      • wrong species - specify correct one from list

Projects: Nachet (Status: In Progress)
Development: no branches or pull requests
5 participants