Add logger examples in eval script #59

Open · wants to merge 2 commits into `master`
16 changes: 16 additions & 0 deletions README.md
@@ -62,6 +62,22 @@ If you are looking for a simple challenge configuration that you can replicate t

11. To update the challenge on EvalAI, make changes in the repository, push them to the `challenge` branch, and wait for the build to complete.

### Printing and Logging in Evaluation Script
Reviewer comment (Member): Change to "Logging in Evaluation Script"

`print` statements will show up on the console directly.
To get `logging` statements from the evaluation script, make sure the logger has a `stdout` handler attached; the output from `stdout` is redirected to the submission worker's console.
An example logger can be created like so:
Reviewer comment (Member): Change to "Here's an example of setting up a logger:"


```python
import logging
import sys

# Create a dedicated logger for the evaluation script
eval_script_logger = logging.getLogger(name='eval_script')
eval_script_logger.setLevel(logging.DEBUG)

# Attach a stdout handler so log output reaches the submission worker's console
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
eval_script_logger.addHandler(handler)
```

Then, we can use this logger anywhere in the script, and messages at the corresponding levels will show up in the output.
Reviewer comment (Member): Change to "Once set up, you can use the logger anywhere in the evaluation script to display logs on the EvalAI manage tab."
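For example, once a logger like the one above is in place, calls like the following show up in the worker console and on the EvalAI manage tab. This is a minimal sketch; the messages and values are illustrative and not part of this PR:

```python
import logging
import sys

eval_script_logger = logging.getLogger('eval_script')
eval_script_logger.setLevel(logging.DEBUG)
eval_script_logger.addHandler(logging.StreamHandler(sys.stdout))

# Illustrative messages; the values here are placeholders
eval_script_logger.info("Loaded annotations for phase: %s", "dev")
eval_script_logger.debug("Parsed %d submission entries", 42)
eval_script_logger.warning("Metric2 missing from submission; defaulting to 0")
```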


## Create challenge using config

1. Fork this repository.
18 changes: 15 additions & 3 deletions evaluation_script/main.py
@@ -1,8 +1,20 @@
import random

import logging
import time
import sys

def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
print("Starting Evaluation.....")

eval_script_logger = logging.getLogger(name='eval_script')
eval_script_logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
eval_script_logger.addHandler(handler)

Reviewer comment (Member) on lines +9 to +17: Can you create a new method called get_logger in this script and use that to get the logger here?
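A minimal sketch of the suggested refactor. The `get_logger` name comes from the reviewer's request; the exact signature and defaults are assumptions, and it relies on the `logging` and `sys` imports already added at the top of the file:

```python
def get_logger(name='eval_script', level=logging.DEBUG):
    # Build (or reuse) a logger that writes to stdout so output reaches the worker console
    logger = logging.getLogger(name)
    logger.setLevel(level)
    if not logger.handlers:  # avoid stacking duplicate handlers on repeated calls
        handler = logging.StreamHandler(sys.stdout)
        handler.setLevel(level)
        handler.setFormatter(logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
        logger.addHandler(handler)
    return logger
```

Inside `evaluate`, the setup block above would then collapse to `eval_script_logger = get_logger()`.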

"""
Evaluates the submission for a particular challenge phase and returns score
Arguments:
@@ -54,7 +66,7 @@ def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
        ]
        # To display the results in the result file
        output["submission_result"] = output["result"][0]["train_split"]
        print("Completed evaluation for Dev Phase")
        eval_script_logger.info("Completed evaluation for Dev Phase")
    elif phase_codename == "test":
        print("Evaluating for Test Phase")
        output["result"] = [
@@ -77,5 +89,5 @@ def evaluate(test_annotation_file, user_submission_file, phase_codename, **kwargs):
        ]
        # To display the results in the result file
        output["submission_result"] = output["result"][0]
        print("Completed evaluation for Test Phase")
        eval_script_logger.info("Completed evaluation for Test Phase")
    return output
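As a quick sanity check of the logging changes, the evaluation entry point can be exercised locally. This is a hedged sketch: the file paths are placeholders and it assumes the repository layout allows importing `evaluation_script.main`:

```python
# Hypothetical local test; the annotation/submission paths are illustrative placeholders
from evaluation_script.main import evaluate

result = evaluate(
    test_annotation_file="annotations/test_annotations_devsplit.json",
    user_submission_file="submission.json",
    phase_codename="dev",
)
print(result["submission_result"])  # the logger output above also appears on stdout
```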