DM i AI

Welcome to the DM i AI event, hosted by Ambolt ApS. This GitHub repository contains all the information you need for the event. Please read it in full before proceeding to the use cases, and make sure to read the full description of every use case. You will be awarded points for each use case based on how well you score in it.

Use cases

Below you can find the four use cases for the DM i AI event.
Within each use case, you will find a description together with a template that can be used to set up an API endpoint.
The API endpoint is required and will be used for submission. Emily can help with setting up the API, but you are free to set it up on your own. The requirements for the API endpoints are specified in the respective use cases.

- Where's Waldo
- Movie Rating Prediction
- Racing Track Simulation
- IQ Test Solver

Clone this GitHub repository to download Emily templates for all four use cases.

```
git clone https://github.com/amboltio/DM-i-AI.git
```

Inside the DM-i-AI folder, you will find the four use cases. To open a use case with Emily, type `emily open <use-case>`, e.g. `emily open wheres-waldo` to open the first use case.

Emily CLI

The Emily CLI is built and maintained by Ambolt to help developers and teams implement and run production-ready, machine-learning-powered microservices quickly and easily.
Click here to get started with Emily. Emily can assist you in developing the required API endpoints for the use cases. Every use case comes with a predefined and documented template that ensures correct API endpoints for that use case. You can find the documentation of the entire framework here.
The templates are built on top of the FastAPI framework, which should be used to specify the endpoints in every use case.
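
To give a feel for what such a service looks like, here is a minimal FastAPI sketch. The route and the request/response models below are invented for illustration; the actual endpoints and models are defined by each use case's template.

```python
# Minimal FastAPI sketch. The route name and models are illustrative only;
# use the endpoints and models defined in the actual use-case template.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class PredictRequest(BaseModel):   # hypothetical request model
    data: str

class PredictResponse(BaseModel):  # hypothetical response model
    prediction: float

@app.post("/predict", response_model=PredictResponse)
def predict(request: PredictRequest) -> PredictResponse:
    # Replace this stub with your model's inference logic.
    return PredictResponse(prediction=0.0)
```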

Discord Channel

Come hang out and talk to the other competitors on our Discord channel. Discuss the use cases with each other, or get in touch with the Ambolt staff to resolve any issues or questions that may arise during the competition. Join here!

Getting started without Emily

You are not required to use Emily to compete in this event; however, we strongly recommend it if you are not experienced in developing APIs and microservices. If you choose not to use Emily, you should check the individual templates to find the requirements for the different API endpoints. These have to match exactly for the evaluation service to work. Inside the `dtos` folder you can find the request and response models, describing the input and output requirements for your API.
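
For illustration, a request/response pair in the style of the `dtos` folder might look as follows; the field names here are hypothetical, so use the actual models from the repository.

```python
# Illustrative request/response models. Field names are hypothetical;
# the real models live in the dtos folder of each use-case template.
from typing import List
from pydantic import BaseModel

class MovieRatingRequest(BaseModel):
    reviews: List[str]    # e.g. one review text per movie

class MovieRatingResponse(BaseModel):
    ratings: List[float]  # one predicted rating per review
```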

Submission

When you are ready to submit, click here for instructions on how to deploy. Then head over to the Submission Form and submit your model by providing the host address of your API and the UUID you obtained during sign-up. Make sure that you have tested the connection to your API before you submit!
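
A quick way to test the connection is a small script like the sketch below; the host address and route are placeholders for your own deployment.

```python
# Sanity-check that the deployed API is reachable before submitting.
# Replace HOST (and the route) with your own deployment details.
import requests

HOST = "http://<your-server-ip>:8000"  # placeholder address

response = requests.get(HOST + "/")    # or whichever route your template exposes
print(response.status_code, response.text)
```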

You can only submit once per use case, so we highly recommend that you test your solution before submitting. You can do this on the submission form by using the Test submission button; you can test as many times as you like. When you test, your score from the test will show up on the leaderboard, so you can see how you compare to the other teams.

When you test your solution on the submission form, it is evaluated on a test set. When you submit your solution and receive the final score for that use case, it is evaluated on a validation set that is different from the test set. This means the score you obtained when testing may differ from the score you get when submitting, so we encourage you not to overfit to the test set!

Ranked score and total score

The scoreboard will display a "ranked score" for each use case and a "total score". The ranked score reflects the placement your best model has achieved relative to the other participants' models. The current best model takes first place and gets 100 points. If two models perform equally well, they share a ranked score.

The total score is simply the sum of your ranked scores across the use cases.
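
As a sketch of how such a ranking could be computed (this is not the official scoring code, and the point values for places below first are an assumption here), ties share a rank and the best score earns 100 points:

```python
# Sketch of competition-style ranking as described above. The best score
# gets 100 points and equal scores share a rank; how lower ranks map to
# points is an assumption in this example (10 points per place).
def ranked_scores(scores):
    ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    points, rank, prev = {}, 0, None
    for i, (team, score) in enumerate(ordered):
        if score != prev:              # a new score starts a new rank;
            rank, prev = i + 1, score  # ties keep the previous rank
        points[team] = max(100 - (rank - 1) * 10, 0)
    return points

print(ranked_scores({"A": 0.91, "B": 0.91, "C": 0.75}))
# {'A': 100, 'B': 100, 'C': 80} -- A and B tie and share first place
```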

This format also means that you can lose points (or be overtaken by other teams) during the week if they submit a model that is better than yours.

Deadline for submission

The deadline for submission is Sunday, November 7th, at end of day (23:59).

Final evaluation

Upon completion of the contest, the top 5 highest-ranking teams will be asked to submit their training code and trained models for validation. The final ranking will be announced on November 30th.

How to get a server for deployment?

For the submission, you are expected to host the server on which the REST API is deployed. You can sign up for Azure for Students, where you get free credits that can be used to create a virtual machine; since the competition is only for students, we expect everyone to be able to do this. Alternatively, you can deploy your submission locally (this requires a public IP).
Azure's documentation describes how to create a virtual machine.
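
As a sketch (assuming your FastAPI app object lives in a hypothetical main.py), binding the server to 0.0.0.0 makes it reachable from outside the machine via the public IP; remember to also open the chosen port in your VM's network rules.

```python
# Serve the FastAPI app publicly with uvicorn. The module name main and
# port 8000 are assumptions; adjust them to your own project.
import uvicorn

from main import app  # hypothetical module containing your FastAPI app

if __name__ == "__main__":
    # host="0.0.0.0" binds all network interfaces so the evaluation
    # service can reach the API through the machine's public IP.
    uvicorn.run(app, host="0.0.0.0", port=8000)
```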

What if I have already used my Azure student credits?

If you have already used your credits, reach out to us on Discord or at [email protected] and we will help you out.
