
[Improvement] Revamped Question Generator Model #31

Open
Roaster05 opened this issue Mar 23, 2024 · 2 comments

Comments

@Roaster05
Contributor

Description:

Question generation is a cornerstone of this project's educational support, but the current model has two major limitations. First, it only produces short-answer questions, while a more robust model should also generate Boolean (True/False) questions, short-answer questions requiring concise one-word responses, and multiple-choice questions (MCQs). Second, each question-generation cycle takes roughly 45-60 seconds, leaving substantial room for efficiency improvements.

Expected Output:

  • Develop and integrate a question-generation model that supports a broader range of question types: Boolean (True/False) questions, concise short-answer questions, and multiple-choice questions (MCQs).
  • Streamline the generation pipeline to significantly reduce the current 45-60 second processing time and improve the overall user experience.
  • Implement context-based answer key generation that explains the rationale behind each correct answer, fostering deeper understanding and enriching the learning experience.
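One way the expanded question taxonomy could be represented is with explicit types that each carry an answer key and an explanation field for the context-based rationale. This is a minimal sketch; the class and field names are hypothetical, not taken from the project's codebase:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class BooleanQuestion:
    statement: str
    answer: bool
    explanation: str = ""  # context-based rationale for the answer key

@dataclass
class ShortAnswerQuestion:
    prompt: str
    answer: str  # concise, often one-word, response
    explanation: str = ""

@dataclass
class MCQQuestion:
    prompt: str
    options: List[str]
    correct_index: int
    explanation: str = ""

    def is_correct(self, choice: int) -> bool:
        """Check a user's selected option against the answer key."""
        return choice == self.correct_index
```

Keeping the explanation alongside the answer lets the answer-key generator populate both in one pass.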
@Anshukumar8529

I've pointed out a memory-usage issue in the generate-question function: the model and tokenizer objects are created on every call, which can lead to GPU memory leaks. This could be prevented by moving model and tokenizer creation outside the function, or by making sure their memory is released after use (e.g., via a .free()-style cleanup call).

@Anshukumar8529

1. Loaded the QA and keyword-detection models and tokenizers once and reused them throughout the application.
2. Added exception handling around model and tokenizer loading.
3. Updated the `do_POST` function to send appropriate HTTP responses, such as a 400 status for a bad request or a 500 status for internal server errors.
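The `do_POST` error handling in step 3 could look like the sketch below, which separates request validation from the handler so the 400/500 logic is unit-testable. The payload shape (a JSON body with a `context` field) is an assumption for illustration, not the project's actual API:

```python
import json
from http.server import BaseHTTPRequestHandler

def handle_generate(raw_body: bytes):
    """Validate the request body and return (status_code, response_dict)."""
    try:
        payload = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400, {"error": "request body must be valid JSON"}
    context = payload.get("context")
    if not isinstance(context, str) or not context:
        return 400, {"error": "'context' field (non-empty string) is required"}
    try:
        # Placeholder for the actual question-generation call.
        question = f"What is the main idea of: {context[:40]}?"
        return 200, {"question": question}
    except Exception as exc:  # model failures surface as 500s
        return 500, {"error": f"internal error: {exc}"}

class QuestionHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        status, body = handle_generate(self.rfile.read(length))
        self.send_response(status)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body).encode())
```

Returning explicit 400s for malformed input keeps model errors (500s) distinguishable in client logs.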
