Welcome to my repository of NLP assignments completed as part of my university coursework. This collection includes projects on classification models, word embeddings, and the use of pre-trained models. Below is an overview of each assignment, along with the notebook that contains the code and results.
- Assignment 1: Classification Models
- Assignment 2: Word Embeddings
- Assignment 3: Using Pre-trained Models
- Results
## Assignment 1: Classification Models

In this assignment, we explored various classification models for natural language processing tasks. The primary objectives were:
- Understanding the fundamentals of text classification.
- Implementing different classification algorithms such as Naive Bayes and logistic regression.
- Evaluating the performance of each model using metrics like accuracy, precision, recall, and F1-score.
- `Hate-Tweet-Classification.ipynb`: Contains the code for training and evaluating the classification models.
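As a rough illustration of this kind of pipeline, the sketch below trains both model types with scikit-learn and prints the metrics listed above. The toy texts, labels, and split are placeholders for illustration, not the assignment's dataset:

```python
# Minimal sketch: TF-IDF features + Naive Bayes / logistic regression.
# The texts and labels below are toy placeholders, not the real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["I love this!", "You are awful", "What a great day", "I hate everything"]
labels = [0, 1, 0, 1]  # 0 = not hateful, 1 = hateful (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=42
)

for model in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    clf = make_pipeline(TfidfVectorizer(), model)  # vectorize, then classify
    clf.fit(X_train, y_train)
    print(model.__class__.__name__)
    # Reports accuracy, precision, recall, and F1-score per class
    print(classification_report(y_test, clf.predict(X_test), zero_division=0))
```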
## Assignment 2: Word Embeddings

This assignment focused on creating and utilizing word embeddings for NLP tasks. Key learning outcomes included:
- Understanding word embeddings and their significance in NLP.
- Training custom word embeddings using Word2Vec.
- `Word-Embeddings.ipynb`: Script for training word embeddings and performing related tasks.
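For reference, here is a minimal sketch of training custom embeddings with gensim's `Word2Vec`. The tokenized corpus and hyperparameter values are illustrative assumptions, not the assignment's actual configuration:

```python
# Minimal sketch: train Word2Vec embeddings on a toy tokenized corpus.
from gensim.models import Word2Vec

corpus = [
    ["natural", "language", "processing", "is", "fun"],
    ["word", "embeddings", "capture", "meaning"],
    ["language", "models", "learn", "from", "text"],
]

model = Word2Vec(
    sentences=corpus,
    vector_size=100,  # dimensionality of each word vector
    window=5,         # context window size
    min_count=1,      # keep every word in this tiny corpus
    workers=4,
)

vector = model.wv["language"]                        # 100-dim vector for a word
similar = model.wv.most_similar("language", topn=3)  # nearest neighbours
print(similar)
```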
## Assignment 3: Using Pre-trained Models

In this assignment, we leveraged pre-trained NLP models to solve complex tasks efficiently. The objectives were:
- Understanding the benefits of using pre-trained models.
- Applying models like BERT and other transformers for text classification, sentiment analysis, and more.
- Fine-tuning pre-trained models for specific tasks.
- `Pre-Trained-Models.ipynb`: Code for applying and fine-tuning pre-trained models.
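As a quick illustration of applying (not fine-tuning) a pre-trained model, the sketch below runs sentiment analysis with the Hugging Face `transformers` pipeline, which downloads the library's default model for the task:

```python
# Minimal sketch: inference with a pre-trained transformer, no training needed.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # uses the library's default model
print(classifier("Pre-trained models make NLP tasks much easier."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```

Fine-tuning follows the same library's standard route of loading a model with `AutoModelForSequenceClassification` and training it on task-specific data; the notebook contains the full code.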
## Results

Detailed analysis and discussion can be found within the corresponding Jupyter notebook for each assignment.
Feel free to explore the code, use it as a reference for your own projects, and provide feedback or suggestions for improvement.