Hate speech classification, word embeddings, and the use of pre-trained models: coursework for NLP at Maastricht University, 2023-24.


NLP LabWork

Welcome to my repository of NLP assignments completed as part of my university coursework. This collection includes various projects focused on classification models, word embeddings, and the use of pre-trained models. Below you will find an overview of each assignment along with instructions on how to run the code and understand the results.

Table of Contents

  1. Assignment 1: Classification Models
  2. Assignment 2: Word Embeddings
  3. Assignment 3: Using Pre-trained Models
  4. Results
  5. Feedback

Assignment 1: Classification Models

In this assignment, we explored various classification models for natural language processing tasks. The primary objectives were:

  • Understanding the fundamentals of text classification.
  • Implementing different classification algorithms such as Naive Bayes and logistic regression.
  • Evaluating the performance of each model using metrics like accuracy, precision, recall, and F1-score.
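The pipeline above can be sketched with scikit-learn. This is a minimal illustration on toy data, not the actual tweet dataset or code from Hate-Tweet-Classification.ipynb:

```python
# Illustrative sketch: TF-IDF features + Naive Bayes / logistic regression
# on a tiny toy corpus (labels here are invented for demonstration).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score
from sklearn.naive_bayes import MultinomialNB

texts = [
    "I love this community", "you are awful", "what a great day",
    "I hate all of you", "such a kind person", "a terrible, hateful remark",
]
labels = [0, 1, 0, 1, 0, 1]  # 0 = benign, 1 = hateful (toy labels)

# Turn raw text into sparse TF-IDF feature vectors.
vec = TfidfVectorizer()
X = vec.fit_transform(texts)

# Fit both classifiers and report training-set accuracy and F1.
for model in (MultinomialNB(), LogisticRegression(max_iter=1000)):
    model.fit(X, labels)
    preds = model.predict(X)
    print(type(model).__name__,
          accuracy_score(labels, preds),
          f1_score(labels, preds))
```

In practice the notebook would evaluate on a held-out split rather than the training data, and add precision/recall alongside accuracy and F1.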

Files

  • Hate-Tweet-Classification.ipynb: Contains the code for training and evaluating classification models.

Assignment 2: Word Embeddings

This assignment focused on creating and utilizing word embeddings for NLP tasks. Key learning outcomes included:

  • Understanding word embeddings and their significance in NLP.
  • Training custom word embeddings using Word2Vec.

Files

  • Word-Embeddings.ipynb: Notebook for training word embeddings and performing related tasks.

Assignment 3: Using Pre-trained Models

In this assignment, we leveraged pre-trained NLP models to solve complex tasks efficiently. The objectives were:

  • Understanding the benefits of using pre-trained models.
  • Applying transformer models such as BERT to text classification, sentiment analysis, and other tasks.
  • Fine-tuning pre-trained models for specific tasks.

Files

  • Pre-Trained-Models.ipynb: Code for applying and fine-tuning pre-trained models.

Results

Detailed analysis and discussions can be found within the corresponding Jupyter notebooks for each assignment.

Feedback

Feel free to explore the code, use it as a reference for your own projects, and share feedback or suggestions for improvement.
