ProdCor/IA024-Natural-Language-Processing
Deep Learning for Natural Language Processing

Overview

This repository contains the projects and assignments completed for the course "IA024 - Deep Neural Networks for Natural Language Processing". The course covered advanced neural network architectures and methods for natural language processing. Here you will find implementations and experiments involving sentiment analysis, several NLP model families, and state-of-the-art (SoTA) techniques such as prompt engineering.

Projects

Sentiment Analysis on IMDb Dataset

  • Objective: Develop a model to perform sentiment analysis on movie reviews from the IMDb dataset.
  • Techniques Used: Preprocessing of text data, training sentiment classification models, and evaluation of model performance.
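The repository's notebooks are not reproduced here; as a rough illustration of the pipeline (tokenize, vectorize, train, evaluate), below is a minimal bag-of-words logistic-regression baseline in NumPy. The tiny corpus and all function names are invented for the sketch; the actual project may use a different model and preprocessing.

```python
import re
import numpy as np

def tokenize(text):
    """Lowercase a review and split it into word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def build_vocab(reviews):
    """Map every token seen in the corpus to an integer id."""
    vocab = {}
    for review in reviews:
        for tok in tokenize(review):
            vocab.setdefault(tok, len(vocab))
    return vocab

def bow_vector(review, vocab):
    """Bag-of-words count vector for one review."""
    vec = np.zeros(len(vocab))
    for tok in tokenize(review):
        if tok in vocab:
            vec[vocab[tok]] += 1
    return vec

# Tiny stand-in corpus (label 1 = positive, 0 = negative).
train = [("a wonderful, moving film", 1),
         ("boring and painfully slow", 0),
         ("wonderful acting, great story", 1),
         ("slow, boring plot", 0)]
vocab = build_vocab([r for r, _ in train])
X = np.stack([bow_vector(r, vocab) for r, _ in train])
y = np.array([label for _, label in train])

# Logistic regression trained with plain batch gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # sigmoid predictions
    grad = p - y                         # gradient of cross-entropy w.r.t. logits
    w -= 0.5 * X.T @ grad / len(y)
    b -= 0.5 * grad.mean()

score = 1 / (1 + np.exp(-(bow_vector("wonderful film", vocab) @ w + b)))
```

On the real IMDb dataset the same loop would run over 25,000 reviews; the evaluation step would compare thresholded predictions against held-out labels.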

Bengio Architecture

  • Objective: Implement and explore the foundational neural probabilistic language model proposed by Bengio et al. (2003) for natural language processing.
  • Details: Focus on understanding the mechanics of embedding layers and their impact on downstream NLP tasks.
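As a sketch of that architecture's forward pass (look up the embeddings of the n previous words, concatenate them, apply a tanh hidden layer, and softmax over the vocabulary), here is a minimal NumPy version. The dimensions and random initialization are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
V, d, n, H = 50, 8, 3, 16             # vocab size, embedding dim, context length, hidden units

C = rng.normal(0, 0.1, (V, d))        # shared word-embedding matrix
W_h = rng.normal(0, 0.1, (n * d, H))  # concatenated context -> hidden layer
W_o = rng.normal(0, 0.1, (H, V))      # hidden layer -> vocabulary logits

def nplm_forward(context_ids):
    """Bengio-style forward pass: look up, concatenate, tanh, softmax."""
    x = C[context_ids].reshape(-1)    # concatenate the n context embeddings
    h = np.tanh(x @ W_h)
    logits = h @ W_o
    p = np.exp(logits - logits.max()) # numerically stable softmax
    return p / p.sum()                # probability distribution over the next word

probs = nplm_forward(np.array([4, 17, 9]))
```

The embedding matrix C is what makes this model historically important: it is trained jointly with the language-modeling objective, so similar words end up with similar vectors, which is exactly the effect on downstream tasks the project examines.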

Transformers

  • Objective: Study and implement Transformer models to understand their advantages over traditional RNNs in handling sequences.
  • Key Concepts: Attention mechanisms, self-attention, and the ability to handle long-range dependencies.
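The core of those key concepts is scaled dot-product self-attention: every position queries every other position directly, so long-range dependencies cost one step instead of many RNN steps. A minimal single-head version (dimensions and weights are illustrative):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention over a sequence X of shape (T, d_model)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])  # pairwise similarity of all positions
    A = softmax(scores, axis=-1)             # attention weights; each row sums to 1
    return A @ V, A                          # weighted mixture of value vectors

rng = np.random.default_rng(1)
T, d_model, d_k = 5, 8, 4
X = rng.normal(size=(T, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
```

Because the score matrix is T x T, every output position can draw on the entire sequence in one operation, which is the advantage over recurrent models the project studies.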

GPT Model

  • Objective: Implement a Generative Pre-trained Transformer model and explore its capabilities in generating text.
  • Applications: Text generation, understanding the role of unsupervised learning in effective pre-training.
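What distinguishes a GPT-style decoder from the attention above is the causal mask: during pre-training and generation, position t may only attend to positions up to t, so the model can be trained on plain unlabeled text to predict the next token. A minimal sketch of the masking step:

```python
import numpy as np

def causal_mask(T):
    """Additive mask: -inf above the diagonal blocks attention to future positions."""
    return np.triu(np.full((T, T), -np.inf), k=1)

def masked_attention_weights(scores):
    """Softmax over attention scores with the causal mask applied."""
    s = scores + causal_mask(scores.shape[0])
    e = np.exp(s - s.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# With uniform (zero) scores, row t spreads its weight evenly over positions 0..t only.
A = masked_attention_weights(np.zeros((4, 4)))
```

During text generation the model repeatedly applies this masked attention, samples (or greedily picks) the next token from the output distribution, appends it, and continues, which is the loop the project's experiments explore.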

BERT Model

  • Objective: Train and fine-tune a BERT model for various NLP tasks.
  • Focus Areas: Impact of bidirectional training, fine-tuning practices for specific NLP tasks.
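Bidirectional training is made possible by BERT's masked-language-modeling objective: a fraction of input tokens is selected, and of those, 80% are replaced by [MASK], 10% by a random token, and 10% are left unchanged, while the model must recover the originals. A pure-Python sketch of that masking procedure (the toy vocabulary and function name are invented):

```python
import random

MASK = "[MASK]"
VOCAB = ["the", "cat", "sat", "on", "mat", "dog", "ran"]

def mask_tokens(tokens, mask_prob=0.15, seed=0):
    """BERT-style masking: labels record the original token at every corrupted
    position; None means the position is not part of the prediction loss."""
    rng = random.Random(seed)
    inputs, labels = list(tokens), [None] * len(tokens)
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            labels[i] = tok          # the model must predict this original token
            r = rng.random()
            if r < 0.8:
                inputs[i] = MASK     # 80%: replace with [MASK]
            elif r < 0.9:
                inputs[i] = rng.choice(VOCAB)  # 10%: replace with a random token
            # remaining 10%: keep the token unchanged
    return inputs, labels

orig = ["the", "cat", "sat", "on", "the", "mat"] * 20
inp, lab = mask_tokens(orig)
```

Because the target can sit anywhere in the sequence, the model is free to attend in both directions, unlike the left-to-right GPT objective; fine-tuning then replaces the masked-token head with a small task-specific head.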

LoRA and QLoRA Methods

  • Objective: Implement Low-Rank Adaptation (LoRA) and its variant QLoRA to improve parameter efficiency in Transformer models.
  • Insights: Explore the balance between model complexity and performance with parameter-efficient techniques.
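The parameter-efficiency idea can be sketched in a few lines: LoRA freezes the pretrained weight W and learns only a low-rank update scaled by alpha/r, with the up-projection initialized to zero so training starts from the pretrained behavior. Dimensions below are illustrative; QLoRA additionally stores W in 4-bit quantized form, which this sketch does not show.

```python
import numpy as np

rng = np.random.default_rng(2)
d_in, d_out, r, alpha = 64, 64, 4, 8

W = rng.normal(0, 0.02, (d_in, d_out))  # frozen pretrained weight (not trained)
A = rng.normal(0, 0.02, (d_in, r))      # trainable low-rank down-projection
B = np.zeros((r, d_out))                # trainable up-projection, zero-initialized

def lora_forward(x):
    """y = x W + (alpha / r) * x A B; with B = 0 the adapter starts as a no-op."""
    return x @ W + (alpha / r) * (x @ A) @ B

x = rng.normal(size=(1, d_in))
y = lora_forward(x)

trainable = A.size + B.size  # parameters actually updated during fine-tuning
```

Here only 512 adapter parameters are trained against 4,096 frozen ones, an 8x reduction; at transformer scale the same ratio is what lets the projects fine-tune large models on modest hardware.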

Prompt Engineering and Multi-Agent Systems

  • Objective: Develop strategies for effective prompt engineering to enhance the performance of language models.
  • Challenges: Design and test multi-agent systems where agents communicate or compete using natural language.
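A common prompt-engineering pattern from the project area is few-shot prompting: the prompt states the task, shows worked examples, and ends where the model should continue. A minimal template builder (the function name and example reviews are invented for the sketch; the course projects may structure their prompts differently):

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: instruction, worked examples, then the open query."""
    lines = [task, ""]
    for example_input, example_output in examples:
        lines += [f"Input: {example_input}", f"Output: {example_output}", ""]
    lines += [f"Input: {query}", "Output:"]  # the model completes after "Output:"
    return "\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each movie review as positive or negative.",
    [("A wonderful, moving film.", "positive"),
     ("Boring and painfully slow.", "negative")],
    "Great acting and a clever plot.",
)
```

In a multi-agent setup, each agent would receive its own prompt of this shape (role instruction plus the conversation so far), and the agents' generated outputs are fed into each other's prompts as the natural-language channel.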
