Jupyter Notebook for German named entity recognition benchmark


A simple walkthrough for a named entity recognition (NER) setup for the German language

This repository benchmarks different embeddings for Named Entity Recognition (NER) in German text. As shown in my bachelor thesis, combining BERT embeddings with Flair embeddings yields a new best result on the GermEval-14 NER dataset (F1 score of 86.62).
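The F1 score quoted above is the span-level (entity-level) F1 that is standard for NER evaluation: a prediction only counts as correct if both the entity boundaries and the entity type match. As a rough sketch of how such a score is computed (this is an illustration written for this README, not the evaluation script used in the thesis):

```python
def spans(tags):
    """Extract (start, end, type) entity spans from one BIO tag sequence."""
    out, start, etype = [], None, None
    for i, tag in enumerate(tags + ["O"]):  # "O" sentinel closes a trailing entity
        # A span ends before a B- tag, an O tag, or an I- tag of a different type.
        if tag.startswith("B-") or tag == "O" or (tag.startswith("I-") and tag[2:] != etype):
            if start is not None:
                out.append((start, i, etype))
                start, etype = None, None
        if tag.startswith("B-"):
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is None:  # tolerate I- directly after O
            start, etype = i, tag[2:]
    return out

def span_f1(gold, pred):
    """Micro-averaged span-level precision, recall and F1 over sentence lists."""
    g = set((s,) + sp for s, sent in enumerate(gold) for sp in spans(sent))
    p = set((s,) + sp for s, sent in enumerate(pred) for sp in spans(sent))
    tp = len(g & p)  # exact boundary + type matches
    prec = tp / len(p) if p else 0.0
    rec = tp / len(g) if g else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

In practice a library such as seqeval computes this metric; the sketch just makes the exact-match criterion explicit.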

Setup

Flair Experiments: Flair and Google Colab

BERT Experiments: Transformers and Google Colab

Datasets

BIOES and BIO/IOB formats are considered in the evaluation.

The datasets used in my benchmark are CoNLL-03 and GermEval-14. Additionally, I compared several embeddings on a complaint dataset in my bachelor thesis. Unfortunately, this dataset is not public, but its results can be found in the thesis.
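BIOES differs from BIO only in marking single-token entities (S-) and entity-final tokens (E-), so BIO-annotated data can be converted mechanically. The following converter is a sketch written for this README, not part of the benchmark notebooks:

```python
def bio_to_bioes(tags):
    """Convert a BIO tag sequence to BIOES: single-token entities become S-,
    entity-final tokens become E-; O tags are unchanged."""
    out = []
    for i, tag in enumerate(tags):
        if tag == "O":
            out.append(tag)
            continue
        prefix, etype = tag.split("-", 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        continues = nxt == "I-" + etype  # does the entity extend to the next token?
        if prefix == "B":
            out.append(("B-" if continues else "S-") + etype)
        else:  # "I"
            out.append(("I-" if continues else "E-") + etype)
    return out
```

For example, `["B-PER", "I-PER", "O", "B-LOC"]` becomes `["B-PER", "E-PER", "O", "S-LOC"]`.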
