
Charabia

Library used by Meilisearch to tokenize queries and documents

Role

The tokenizer’s role is to take a sentence or phrase and split it into smaller units of language, called tokens. It finds all the words in a string according to each language’s particularities.

Details

Charabia provides a simple API to segment, normalize, or tokenize (segment + normalize) text by detecting its Script/Language and choosing the specialized pipeline for it.
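Detection happens per token: each `Token` produced by the pipeline records which script (and, when identifiable, which language) it was assigned. A minimal sketch, assuming the public `script` and `language` fields that recent charabia versions expose on `Token`:

use charabia::Tokenize;

// Mixed-script input: the detector assigns a script per token,
// and a language when it can be identified.
for token in "Hello שלום".tokenize() {
    println!("{:?}: {:?} / {:?}", token.lemma(), token.script, token.language);
}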

Supported languages

Charabia is multilingual, featuring optimized support for:

| Script - Language | specialized segmentation | specialized normalization | Segmentation Performance level | Tokenization Performance level |
| --- | --- | --- | --- | --- |
| Latin - Any | unicode-segmentation | ✅ lowercase + deunicode | 🟨 ~13MiB/sec | 🟧 ~5MiB/sec |
| Chinese - CMN 🇨🇳 | jieba | ✅ traditional-to-simplified conversion | 🟨 ~9MiB/sec | 🟧 ~5MiB/sec |
| Hebrew 🇮🇱 | unicode-segmentation | ✅ diacritics removal | 🟩 ~21MiB/sec | 🟨 ~11MiB/sec |
| Japanese 🇯🇵 | lindera | | 🟧 ~5MiB/sec | 🟧 ~4MiB/sec |
| Thai 🇹🇭 | dictionary based | | 🟩 ~23MiB/sec | 🟨 ~14MiB/sec |
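As an illustration of the table above, the Chinese pipeline normalizes traditional characters to their simplified counterparts. A minimal sketch, assuming the crate is compiled with its Chinese feature enabled:

use charabia::Tokenize;

// "愛" is the traditional form; the traditional-to-simplified
// normalizer listed above should convert its lemma to "爱".
let mut tokens = "愛".tokenize();
assert_eq!(tokens.next().unwrap().lemma(), "爱");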

We aim to provide global language support, and your feedback helps us move closer to that goal. If you notice inconsistencies in your search results or the way your documents are processed, please open an issue on our GitHub repository.

If you have a particular need that charabia does not support, please share it in the product repository by creating a dedicated discussion.

About Performance level

Performance levels are based on the tokenizer’s throughput (MiB/sec), measured on a Scaleway Elastic Metal server EM-A410X-SSD (CPU: Intel Xeon E5 1650, RAM: 64 GB) using jemalloc:

  • 0️⃣⬛️: 0 -> 1 MiB/sec
  • 1️⃣🟥: 1 -> 3 MiB/sec
  • 2️⃣🟧: 3 -> 8 MiB/sec
  • 3️⃣🟨: 8 -> 20 MiB/sec
  • 4️⃣🟩: 20 -> 50 MiB/sec
  • 5️⃣🟪: 50 MiB/sec or more
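These figures are simply bytes processed per second while the full pipeline consumes its input. As a rough, hypothetical illustration (not the project’s actual benchmark harness), throughput can be estimated like this:

use std::time::Instant;

use charabia::Tokenize;

// Rough MiB/sec estimate: run the whole tokenization pipeline over
// `text` and divide the bytes processed by the elapsed wall time.
fn tokenize_throughput_mib_per_sec(text: &str) -> f64 {
    let start = Instant::now();
    let _token_count = text.tokenize().count(); // drive the iterator to the end
    let elapsed = start.elapsed().as_secs_f64();
    (text.len() as f64 / (1024.0 * 1024.0)) / elapsed
}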

Examples

Tokenization

use charabia::Tokenize;

let orig = "Thé quick (\"brown\") fox can't jump 32.3 feet, right? Brr, it's 29.3°F!";

// tokenize the text.
let mut tokens = orig.tokenize();

let token = tokens.next().unwrap();
// The lemma in the token is normalized: `Thé` became `the`.
assert_eq!(token.lemma(), "the");
// The token is classified as a word.
assert!(token.is_word());

let token = tokens.next().unwrap();
assert_eq!(token.lemma(), " ");
// The token is classified as a separator.
assert!(token.is_separator());
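Beyond the `Tokenize` trait, charabia also exposes a `TokenizerBuilder` for constructing a reusable tokenizer; the sketch below assumes the builder API of recent versions (which can also be configured with custom stop words and separators):

use charabia::TokenizerBuilder;

// Build a reusable tokenizer with the default configuration.
let mut builder = TokenizerBuilder::new();
let tokenizer = builder.build();

// Same normalization as above: `Thé` becomes `the`.
let mut tokens = tokenizer.tokenize("Thé quick fox");
assert_eq!(tokens.next().unwrap().lemma(), "the");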

Segmentation

use charabia::Segment;

let orig = "The quick (\"brown\") fox can't jump 32.3 feet, right? Brr, it's 29.3°F!";

// segment the text.
let mut segments = orig.segment_str();

assert_eq!(segments.next(), Some("The"));
assert_eq!(segments.next(), Some(" "));
assert_eq!(segments.next(), Some("quick"));
