## Pre-built Applications Overview

The table below summarizes all pre-built applications shipped with HugNLP; a minimal usage sketch follows the table.

| Applications | Running Tasks | Task Notes | PLM Models | Documents |
| --- | --- | --- | --- | --- |
| Default Application | run_seq_cls.sh | Goal: Standard Fine-tuning or Prompt-tuning for sequence classification on a user-defined dataset. <br> Path: applications/default_applications | BERT, RoBERTa, DeBERTa | click |
| | run_seq_labeling.sh | Goal: Standard Fine-tuning for sequence labeling on a user-defined dataset. <br> Path: applications/default_applications | BERT, RoBERTa, ALBERT | |
| Pre-training | run_pretrain_mlm.sh | Goal: Pre-training via Masked Language Modeling (MLM). <br> Path: applications/pretraining | BERT, RoBERTa | click |
| | run_pretrain_casual_lm.sh | Goal: Pre-training via Causal Language Modeling (CLM). <br> Path: applications/pretraining | BERT, RoBERTa | click |
| GLUE Benchmark | run_glue.sh | Goal: Standard Fine-tuning or Prompt-tuning for GLUE classification tasks. <br> Path: applications/benchmark/glue | BERT, RoBERTa, DeBERTa | |
| | run_causal_incontext_glue.sh | Goal: In-context learning for GLUE classification tasks. <br> Path: applications/benchmark/glue | GPT-2 | |
| CLUE Benchmark | clue_finetune_dev.sh | Goal: Standard Fine-tuning and Prompt-tuning for CLUE classification tasks. <br> Path: applications/benchmark/clue | BERT, RoBERTa, DeBERTa | |
| | run_clue_cmrc.sh | Goal: Standard Fine-tuning for the CLUE CMRC2018 task. <br> Path: applications/benchmark/cluemrc | BERT, RoBERTa, DeBERTa | |
| | run_clue_c3.sh | Goal: Standard Fine-tuning for the CLUE C3 task. <br> Path: applications/benchmark/cluemrc | BERT, RoBERTa, DeBERTa | |
| | run_clue_chid.sh | Goal: Standard Fine-tuning for the CLUE CHID task. <br> Path: applications/benchmark/cluemrc | BERT, RoBERTa, DeBERTa | |
| Instruction-Prompting | run_causal_instruction.sh | Goal: Cross-task training via generative Instruction-tuning based on a causal PLM. You can use it to train a small ChatGPT-style model. <br> Path: applications/instruction_prompting/instruction_tuning | GPT-2 | click |
| | run_zh_extract_instruction.sh | Goal: Cross-task training via extractive Instruction-tuning based on the Global Pointer model. <br> Path: applications/instruction_prompting/chinese_instruction | BERT, RoBERTa, DeBERTa | click |
| | run_causal_incontext_cls.sh | Goal: In-context learning for user-defined classification tasks. <br> Path: applications/instruction_prompting/incontext_learning | GPT-2 | click |
| Information Extraction | run_extractive_unified_ie.sh | Goal: HugIE: training a unified Chinese information extraction model via extractive Instruction-tuning. <br> Path: applications/information_extraction/HugIE | BERT, RoBERTa, DeBERTa | click |
| | api_test.py | Goal: HugIE: API test. <br> Path: applications/information_extraction/HugIE | - | click |
| | run_fewnerd.sh | Goal: Prototypical learning for few-shot named entity recognition, including SpanProto and TokenProto. <br> Path: applications/information_extraction/fewshot_ner | BERT | |
| Code NLU | run_clone_cls.sh | Goal: Standard Fine-tuning for the code clone classification task. <br> Path: applications/code/code_clone | CodeBERT, CodeT5, GraphCodeBERT, PLBART | click |
| | run_defect_cls.sh | Goal: Standard Fine-tuning for the code defect classification task. <br> Path: applications/code/code_defect | CodeBERT, CodeT5, GraphCodeBERT, PLBART | click |
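
Each entry above maps to a launcher script under the listed path. As a minimal sketch (assuming the scripts are launched from the repository root and that model and dataset settings are configured by editing the variables inside the script, which may differ per application), a run could look like:

```bash
# Minimal sketch: launching a pre-built application from the HugNLP repository root.
# Assumption: hyperparameters (pre-trained model, dataset path, etc.) are set by
# editing variables inside the script; check each script for its exact interface.
bash applications/default_applications/run_seq_cls.sh
```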

## Pre-built Application Settings

The table below shows which settings are supported by each pre-built application.

Notes:

- ✅: finished
- ⌛️: to do
- ⛔️: not available
| Applications | Running Tasks | Adv-training | Parameter-efficient | Pattern-Verbalizer | Instruction-Prompting | Self-training | Calibration |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Default Application | run_seq_cls.sh | | | | | | |
| | run_seq_labeling.sh | | | | | | |
| Pre-training | run_pretrain_mlm.sh | | | | | | |
| | run_pretrain_casual_lm.sh | | | | | | |
| GLUE Benchmark | run_glue.sh | | | | | | |
| | run_causal_incontext_glue.sh | | | | | | |
| CLUE Benchmark | clue_finetune_dev.sh | | | | | | |
| | run_clue_cmrc.sh | | | | | | |
| | run_clue_c3.sh | | | | | | |
| | run_clue_chid.sh | | | | | | |
| Instruction-Prompting | run_causal_instruction.sh | | | | | | |
| | run_zh_extract_instruction.sh | | | | | | |
| | run_causal_incontext_cls.sh | ⛔️ | ⛔️ | | | | |
| Information Extraction | run_extractive_unified_ie.sh | | | | | | |
| | api_test.py | ⛔️ | | | | | |
| | run_fewnerd.sh | | | | | | |
| Code NLU | run_clone_cls.sh | | | | | | |
| | run_defect_cls.sh | | | | | | |
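
For a quick check of the HugIE application listed above, the API test script can be invoked directly. The following is a hedged sketch assuming api_test.py runs standalone from the repository root and needs no command-line arguments; the real script may expect additional configuration:

```bash
# Hedged sketch: running the HugIE API test from the HugNLP repository root.
# Assumption: api_test.py locates or downloads the HugIE checkpoint itself and
# takes no required arguments; check the script for any needed settings.
python applications/information_extraction/HugIE/api_test.py
```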