Bert-base-chinese

Table of Contents

  • Model Details
  • Uses
  • Risks, Limitations and Biases
  • Training
  • Evaluation
  • How to Get Started With the Model

Model Details

Model Description

This model has been pre-trained for Chinese; training and random input masking have been applied independently to word pieces (as in the original BERT paper). A tokenization sketch follows the list below.

  • Developed by: HuggingFace team
  • Model Type: Fill-Mask
  • Language(s): Chinese
  • License: [More Information Needed]
  • Parent Model: See the BERT base uncased model for more information about the BERT base model.
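
As an illustrative sketch (not part of the original card), the word-piece segmentation described above can be inspected directly with the tokenizer; the example sentence is only for demonstration:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
# Chinese text is typically split into single-character word pieces;
# during pre-training, masking is applied independently to these pieces.
print(tokenizer.tokenize("今天天气很好"))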

Model Sources

[More Information Needed]

Uses

Direct Use

This model can be used for masked language modeling.
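
For example, a minimal fill-mask sketch using the transformers pipeline (the masked sentence is illustrative, not from the original card):

from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-chinese")
# Print the top predictions for the [MASK] position.
for prediction in fill_mask("巴黎是法国的[MASK]都。"):
    print(prediction["token_str"], prediction["score"])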

Risks, Limitations and Biases

CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.

Significant research has explored bias and fairness issues with language models (see, e.g., Sheng et al. (2021) and Bender et al. (2021)).

Training

Training Procedure

  • type_vocab_size: 2
  • vocab_size: 21128
  • num_hidden_layers: 12
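
As a sketch (not part of the original card), these values can be read back from the published configuration:

from transformers import AutoConfig

config = AutoConfig.from_pretrained("bert-base-chinese")
# These attributes should match the hyperparameters listed above.
print(config.type_vocab_size, config.vocab_size, config.num_hidden_layers)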

Training Data

[More Information Needed]

Evaluation

Results

[More Information Needed]

How to Get Started With the Model

from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and the model with its masked-language-modeling head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("bert-base-chinese")
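
Continuing from the snippet above, a minimal sketch of predicting a masked token (the input sentence is illustrative, and PyTorch is assumed to be installed):

import torch

inputs = tokenizer("巴黎是法国的[MASK]都。", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Locate the [MASK] position and decode the highest-scoring token.
mask_index = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_index].argmax(dim=-1)))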