The Dataset for Pretraining BERT

https://d2l.ai/chapter_natural-language-processing-pretraining/bert-dataset.html

In the BERT masked LM code, the 10% random-replacement case should substitute a random word (token), not a random number (vocabulary index).

The line in question should be changed to:
vocab.to_tokens(random.randint(0, len(vocab) - 1))
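For anyone skimming: the fix lands in the 80%/10%/10% branch of the masking helper. Below is a minimal sketch of that branch with the change applied; it assumes a vocabulary object exposing to_tokens and __len__ like d2l.Vocab, and the function name mirrors the book's _replace_mlm_tokens but is not a verbatim copy.

import random

def _replace_mlm_tokens(tokens, candidate_pred_positions, num_mlm_preds, vocab):
    # Copy the input tokens; selected positions get masked or replaced
    mlm_input_tokens = list(tokens)
    pred_positions_and_labels = []
    random.shuffle(candidate_pred_positions)
    for pos in candidate_pred_positions:
        if len(pred_positions_and_labels) >= num_mlm_preds:
            break
        if random.random() < 0.8:
            # 80% of the time: replace the word with the '<mask>' token
            masked_token = '<mask>'
        elif random.random() < 0.5:
            # 10% of the time: keep the word unchanged
            masked_token = tokens[pos]
        else:
            # 10% of the time: replace with a random *word*, not a raw index;
            # vocab.to_tokens maps the sampled index back to its token string
            masked_token = vocab.to_tokens(random.randint(0, len(vocab) - 1))
        mlm_input_tokens[pos] = masked_token
        pred_positions_and_labels.append((pos, tokens[pos]))
    return mlm_input_tokens, pred_positions_and_labels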

I agree. Contribute if you like.
https://d2l.ai/chapter_appendix-tools-for-deep-learning/contributing.html
@goldpiggy

Hi @HeartSea15, feel free to post a PR as @StevenJokess suggested! Thanks!

In the masked LM task, should the input tokens take the WordPiece "##" prefix into account? There is no handling of it in our code.
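As far as I can tell, the book's WikiText preprocessing tokenizes on whitespace, so "##" pieces never appear and the code does not need to handle them. If you do feed in WordPiece tokens, one option (used in BERT's later whole-word-masking variant) is to group a word with its "##" pieces when picking mask candidates. The helper below is a hypothetical sketch of that grouping, not part of the book's code.

def whole_word_candidate_positions(tokens):
    # Hypothetical helper: group a word with its trailing '##' pieces so they
    # are masked together (whole-word masking); special tokens are skipped
    groups = []
    for i, token in enumerate(tokens):
        if token in ('<cls>', '<sep>'):
            continue
        if token.startswith('##') and groups:
            groups[-1].append(i)  # attach the subword piece to the previous word
        else:
            groups.append([i])    # start a new word group
    return groups

print(whole_word_candidate_positions(['<cls>', 'play', '##ing', 'chess', '<sep>']))
# [[1, 2], [3]]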

Hi, does the value of max_num_mlm_preds follow the paper, or how is it chosen?
Thanks for the reply.
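If it helps: as far as I can tell, the cap on masked positions in _pad_bert_inputs follows the paper's 15% masking proportion, applied to the maximum sequence length so the padded prediction tensors have a fixed size. A rough sketch under that assumption, using the max_len value the chapter trains with:

max_len = 64                               # maximum BERT input length used in the chapter
max_num_mlm_preds = round(max_len * 0.15)  # 15% of positions, as in the BERT paper
print(max_num_mlm_preds)                   # 10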

