diff --git a/README.md b/README.md
index 38eaae523..28c5377a7 100644
--- a/README.md
+++ b/README.md
@@ -62,7 +62,7 @@ and a list of models that are supported on Windows, see the
 | Model | Framework | Mode | Model Documentation | Benchmark/Test Dataset |
 | -------------------------------------------- | ---------- | ----------| ------------------- | ---------------------- |
-| [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Inference | [Int8](/benchmarks/language_modeling/tensorflow/bert_large/inference/int8/README.md) [FP32](/benchmarks/language_modeling/tensorflow/bert_large/inference/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/inference/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#inference) |
+| [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Inference | [FP32](/benchmarks/language_modeling/tensorflow/bert_large/inference/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/inference/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#inference) |
 | [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Training | [FP32](/benchmarks/language_modeling/tensorflow/bert_large/training/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/training/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#fine-tuning-with-bert-using-squad-data) and [MRPC](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#classification-training-with-bert) |
 | [BERT base](https://arxiv.org/pdf/1810.04805.pdf) | PyTorch | Inference | [FP32 BFloat16**](/quickstart/language_modeling/pytorch/bert_base/inference/cpu/README.md) | [BERT Base SQuAD1.1](https://huggingface.co/csarron/bert-base-uncased-squad-v1) |
 | [BERT large](https://arxiv.org/pdf/1810.04805.pdf) | PyTorch | Inference | [FP32 Int8 BFloat16**](/quickstart/language_modeling/pytorch/bert_large/inference/cpu/README.md) | BERT Large SQuAD1.1 |
diff --git a/benchmarks/README.md b/benchmarks/README.md
index 7dd7728c5..eed066b45 100644
--- a/benchmarks/README.md
+++ b/benchmarks/README.md
@@ -59,7 +59,7 @@ For information on running more advanced use cases using the workload containers
 | Model | Framework | Mode | Model Documentation | Benchmark/Test Dataset |
 | -------------------------------------------- | ---------- | ----------| ------------------- | ---------------------- |
-| [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Inference | [Int8](/benchmarks/language_modeling/tensorflow/bert_large/inference/int8/README.md) [FP32](/benchmarks/language_modeling/tensorflow/bert_large/inference/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/inference/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#inference) |
+| [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Inference | [FP32](/benchmarks/language_modeling/tensorflow/bert_large/inference/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/inference/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#inference) |
 | [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Training | [FP32](/benchmarks/language_modeling/tensorflow/bert_large/training/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/training/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#fine-tuning-with-bert-using-squad-data) and [MRPC](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#classification-training-with-bert) |
 | [BERT base](https://arxiv.org/pdf/1810.04805.pdf) | PyTorch | Inference | [FP32 BFloat16**](/quickstart/language_modeling/pytorch/bert_base/inference/cpu/README.md) | [BERT Base SQuAD1.1](https://huggingface.co/csarron/bert-base-uncased-squad-v1) |
 | [BERT large](https://arxiv.org/pdf/1810.04805.pdf) | PyTorch | Inference | [FP32 Int8 BFloat16**](/quickstart/language_modeling/pytorch/bert_large/inference/cpu/README.md) | BERT Large SQuAD1.1 |
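Since this change drops the Int8 link for BERT TensorFlow inference from both tables, it may be worth confirming that no other Markdown files still point at that link target. The snippet below is a minimal sketch of such a check (a hypothetical helper, not part of the repository); it only assumes the link path shown in the removed lines above.

```python
#!/usr/bin/env python3
"""Hypothetical helper: list Markdown files that still reference the
Int8 inference README link removed from the tables in this diff."""
from pathlib import Path

# Link target dropped from README.md and benchmarks/README.md in this change.
OLD_TARGET = "/benchmarks/language_modeling/tensorflow/bert_large/inference/int8/README.md"

def find_stale_references(repo_root: str = ".") -> list[tuple[Path, int, str]]:
    """Return (file, line number, line text) for every remaining mention of OLD_TARGET."""
    hits = []
    for md_file in Path(repo_root).rglob("*.md"):
        try:
            lines = md_file.read_text(encoding="utf-8", errors="ignore").splitlines()
        except OSError:
            continue  # skip unreadable files
        for lineno, line in enumerate(lines, start=1):
            if OLD_TARGET in line:
                hits.append((md_file, lineno, line.strip()))
    return hits

if __name__ == "__main__":
    for path, lineno, line in find_stale_references():
        print(f"{path}:{lineno}: {line}")
```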