
Remove broken BERT Large INT8 pointer (#618)
* Remove broken BERT Large INT8 pointer

Signed-off-by: Abolfazl Shahbazi <abolfazl.shahbazi@intel.com>

* Remove one more broken link

Signed-off-by: Abolfazl Shahbazi <abolfazl.shahbazi@intel.com>
ashahba committed Jun 1, 2022
1 parent 60f5712 commit f46b771
Showing 2 changed files with 2 additions and 2 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -62,7 +62,7 @@ and a list of models that are supported on Windows, see the

| Model | Framework | Mode | Model Documentation | Benchmark/Test Dataset |
| -------------------------------------------- | ---------- | ----------| ------------------- | ---------------------- |
-| [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Inference | [Int8](/benchmarks/language_modeling/tensorflow/bert_large/inference/int8/README.md) [FP32](/benchmarks/language_modeling/tensorflow/bert_large/inference/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/inference/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#inference) |
+| [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Inference | [FP32](/benchmarks/language_modeling/tensorflow/bert_large/inference/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/inference/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#inference) |
| [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Training | [FP32](/benchmarks/language_modeling/tensorflow/bert_large/training/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/training/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#fine-tuning-with-bert-using-squad-data) and [MRPC](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#classification-training-with-bert) |
| [BERT base](https://arxiv.org/pdf/1810.04805.pdf) | PyTorch | Inference | [FP32 BFloat16**](/quickstart/language_modeling/pytorch/bert_base/inference/cpu/README.md) | [BERT Base SQuAD1.1](https://huggingface.co/csarron/bert-base-uncased-squad-v1) |
| [BERT large](https://arxiv.org/pdf/1810.04805.pdf) | PyTorch | Inference | [FP32 Int8 BFloat16**](/quickstart/language_modeling/pytorch/bert_large/inference/cpu/README.md) | BERT Large SQuAD1.1 |
2 changes: 1 addition & 1 deletion benchmarks/README.md
@@ -59,7 +59,7 @@ For information on running more advanced use cases using the workload containers

| Model | Framework | Mode | Model Documentation | Benchmark/Test Dataset |
| -------------------------------------------- | ---------- | ----------| ------------------- | ---------------------- |
-| [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Inference | [Int8](/benchmarks/language_modeling/tensorflow/bert_large/inference/int8/README.md) [FP32](/benchmarks/language_modeling/tensorflow/bert_large/inference/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/inference/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#inference) |
+| [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Inference | [FP32](/benchmarks/language_modeling/tensorflow/bert_large/inference/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/inference/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#inference) |
| [BERT](https://arxiv.org/pdf/1810.04805.pdf) | TensorFlow | Training | [FP32](/benchmarks/language_modeling/tensorflow/bert_large/training/fp32/README.md) [BFloat16**](/benchmarks/language_modeling/tensorflow/bert_large/training/bfloat16/README.md) | [SQuAD](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#fine-tuning-with-bert-using-squad-data) and [MRPC](https://github.com/IntelAI/models/tree/master/datasets/bert_data/README.md#classification-training-with-bert) |
| [BERT base](https://arxiv.org/pdf/1810.04805.pdf) | PyTorch | Inference | [FP32 BFloat16**](/quickstart/language_modeling/pytorch/bert_base/inference/cpu/README.md) | [BERT Base SQuAD1.1](https://huggingface.co/csarron/bert-base-uncased-squad-v1) |
| [BERT large](https://arxiv.org/pdf/1810.04805.pdf) | PyTorch | Inference | [FP32 Int8 BFloat16**](/quickstart/language_modeling/pytorch/bert_large/inference/cpu/README.md) | BERT Large SQuAD1.1 |
