diff --git a/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/LICENSE b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/LICENSE
new file mode 100644
index 0000000000000000000000000000000000000000..a0e03103591c1158a839681f3c404ee9118b182e
--- /dev/null
+++ b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/LICENSE
@@ -0,0 +1,29 @@
+BSD 3-Clause License
+
+Copyright (c) 2017,
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+* Redistributions of source code must retain the above copyright notice, this
+ list of conditions and the following disclaimer.
+
+* Redistributions in binary form must reproduce the above copyright notice,
+ this list of conditions and the following disclaimer in the documentation
+ and/or other materials provided with the distribution.
+
+* Neither the name of the copyright holder nor the names of its
+ contributors may be used to endorse or promote products derived from
+ this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
+AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
+FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
+DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
+SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
+CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
+OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
\ No newline at end of file
diff --git a/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/ReadMe.md b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/ReadMe.md
new file mode 100644
index 0000000000000000000000000000000000000000..06af276f7d9337f6518801cbf01cafa429e77154
--- /dev/null
+++ b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/ReadMe.md
@@ -0,0 +1,195 @@
+# Bert_Base_Uncased Model - Inference Guide
+
+
+- [Overview](#ZH-CN_TOPIC_0000001172161501)
+
+  - [Input and Output Data](#section540883920406)
+
+
+
+- [Inference Environment Setup](#ZH-CN_TOPIC_0000001126281702)
+
+- [Quick Start](#ZH-CN_TOPIC_0000001126281700)
+
+  - [Get the Source Code](#section4622531142816)
+  - [Prepare the Dataset](#section183221994411)
+  - [Model Inference](#section741711594517)
+
+- [Inference Performance & Accuracy](#ZH-CN_TOPIC_0000001172201573)
+
+  ******
+
+
+
+# Overview
+
+BERT (Bidirectional Encoder Representations from Transformers) is a Transformer-based pre-trained language model for natural language processing, released by Google in 2018. At the time it achieved outstanding results on many NLP tasks, and it has since served as a performance baseline for much of NLP research. This guide uses the BERT_base model.
+
+
+- Reference implementation:
+
+ ```
+ url = https://github.com/NVIDIA/DeepLearningExamples.git
+ commit_id = dd6b8ca2bb80e17b015c0f61e71c2a84733a5b32
+ code_path = DeepLearningExamples/PyTorch/LanguageModeling/BERT/
+ model_name = BERTBASE
+ ```
+
+
+
+## Input and Output Data
+
+- Input data
+
+  | Input | Data Type | Shape | Data Layout |
+  | :-------: | :----: | :-------------: | :-------: |
+  | input_ids | INT32 | batchsize × 512 | ND |
+  | segment_ids | INT32 | batchsize × 512 | ND |
+  | input_mask | INT32 | batchsize × 512 | ND |
+
+
+- Output data
+
+  | Output | Data Type | Shape | Data Layout |
+  | :-------: | :----: | :-------------: | :-------: |
+  | start_logits | FLOAT | batchsize × 512 | ND |
+  | end_logits | FLOAT | batchsize × 512 | ND |
+
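+The shapes above can be sanity-checked with a short, illustrative snippet (not part of the shipped scripts); dummy values are used and a batch size of 8 is assumed:
+
+```python
+import torch
+
+batch_size, seq_len = 8, 512  # batchsize × 512, as in the tables above
+
+input_ids = torch.zeros((batch_size, seq_len), dtype=torch.int32)    # token ids
+segment_ids = torch.zeros((batch_size, seq_len), dtype=torch.int32)  # sentence A/B ids
+input_mask = torch.ones((batch_size, seq_len), dtype=torch.int32)    # 1 = real token, 0 = padding
+
+print(input_ids.shape, segment_ids.shape, input_mask.shape)
+```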
+
+# Inference Environment Setup
+
+- The following plugins and drivers are required for this model.
+
+  | Dependency                                                    | Version | Setup Guide                                                  |
+  | ------------------------------------------------------------ | ------- | ------------------------------------------------------------ |
+  | Firmware and drivers | 23.0.RC1 | [PyTorch framework inference environment setup](https://www.hiascend.com/document/detail/zh/ModelZoo/pytorchframework/pies) |
+  | CANN | 7.0.RC1.alpha003 | - |
+  | Python | 3.9.0 | - |
+  | PyTorch | 2.0.1 | - |
+  | Note: for the Atlas 300I Duo inference card, select the actual firmware and driver version according to the CANN version. | | |
+
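+An optional, quick check (illustration only) that the Python-side packages are importable:
+
+```python
+# Optional environment check; torch_aie is the Torch-AIE plugin installed with the Ascend toolkit.
+import torch
+
+print("torch:", torch.__version__)  # 2.0.1 is expected per the table above
+
+import torch_aie  # raises ImportError if the Ascend plugin is not installed
+print("torch_aie imported successfully")
+```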
+
+# Quick Start
+
+## Get the Source Code
+
+
+1. Clone this repository.
+
+ ```bash
+ git clone https://gitee.com/ascend/ModelZoo-PyTorch.git
+ cd ./ModelZoo-PyTorch/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/
+ ```
+
+   File description
+   ```
+   Bert_Base_Uncased_for_Pytorch
+   ├── bert_config.json
+       // Network configuration for the bert_base model
+   ├── aie_compile.py
+       // Traces and compiles the model
+   ├── run_aie_eval.py
+       // Evaluates model accuracy and performance on SQuAD v1.1
+   ├── evaluate_data.py
+       // Computes the F1 score of the model predictions
+   ├── ReadMe.md
+       // This document
+ ```
+
+2. Install dependencies.
+
+ ```bash
+ pip3 install -r requirements.txt
+ ```
+
+
+3. Get the BERT source code.
+
+ ```bash
+ git clone https://github.com/NVIDIA/DeepLearningExamples.git
+ cd ./DeepLearningExamples
+ git reset --hard dd6b8ca2bb80e17b015c0f61e71c2a84733a5b32
+ cd ..
+ ```
+
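+   Both `aie_compile.py` and `run_aie_eval.py` import `modeling` and `tokenization` from this clone by appending it to `sys.path`. The optional snippet below just confirms the clone sits where the scripts expect it:
+
+   ```python
+   # Optional check that the NVIDIA BERT sources are importable the same way the scripts do it.
+   import sys
+
+   sys.path.append("./DeepLearningExamples/PyTorch/LanguageModeling/BERT/")
+   import modeling      # provides BertConfig / BertForQuestionAnswering
+   import tokenization  # provides the BertTokenizer used by run_aie_eval.py
+
+   print("NVIDIA BERT sources found.")
+   ```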
+
+## Prepare the Dataset
+
+1. Obtain the original dataset (please follow the dataset provider's terms of use).
+
+   This model supports the SQuAD QA validation set.
+
+   Taking SQuAD v1.1 as an example, run the following commands to download the [squad v1.1](https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json) dataset.
+
+ ```bash
+ mkdir squadv1.1 && cd squadv1.1
+ wget https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json -O ./dev-v1.1.json --no-check-certificate
+ cd ..
+ ```
+
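+   After the download, an optional sanity check (illustration only) confirms the file parses and counts the questions in the dev split:
+
+   ```python
+   # Optional: verify dev-v1.1.json parses and count its questions.
+   import json
+
+   with open("./squadv1.1/dev-v1.1.json", "r", encoding="utf-8") as f:
+       squad = json.load(f)
+
+   num_questions = sum(
+       len(paragraph["qas"])
+       for article in squad["data"]
+       for paragraph in article["paragraphs"]
+   )
+   print("SQuAD version:", squad["version"])           # expected: 1.1
+   print("questions in the dev set:", num_questions)   # roughly 10,570 for SQuAD v1.1
+   ```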
+
+
+## Model Inference
+
+1. Compile the model.
+
+   Compile the PyTorch checkpoint with torch_aie so that the model can run inference on the Ascend NPU.
+
+   1. Get the weight file.
+
+      Download the [bert_base_qa.pt](https://catalog.ngc.nvidia.com/orgs/nvidia/models/bert_pyt_ckpt_base_qa_squad11_amp/files) checkpoint from NVIDIA NGC.
+
+ ```bash
+ wget 'https://api.ngc.nvidia.com/v2/models/nvidia/bert_pyt_ckpt_base_qa_squad11_amp/versions/19.09.0/files/bert_base_qa.pt' -O ./bert_base_qa.pt --no-check-certificate
+ ```
+
+   2. Run the compile script (batch_size=8 as an example).
+
+ ```bash
+      # Compile the gelu operator in high-performance mode
+      export ASCENDIE_FASTER_MODE=1
+      # Run the compilation
+ python3.9 aie_compile.py --batch_size=8 --compare_cpu
+ ```
+
+      Parameter description:
+      - --batch_size: batch size.
+      - --compare_cpu: compare the outputs of the compiled model with those of the original model to verify accuracy.
+
+
+   On success, a ```bert_base_batch_8.pt``` model file is generated in the current directory (```_8``` indicates batch_size 8).
+
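+   The compiled artifact is a regular TorchScript module. Below is a minimal sketch (illustration only; the file name assumes batch_size=8) of loading it and running one dummy batch on the NPU, which mirrors what the ```--compare_cpu``` path of ```aie_compile.py``` does:
+
+   ```python
+   # Minimal sketch: load the compiled model and run one dummy batch on the NPU.
+   import torch
+   import torch_aie
+
+   torch_aie.set_device(0)
+   model = torch.jit.load("./bert_base_batch_8.pt")
+   model.eval()
+
+   shape = (8, 512)  # (batch_size, max_seq_length)
+   input_ids = torch.zeros(shape, dtype=torch.int32).to("npu:0")
+   segment_ids = torch.zeros(shape, dtype=torch.int32).to("npu:0")
+   input_mask = torch.ones(shape, dtype=torch.int32).to("npu:0")
+
+   with torch.no_grad():
+       start_logits, end_logits = model(input_ids, segment_ids, input_mask)
+   print(start_logits.shape, end_logits.shape)
+   ```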
+
+2. Run inference and evaluation.
+
+   1. Run the inference script (batch size 8 as an example).
+
+ ```bash
+      # Run inference on SQuAD v1.1
+ python3.9 run_aie_eval.py --aie_model="./bert_base_batch_8.pt" --predict_file="./squadv1.1/dev-v1.1.json" --vocab_file="./DeepLearningExamples/PyTorch/LanguageModeling/BERT/vocab/vocab" --predict_batch_size=8 --do_lower_case
+ ```
+
+      - Parameter description:
+
+         - --aie_model: the compiled model.
+         - --predict_file: the dataset used for prediction.
+         - --vocab_file: the vocabulary mapping file.
+         - --predict_batch_size: the batch size for inference; it must match the batch size the model was compiled with.
+         - --do_lower_case: whether to lower-case the input text.
+
+   When the inference script runs, the model predictions are written as .json files under ./output_predictions/, and the script launches a subprocess that runs evaluate_data.py to compute the F1 score and print it to the screen. The inference throughput is printed as well.
+
+
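+   To recompute the score later from an existing prediction file without rerunning inference, ```evaluate_data.py``` can also be invoked on its own. The sketch below simply mirrors the subprocess call made inside ```run_aie_eval.py``` (paths are examples):
+
+   ```python
+   # Recompute exact match / F1 from an existing predictions.json.
+   import subprocess
+   import sys
+
+   out = subprocess.check_output([
+       sys.executable, "./evaluate_data.py",
+       "./squadv1.1/dev-v1.1.json",
+       "./output_predictions/predictions.json",
+   ])
+   print(out.decode())  # e.g. {"exact_match": ..., "f1": ...}
+   ```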
+
+
+# Inference Performance & Accuracy
+
+With Torch AIE compiled inference, the following performance data are provided for reference.
+
+| SoC | Batch Size | Dataset | Accuracy | Throughput |
+| :-------: | :--------------: | :--------: | :--------: | :-------------: |
+| 310P3 | 1 | SQuAD v1.1 | 88.769% | 158 fps |
+| 310P3 | 4 | SQuAD v1.1 | - | 185 fps |
+| 310P3 | 8 | SQuAD v1.1 | - | 164 fps |
+| 310P3 | 16 | SQuAD v1.1 | - | 163 fps |
+| 310P3 | 32 | SQuAD v1.1 | - | 167 fps |
+| 310P3 | 64 | SQuAD v1.1 | - | 166 fps |
diff --git a/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/aie_compile.py b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/aie_compile.py
new file mode 100644
index 0000000000000000000000000000000000000000..d14c21dc0d9c48dbafd47b8a7115eca608a5f8d4
--- /dev/null
+++ b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/aie_compile.py
@@ -0,0 +1,146 @@
+# Copyright(C) 2023. Huawei Technologies Co.,Ltd. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import sys
+import os
+import argparse
+import torch
+import torch_aie
+from torch_aie import _enums
+
+sys.path.append("./DeepLearningExamples/PyTorch/LanguageModeling/BERT/")
+import modeling
+
+
+COSINE_THRESHOLD = 0.99
+
+
+def cosine_similarity(gt_tensor, pred_tensor):
+ gt_tensor = gt_tensor.flatten().to(torch.float32)
+ pred_tensor = pred_tensor.flatten().to(torch.float32)
+ if torch.sum(gt_tensor) == 0.0 or torch.sum(pred_tensor) == 0.0:
+ if torch.allclose(gt_tensor, pred_tensor, atol=1e-4, rtol=1e-4, equal_nan=True):
+ return 1.0
+ res = torch.nn.functional.cosine_similarity(gt_tensor, pred_tensor, dim=0, eps=1e-6)
+ res = res.cpu().detach().item()
+ return res
+
+
+def aie_compile(torch_model, args):
+    input_shape = (args.batch_size, args.max_seq_length)
+    input_ids = torch.randint(high=1, size=input_shape, dtype=torch.int32)
+    segment_ids = torch.randint(high=1, size=input_shape, dtype=torch.int32)
+    input_mask = torch.randint(high=5, size=input_shape, dtype=torch.int32)
+    input_data = [input_ids, segment_ids, input_mask]
+
+ # trace model
+ print("trace start. ")
+ traced_model = torch.jit.trace(torch_model, input_data)
+ print("trace done. ")
+ # print("traced model is ", traced_model.graph)
+
+ traced_model.eval()
+ print("torch_aie compile start !")
+ torch_aie.set_device(0)
+    compile_inputs = [torch_aie.Input(shape=input_shape, dtype=torch.int32, format=torch_aie.TensorFormat.ND),
+                      torch_aie.Input(shape=input_shape, dtype=torch.int32, format=torch_aie.TensorFormat.ND),
+                      torch_aie.Input(shape=input_shape, dtype=torch.int32, format=torch_aie.TensorFormat.ND)]
+    compiled_model = torch_aie.compile(
+        traced_model,
+        inputs=compile_inputs,
+        precision_policy=_enums.PrecisionPolicy.FP16,
+        soc_version="Ascend310P3",
+        optimization_level=args.op_level
+    )
+ print("torch_aie compile done !")
+ print("compiled model is ", compiled_model.graph)
+ compiled_model.save(args.pt_dir)
+ print("torch aie compiled model saved. ")
+
+ if args.compare_cpu:
+        print("Start checking the precision of the NPU model.")
+ com_res = True
+ compiled_model = torch.jit.load(args.pt_dir)
+ jit_res = traced_model(input_ids, segment_ids, input_mask)
+ print("jit infer done !")
+ input_ids_npu = input_ids.to("npu:0")
+ segment_ids_npu = segment_ids.to("npu:0")
+ input_mask_npu = input_mask.to("npu:0")
+ aie_res = compiled_model(input_ids_npu, segment_ids_npu, input_mask_npu)
+ aie_res_cpu = aie_res.to("cpu")
+ print("aie infer done !")
+
+ for j, a in zip(jit_res, aie_res_cpu):
+ res = cosine_similarity(j, a)
+ print(res)
+ if res < COSINE_THRESHOLD:
+ com_res = False
+
+        if com_res:
+            print("Compare success! The NPU model produces the same outputs as the CPU model.")
+        else:
+            print("Compare failed! Outputs of the NPU model do not match the CPU model.")
+ return
+
+
+if __name__ == '__main__':
+ parser = argparse.ArgumentParser()
+ ## Required parameters
+ parser.add_argument("--torch_pt",
+ default="./bert_base_qa.pt",
+ type=str,
+ help="The original torch pt file from pretraining")
+ parser.add_argument("--save_dir",
+ default="./",
+ type=str,
+ help="The path of the directory that stores the compiled model")
+ parser.add_argument("--config_file",
+ default="./bert_config.json",
+ type=str,
+ help="The BERT model config")
+ parser.add_argument('--batch_size',
+ default=8,
+ type=int,
+ help="batch size")
+ parser.add_argument('--max_seq_length',
+ default=512,
+ type=int,
+ help="position embedding length")
+ parser.add_argument('--op_level',
+ default=0,
+ type=int,
+ help="optimization level in the compile spec ")
+    parser.add_argument("--compare_cpu", action='store_true',
+                        help="Whether to check the precision of the npu model.")
+ args = parser.parse_args()
+
+ output_root = args.save_dir
+ if not os.path.exists(output_root):
+ os.makedirs(output_root)
+
+ args.pt_dir = os.path.join(output_root, "bert_base_batch_{}.pt".format(args.batch_size))
+ config = modeling.BertConfig.from_json_file(args.config_file)
+ if config.vocab_size % 8 != 0:
+ config.vocab_size += 8 - (config.vocab_size % 8)
+ torch_model = modeling.BertForQuestionAnswering(config)
+ torch_model.load_state_dict(torch.load(args.torch_pt, map_location='cpu')["model"])
+ torch_model.to("cpu").eval()
+
+ aie_compile(torch_model, args)
+ torch_aie.finalize()
+
+
+
diff --git a/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/bert_config.json b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/bert_config.json
new file mode 100644
index 0000000000000000000000000000000000000000..fca794a5f07ff8f963fe8b61e3694b0fb7f955df
--- /dev/null
+++ b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/bert_config.json
@@ -0,0 +1,13 @@
+{
+ "attention_probs_dropout_prob": 0.1,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 768,
+ "initializer_range": 0.02,
+ "intermediate_size": 3072,
+ "max_position_embeddings": 512,
+ "num_attention_heads": 12,
+ "num_hidden_layers": 12,
+ "type_vocab_size": 2,
+ "vocab_size": 30522
+}
diff --git a/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/evaluate_data.py b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/evaluate_data.py
new file mode 100644
index 0000000000000000000000000000000000000000..100db9ce6793f35cf8bdd1819477859b8e7657b8
--- /dev/null
+++ b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/evaluate_data.py
@@ -0,0 +1,108 @@
+# Copyright 2022 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+""" Official evaluation script for v1.1 of the SQuAD dataset. """
+from __future__ import print_function
+from collections import Counter
+import string
+import re
+import argparse
+import json
+import sys
+
+
+def normalize_answer(s):
+ """Lower text and remove punctuation, articles and extra whitespace."""
+ def remove_articles(text):
+ return re.sub(r'\b(a|an|the)\b', ' ', text)
+
+ def white_space_fix(text):
+ return ' '.join(text.split())
+
+ def remove_punc(text):
+ exclude = set(string.punctuation)
+ return ''.join(ch for ch in text if ch not in exclude)
+
+ def lower(text):
+ return text.lower()
+
+ return white_space_fix(remove_articles(remove_punc(lower(s))))
+
+
+def f1_score(prediction, ground_truth):
+ prediction_tokens = normalize_answer(prediction).split()
+ ground_truth_tokens = normalize_answer(ground_truth).split()
+ common = Counter(prediction_tokens) & Counter(ground_truth_tokens)
+ num_same = sum(common.values())
+ if num_same == 0:
+ return 0
+ precision = 1.0 * num_same / len(prediction_tokens)
+ recall = 1.0 * num_same / len(ground_truth_tokens)
+ f1 = (2 * precision * recall) / (precision + recall)
+ return f1
+
+
+def exact_match_score(prediction, ground_truth):
+ return (normalize_answer(prediction) == normalize_answer(ground_truth))
+
+
+def metric_max_over_ground_truths(metric_fn, prediction, ground_truths):
+ scores_for_ground_truths = []
+ for ground_truth in ground_truths:
+ score = metric_fn(prediction, ground_truth)
+ scores_for_ground_truths.append(score)
+ return max(scores_for_ground_truths)
+
+
+def evaluate(dataset, predictions):
+ f1 = exact_match = total = 0
+ for article in dataset:
+ for paragraph in article['paragraphs']:
+ for qa in paragraph['qas']:
+ total += 1
+ if qa['id'] not in predictions:
+ message = 'Unanswered question ' + qa['id'] + \
+ ' will receive score 0.'
+ print(message, file=sys.stderr)
+ continue
+ ground_truths = list(map(lambda x: x['text'], qa['answers']))
+ prediction = predictions[qa['id']]
+ exact_match += metric_max_over_ground_truths(
+ exact_match_score, prediction, ground_truths)
+ f1 += metric_max_over_ground_truths(
+ f1_score, prediction, ground_truths)
+
+ exact_match = 100.0 * exact_match / total
+ f1 = 100.0 * f1 / total
+
+ return {'exact_match': exact_match, 'f1': f1}
+
+
+if __name__ == '__main__':
+ expected_version = '1.1'
+ parser = argparse.ArgumentParser(
+ description='Evaluation for SQuAD ' + expected_version)
+ parser.add_argument('dataset_file', help='Dataset file')
+ parser.add_argument('prediction_file', help='Prediction File')
+ args = parser.parse_args()
+ with open(args.dataset_file) as dataset_file:
+ dataset_json = json.load(dataset_file)
+ if (dataset_json['version'] != expected_version):
+ print('Evaluation expects v-' + expected_version +
+ ', but got dataset with v-' + dataset_json['version'],
+ file=sys.stderr)
+ dataset = dataset_json['data']
+ with open(args.prediction_file) as prediction_file:
+ predictions = json.load(prediction_file)
+ print(json.dumps(evaluate(dataset, predictions)))
diff --git a/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/requirements.txt b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..fc24f23fc011ba323a8b1c05929009731a9a17b2
--- /dev/null
+++ b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/requirements.txt
@@ -0,0 +1,10 @@
+torch==2.0.1
+numpy
+boto3
+tqdm
+requests
+decorator
+attrs
+psutil
+absl-py
+tensorflow>=1.10.0
\ No newline at end of file
diff --git a/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/run_aie_eval.py b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/run_aie_eval.py
new file mode 100755
index 0000000000000000000000000000000000000000..fd01ed6d46cb75258e27069ee34966d1b0e0b5aa
--- /dev/null
+++ b/AscendIE/TorchAIE/built-in/nlp/Bert_Base_Uncased_for_Pytorch/run_aie_eval.py
@@ -0,0 +1,815 @@
+# coding=utf-8
+# Copyright 2023 Huawei Technologies Co., Ltd
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ============================================================================
+# Copyright (c) 2019 NVIDIA CORPORATION. All rights reserved.
+# Copyright 2018 The Google AI Language Team Authors and The HuggingFace Inc. team.
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""Run BERT compiled with torch_aie on SQuAD."""
+
+from __future__ import absolute_import, division, print_function
+
+import argparse
+import collections
+import json
+import logging
+import math
+import os
+import random
+import sys
+import subprocess
+from io import open
+
+import torch
+import torch_aie
+from torch.utils.data import (DataLoader, RandomSampler, SequentialSampler,
+ TensorDataset)
+from tqdm import tqdm, trange
+
+sys.path.append("./DeepLearningExamples/PyTorch/LanguageModeling/BERT/")
+from tokenization import (BasicTokenizer, BertTokenizer, whitespace_tokenize)
+import time
+
+torch._C._jit_set_profiling_mode(False)
+torch._C._jit_set_profiling_executor(False)
+
+logging.basicConfig(format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
+ datefmt='%m/%d/%Y %H:%M:%S',
+ level=logging.INFO)
+logger = logging.getLogger(__name__)
+
+
+class SquadExample(object):
+ """
+ A single training/test example for the Squad dataset.
+ For examples without an answer, the start and end position are -1.
+ """
+
+ def __init__(self,
+ qas_id,
+ question_text,
+ doc_tokens,
+ orig_answer_text=None,
+ start_position=None,
+ end_position=None,
+ is_impossible=None):
+ self.qas_id = qas_id
+ self.question_text = question_text
+ self.doc_tokens = doc_tokens
+ self.orig_answer_text = orig_answer_text
+ self.start_position = start_position
+ self.end_position = end_position
+ self.is_impossible = is_impossible
+
+ def __str__(self):
+ return self.__repr__()
+
+ def __repr__(self):
+ s = ""
+ s += "qas_id: %s" % (self.qas_id)
+ s += ", question_text: %s" % (
+ self.question_text)
+ s += ", doc_tokens: [%s]" % (" ".join(self.doc_tokens))
+ if self.start_position:
+ s += ", start_position: %d" % (self.start_position)
+ if self.end_position:
+ s += ", end_position: %d" % (self.end_position)
+ if self.is_impossible:
+ s += ", is_impossible: %r" % (self.is_impossible)
+ return s
+
+
+class InputFeatures(object):
+ """A single set of features of data."""
+
+ def __init__(self,
+ unique_id,
+ example_index,
+ doc_span_index,
+ tokens,
+ token_to_orig_map,
+ token_is_max_context,
+ input_ids,
+ input_mask,
+ segment_ids,
+ start_position=None,
+ end_position=None,
+ is_impossible=None):
+ self.unique_id = unique_id
+ self.example_index = example_index
+ self.doc_span_index = doc_span_index
+ self.tokens = tokens
+ self.token_to_orig_map = token_to_orig_map
+ self.token_is_max_context = token_is_max_context
+ self.input_ids = input_ids
+ self.input_mask = input_mask
+ self.segment_ids = segment_ids
+ self.start_position = start_position
+ self.end_position = end_position
+ self.is_impossible = is_impossible
+
+
+def read_squad_examples(input_file, is_training):
+ """Read a SQuAD json file into a list of SquadExample."""
+ with open(input_file, "r", encoding='utf-8') as reader:
+ input_data = json.load(reader)["data"]
+
+ def is_whitespace(c):
+ if c == " " or c == "\t" or c == "\r" or c == "\n" or ord(c) == 0x202F:
+ return True
+ return False
+
+ examples = []
+ for entry in input_data:
+ for paragraph in entry["paragraphs"]:
+ paragraph_text = paragraph["context"]
+ doc_tokens = []
+ char_to_word_offset = []
+ prev_is_whitespace = True
+ for c in paragraph_text:
+ if is_whitespace(c):
+ prev_is_whitespace = True
+ else:
+ if prev_is_whitespace:
+ doc_tokens.append(c)
+ else:
+ doc_tokens[-1] += c
+ prev_is_whitespace = False
+ char_to_word_offset.append(len(doc_tokens) - 1)
+
+ for qa in paragraph["qas"]:
+ qas_id = qa["id"]
+ question_text = qa["question"]
+ start_position = None
+ end_position = None
+ orig_answer_text = None
+ is_impossible = False
+ example = SquadExample(
+ qas_id=qas_id,
+ question_text=question_text,
+ doc_tokens=doc_tokens,
+ orig_answer_text=orig_answer_text,
+ start_position=start_position,
+ end_position=end_position,
+ is_impossible=is_impossible)
+ examples.append(example)
+ return examples
+
+
+def convert_examples_to_features(examples, tokenizer, max_seq_length,
+ doc_stride, max_query_length, is_training):
+ """Loads a data file into a list of `InputBatch`s."""
+
+ unique_id = 1000000000
+
+ features = []
+ for (example_index, example) in enumerate(examples):
+ query_tokens = tokenizer.tokenize(example.question_text)
+
+ if len(query_tokens) > max_query_length:
+ query_tokens = query_tokens[0:max_query_length]
+
+ tok_to_orig_index = []
+ orig_to_tok_index = []
+ all_doc_tokens = []
+ for (i, token) in enumerate(example.doc_tokens):
+ orig_to_tok_index.append(len(all_doc_tokens))
+ sub_tokens = tokenizer.tokenize(token)
+ for sub_token in sub_tokens:
+ tok_to_orig_index.append(i)
+ all_doc_tokens.append(sub_token)
+
+ tok_start_position = None
+ tok_end_position = None
+ if is_training and example.is_impossible:
+ tok_start_position = -1
+ tok_end_position = -1
+ if is_training and not example.is_impossible:
+ tok_start_position = orig_to_tok_index[example.start_position]
+ if example.end_position < len(example.doc_tokens) - 1:
+ tok_end_position = orig_to_tok_index[example.end_position + 1] - 1
+ else:
+ tok_end_position = len(all_doc_tokens) - 1
+ (tok_start_position, tok_end_position) = _improve_answer_span(
+ all_doc_tokens, tok_start_position, tok_end_position, tokenizer,
+ example.orig_answer_text)
+
+ # The -3 accounts for [CLS], [SEP] and [SEP]
+ max_tokens_for_doc = max_seq_length - len(query_tokens) - 3
+
+ # We can have documents that are longer than the maximum sequence length.
+ # To deal with this we do a sliding window approach, where we take chunks
+ # of the up to our max length with a stride of `doc_stride`.
+ _DocSpan = collections.namedtuple( # pylint: disable=invalid-name
+ "DocSpan", ["start", "length"])
+ doc_spans = []
+ start_offset = 0
+ while start_offset < len(all_doc_tokens):
+ length = len(all_doc_tokens) - start_offset
+ if length > max_tokens_for_doc:
+ length = max_tokens_for_doc
+ doc_spans.append(_DocSpan(start=start_offset, length=length))
+ if start_offset + length == len(all_doc_tokens):
+ break
+ start_offset += min(length, doc_stride)
+
+ for (doc_span_index, doc_span) in enumerate(doc_spans):
+ tokens = []
+ token_to_orig_map = {}
+ token_is_max_context = {}
+ segment_ids = []
+ tokens.append("[CLS]")
+ segment_ids.append(0)
+ for token in query_tokens:
+ tokens.append(token)
+ segment_ids.append(0)
+ tokens.append("[SEP]")
+ segment_ids.append(0)
+
+ for i in range(doc_span.length):
+ split_token_index = doc_span.start + i
+ token_to_orig_map[len(tokens)] = tok_to_orig_index[split_token_index]
+
+ is_max_context = _check_is_max_context(doc_spans, doc_span_index,
+ split_token_index)
+ token_is_max_context[len(tokens)] = is_max_context
+ tokens.append(all_doc_tokens[split_token_index])
+ segment_ids.append(1)
+ tokens.append("[SEP]")
+ segment_ids.append(1)
+
+ input_ids = tokenizer.convert_tokens_to_ids(tokens)
+
+ # The mask has 1 for real tokens and 0 for padding tokens. Only real
+ # tokens are attended to.
+ input_mask = [1] * len(input_ids)
+
+ # Zero-pad up to the sequence length.
+ while len(input_ids) < max_seq_length:
+ input_ids.append(0)
+ input_mask.append(0)
+ segment_ids.append(0)
+
+ assert len(input_ids) == max_seq_length
+ assert len(input_mask) == max_seq_length
+ assert len(segment_ids) == max_seq_length
+
+ start_position = None
+ end_position = None
+ if is_training and not example.is_impossible:
+ # For training, if our document chunk does not contain an annotation
+ # we throw it out, since there is nothing to predict.
+ doc_start = doc_span.start
+ doc_end = doc_span.start + doc_span.length - 1
+ out_of_span = False
+ if not (tok_start_position >= doc_start and
+ tok_end_position <= doc_end):
+ out_of_span = True
+ if out_of_span:
+ start_position = 0
+ end_position = 0
+ else:
+ doc_offset = len(query_tokens) + 2
+ start_position = tok_start_position - doc_start + doc_offset
+ end_position = tok_end_position - doc_start + doc_offset
+ if is_training and example.is_impossible:
+ start_position = 0
+ end_position = 0
+
+ features.append(
+ InputFeatures(
+ unique_id=unique_id,
+ example_index=example_index,
+ doc_span_index=doc_span_index,
+ tokens=tokens,
+ token_to_orig_map=token_to_orig_map,
+ token_is_max_context=token_is_max_context,
+ input_ids=input_ids,
+ input_mask=input_mask,
+ segment_ids=segment_ids,
+ start_position=start_position,
+ end_position=end_position,
+ is_impossible=example.is_impossible))
+ unique_id += 1
+
+ return features
+
+
+def _improve_answer_span(doc_tokens, input_start, input_end, tokenizer,
+ orig_answer_text):
+ """Returns tokenized answer spans that better match the annotated answer."""
+
+ # The SQuAD annotations are character based. We first project them to
+ # whitespace-tokenized words. But then after WordPiece tokenization, we can
+ # often find a "better match". For example:
+ #
+ # Question: What year was John Smith born?
+ # Context: The leader was John Smith (1895-1943).
+ # Answer: 1895
+ #
+ # The original whitespace-tokenized answer will be "(1895-1943).". However
+ # after tokenization, our tokens will be "( 1895 - 1943 ) .". So we can match
+ # the exact answer, 1895.
+ #
+ # However, this is not always possible. Consider the following:
+ #
+    #   Question: What country is the top exporter of electronics?
+    #   Context: The Japanese electronics industry is the largest in the world.
+ # Answer: Japan
+ #
+ # In this case, the annotator chose "Japan" as a character sub-span of
+ # the word "Japanese". Since our WordPiece tokenizer does not split
+ # "Japanese", we just use "Japanese" as the annotation. This is fairly rare
+ # in SQuAD, but does happen.
+ tok_answer_text = " ".join(tokenizer.tokenize(orig_answer_text))
+
+ for new_start in range(input_start, input_end + 1):
+ for new_end in range(input_end, new_start - 1, -1):
+ text_span = " ".join(doc_tokens[new_start:(new_end + 1)])
+ if text_span == tok_answer_text:
+ return (new_start, new_end)
+
+ return (input_start, input_end)
+
+
+def _check_is_max_context(doc_spans, cur_span_index, position):
+ """Check if this is the 'max context' doc span for the token."""
+
+ # Because of the sliding window approach taken to scoring documents, a single
+ # token can appear in multiple documents. E.g.
+ # Doc: the man went to the store and bought a gallon of milk
+ # Span A: the man went to the
+ # Span B: to the store and bought
+ # Span C: and bought a gallon of
+ # ...
+ #
+ # Now the word 'bought' will have two scores from spans B and C. We only
+ # want to consider the score with "maximum context", which we define as
+ # the *minimum* of its left and right context (the *sum* of left and
+ # right context will always be the same, of course).
+ #
+ # In the example the maximum context for 'bought' would be span C since
+ # it has 1 left context and 3 right context, while span B has 4 left context
+ # and 0 right context.
+ best_score = None
+ best_span_index = None
+ for (span_index, doc_span) in enumerate(doc_spans):
+ end = doc_span.start + doc_span.length - 1
+ if position < doc_span.start:
+ continue
+ if position > end:
+ continue
+ num_left_context = position - doc_span.start
+ num_right_context = end - position
+ score = min(num_left_context, num_right_context) + 0.01 * doc_span.length
+ if best_score is None or score > best_score:
+ best_score = score
+ best_span_index = span_index
+
+ return cur_span_index == best_span_index
+
+
+RawResult = collections.namedtuple("RawResult",
+ ["unique_id", "start_logits", "end_logits"])
+
+
+def get_answers(examples, features, results, args):
+ predictions = collections.defaultdict(list) #it is possible that one example corresponds to multiple features
+ Prediction = collections.namedtuple('Prediction', ['text', 'start_logit', 'end_logit'])
+
+ if args.version_2_with_negative:
+ null_vals = collections.defaultdict(lambda: (float("inf"),0,0))
+ for ex, feat, result in match_results(examples, features, results):
+ start_indices = _get_best_indices(result.start_logits, args.n_best_size)
+ end_indices = _get_best_indices(result.end_logits, args.n_best_size)
+ prelim_predictions = get_valid_prelim_predictions(start_indices, end_indices, feat, result, args)
+ prelim_predictions = sorted(
+ prelim_predictions,
+ key=lambda x: (x.start_logit + x.end_logit),
+ reverse=True)
+ if args.version_2_with_negative:
+ score = result.start_logits[0] + result.end_logits[0]
+ if score < null_vals[ex.qas_id][0]:
+ null_vals[ex.qas_id] = (score, result.start_logits[0], result.end_logits[0])
+
+ curr_predictions = []
+ seen_predictions = []
+ for pred in prelim_predictions:
+ if len(curr_predictions) == args.n_best_size:
+ break
+ if pred.start_index > 0: # this is a non-null prediction TODO: this probably is irrelevant
+ final_text = get_answer_text(ex, feat, pred, args)
+ if final_text in seen_predictions:
+ continue
+ else:
+ final_text = ""
+
+ seen_predictions.append(final_text)
+ curr_predictions.append(Prediction(final_text, pred.start_logit, pred.end_logit))
+ predictions[ex.qas_id] += curr_predictions
+
+    # Add empty prediction
+    if args.version_2_with_negative:
+        for qas_id in predictions.keys():
+            predictions[qas_id].append(Prediction('',
+                                                  null_vals[qas_id][1],
+                                                  null_vals[qas_id][2]))
+
+
+ nbest_answers = collections.defaultdict(list)
+ answers = {}
+ for qas_id, preds in predictions.items():
+ nbest = sorted(
+ preds,
+ key=lambda x: (x.start_logit + x.end_logit),
+ reverse=True)[:args.n_best_size]
+
+ # In very rare edge cases we could only have single null prediction.
+ # So we just create a nonce prediction in this case to avoid failure.
+ if not nbest:
+ nbest.append(Prediction(text="empty", start_logit=0.0, end_logit=0.0))
+
+ total_scores = []
+ best_non_null_entry = None
+ for entry in nbest:
+ total_scores.append(entry.start_logit + entry.end_logit)
+ if not best_non_null_entry and entry.text:
+ best_non_null_entry = entry
+ probs = _compute_softmax(total_scores)
+ for (i, entry) in enumerate(nbest):
+ output = collections.OrderedDict()
+ output["text"] = entry.text
+ output["probability"] = probs[i]
+ output["start_logit"] = entry.start_logit
+ output["end_logit"] = entry.end_logit
+ nbest_answers[qas_id].append(output)
+ if args.version_2_with_negative:
+ score_diff = null_vals[qas_id][0] - best_non_null_entry.start_logit - best_non_null_entry.end_logit
+ if score_diff > args.null_score_diff_threshold:
+ answers[qas_id] = ""
+ else:
+ answers[qas_id] = best_non_null_entry.text
+ else:
+ answers[qas_id] = nbest_answers[qas_id][0]['text']
+
+ return answers, nbest_answers
+
+def get_answer_text(example, feature, pred, args):
+ tok_tokens = feature.tokens[pred.start_index:(pred.end_index + 1)]
+ orig_doc_start = feature.token_to_orig_map[pred.start_index]
+ orig_doc_end = feature.token_to_orig_map[pred.end_index]
+ orig_tokens = example.doc_tokens[orig_doc_start:(orig_doc_end + 1)]
+ tok_text = " ".join(tok_tokens)
+
+ # De-tokenize WordPieces that have been split off.
+ tok_text = tok_text.replace(" ##", "")
+ tok_text = tok_text.replace("##", "")
+
+ # Clean whitespace
+ tok_text = tok_text.strip()
+ tok_text = " ".join(tok_text.split())
+ orig_text = " ".join(orig_tokens)
+
+ final_text = get_final_text(tok_text, orig_text, args.do_lower_case, args.verbose_logging)
+ return final_text
+
+def get_valid_prelim_predictions(start_indices, end_indices, feature, result, args):
+
+ _PrelimPrediction = collections.namedtuple(
+ "PrelimPrediction",
+ ["start_index", "end_index", "start_logit", "end_logit"])
+ prelim_predictions = []
+ for start_index in start_indices:
+ for end_index in end_indices:
+ if start_index >= len(feature.tokens):
+ continue
+ if end_index >= len(feature.tokens):
+ continue
+ if start_index not in feature.token_to_orig_map:
+ continue
+ if end_index not in feature.token_to_orig_map:
+ continue
+ if not feature.token_is_max_context.get(start_index, False):
+ continue
+ if end_index < start_index:
+ continue
+ length = end_index - start_index + 1
+ if length > args.max_answer_length:
+ continue
+ prelim_predictions.append(
+ _PrelimPrediction(
+ start_index=start_index,
+ end_index=end_index,
+ start_logit=result.start_logits[start_index],
+ end_logit=result.end_logits[end_index]))
+ return prelim_predictions
+
+def match_results(examples, features, results):
+ unique_f_ids = set([f.unique_id for f in features])
+ unique_r_ids = set([r.unique_id for r in results])
+ matching_ids = unique_f_ids & unique_r_ids
+ features = [f for f in features if f.unique_id in matching_ids]
+ results = [r for r in results if r.unique_id in matching_ids]
+ features.sort(key=lambda x: x.unique_id)
+ results.sort(key=lambda x: x.unique_id)
+
+ for f, r in zip(features, results): #original code assumes strict ordering of examples. TODO: rewrite this
+ yield examples[f.example_index], f, r
+
+def get_final_text(pred_text, orig_text, do_lower_case, verbose_logging=False):
+ """Project the tokenized prediction back to the original text."""
+
+ # When we created the data, we kept track of the alignment between original
+ # (whitespace tokenized) tokens and our WordPiece tokenized tokens. So
+ # now `orig_text` contains the span of our original text corresponding to the
+ # span that we predicted.
+ #
+ # However, `orig_text` may contain extra characters that we don't want in
+ # our prediction.
+ #
+ # For example, let's say:
+ # pred_text = steve smith
+ # orig_text = Steve Smith's
+ #
+ # We don't want to return `orig_text` because it contains the extra "'s".
+ #
+ # We don't want to return `pred_text` because it's already been normalized
+ # (the SQuAD eval script also does punctuation stripping/lower casing but
+ # our tokenizer does additional normalization like stripping accent
+ # characters).
+ #
+ # What we really want to return is "Steve Smith".
+ #
+    # Therefore, we have to apply a semi-complicated alignment heuristic between
+    # `pred_text` and `orig_text` to get a character-to-character alignment. This
+ # can fail in certain cases in which case we just return `orig_text`.
+
+ def _strip_spaces(text):
+ ns_chars = []
+ ns_to_s_map = collections.OrderedDict()
+ for (i, c) in enumerate(text):
+ if c == " ":
+ continue
+ ns_to_s_map[len(ns_chars)] = i
+ ns_chars.append(c)
+ ns_text = "".join(ns_chars)
+ return (ns_text, ns_to_s_map)
+
+ # We first tokenize `orig_text`, strip whitespace from the result
+ # and `pred_text`, and check if they are the same length. If they are
+ # NOT the same length, the heuristic has failed. If they are the same
+ # length, we assume the characters are one-to-one aligned.
+
+ tokenizer = BasicTokenizer(do_lower_case=do_lower_case)
+
+ tok_text = " ".join(tokenizer.tokenize(orig_text))
+
+ start_position = tok_text.find(pred_text)
+ if start_position == -1:
+ if verbose_logging:
+ logger.info(
+ "Unable to find text: '%s' in '%s'" % (pred_text, orig_text))
+ return orig_text
+ end_position = start_position + len(pred_text) - 1
+
+ (orig_ns_text, orig_ns_to_s_map) = _strip_spaces(orig_text)
+ (tok_ns_text, tok_ns_to_s_map) = _strip_spaces(tok_text)
+
+ if len(orig_ns_text) != len(tok_ns_text):
+ if verbose_logging:
+ logger.info("Length not equal after stripping spaces: '%s' vs '%s'",
+ orig_ns_text, tok_ns_text)
+ return orig_text
+
+ # We then project the characters in `pred_text` back to `orig_text` using
+ # the character-to-character alignment.
+ tok_s_to_ns_map = {}
+ for (i, tok_index) in tok_ns_to_s_map.items():
+ tok_s_to_ns_map[tok_index] = i
+
+ orig_start_position = None
+ if start_position in tok_s_to_ns_map:
+ ns_start_position = tok_s_to_ns_map[start_position]
+ if ns_start_position in orig_ns_to_s_map:
+ orig_start_position = orig_ns_to_s_map[ns_start_position]
+
+ if orig_start_position is None:
+ if verbose_logging:
+ logger.info("Couldn't map start position")
+ return orig_text
+
+ orig_end_position = None
+ if end_position in tok_s_to_ns_map:
+ ns_end_position = tok_s_to_ns_map[end_position]
+ if ns_end_position in orig_ns_to_s_map:
+ orig_end_position = orig_ns_to_s_map[ns_end_position]
+
+ if orig_end_position is None:
+ if verbose_logging:
+ logger.info("Couldn't map end position")
+ return orig_text
+
+ output_text = orig_text[orig_start_position:(orig_end_position + 1)]
+ return output_text
+
+
+def _get_best_indices(logits, n_best_size):
+ """Get the n-best logits from a list."""
+ index_and_score = sorted(enumerate(logits), key=lambda x: x[1], reverse=True)
+
+ best_indices = []
+ for i in range(len(index_and_score)):
+ if i >= n_best_size:
+ break
+ best_indices.append(index_and_score[i][0])
+ return best_indices
+
+
+def _compute_softmax(scores):
+ """Compute softmax probability over raw logits."""
+ if not scores:
+ return []
+
+ max_score = None
+ for score in scores:
+ if max_score is None or score > max_score:
+ max_score = score
+
+ exp_scores = []
+ total_sum = 0.0
+ for score in scores:
+ x = math.exp(score - max_score)
+ exp_scores.append(x)
+ total_sum += x
+
+ probs = []
+ for score in exp_scores:
+ probs.append(score / total_sum)
+ return probs
+
+
+def main():
+ parser = argparse.ArgumentParser()
+
+ ## Required parameters
+ parser.add_argument("--aie_model", default=None, type=str, required=True,
+ help="Path to bert-base model compiled with torch_aie. ")
+ parser.add_argument("--predict_file", default=None, type=str, required=True,
+ help="SQuAD json for predictions and evaluation. E.g., dev-v1.1.json or test-v1.1.json")
+    parser.add_argument('--vocab_file',
+                        type=str, default=None, required=True,
+                        help="Vocabulary mapping file BERT was pretrained on")
+ parser.add_argument("--predict_batch_size", default=8, type=int, required=True,
+ help="Batch size for predictions, must match the compile spec of the torch_aie model.")
+
+
+ ## Other parameters
+ parser.add_argument("--output_dir", default="./output_predictions", type=str,
+ help="The output directory where the predictions will be written.")
+ parser.add_argument("--max_seq_length", default=512, type=int,
+ help="The maximum total input sequence length after WordPiece tokenization. Sequences "
+ "longer than this will be truncated, and sequences shorter than this will be padded.")
+ parser.add_argument("--doc_stride", default=128, type=int,
+ help="When splitting up a long document into chunks, how much stride to take between chunks.")
+ parser.add_argument("--max_query_length", default=64, type=int,
+ help="The maximum number of tokens for the question. Questions longer than this will "
+ "be truncated to this length.")
+ parser.add_argument("--n_best_size", default=20, type=int,
+ help="The total number of n-best predictions to generate in the nbest_predictions.json "
+ "output file.")
+ parser.add_argument("--max_answer_length", default=30, type=int,
+ help="The maximum length of an answer that can be generated. This is needed because the start "
+ "and end predictions are not conditioned on one another.")
+ parser.add_argument("--verbose_logging", action='store_true',
+ help="If true, all of the warnings related to data processing will be printed. "
+ "A number of warnings are expected for a normal SQuAD evaluation.")
+ parser.add_argument('--seed',
+ type=int,
+ default=42,
+ help="random seed for initialization")
+ parser.add_argument("--do_lower_case",
+ action='store_true',
+ help="Whether to lower case the input text. True for uncased models, False for cased models.")
+ parser.add_argument('--version_2_with_negative',
+ action='store_true',
+ help='If true, the SQuAD examples contain some that do not have an answer.')
+ parser.add_argument("--eval_script",
+ help="Script to evaluate squad predictions",
+ default="./evaluate_data.py",
+ type=str)
+ args = parser.parse_args()
+ torch_aie.set_device(0)
+ logger.info("PARAMETERs are %s", [str(args)])
+
+ random.seed(args.seed)
+ torch.manual_seed(args.seed)
+
+ if not args.predict_file:
+ raise ValueError(
+ "The `predict_file` must be specified.")
+
+ if not os.path.exists(args.output_dir):
+ os.makedirs(args.output_dir)
+
+    tokenizer = BertTokenizer(args.vocab_file, do_lower_case=args.do_lower_case, max_len=512)  # max_len matches the 512-token max_seq_length
+
+ eval_examples = read_squad_examples(
+ input_file=args.predict_file, is_training=False)
+ eval_features = convert_examples_to_features(
+ examples=eval_examples,
+ tokenizer=tokenizer,
+ max_seq_length=args.max_seq_length,
+ doc_stride=args.doc_stride,
+ max_query_length=args.max_query_length,
+ is_training=False)
+
+ all_input_ids = torch.tensor([f.input_ids for f in eval_features], dtype=torch.int32)
+ all_input_mask = torch.tensor([f.input_mask for f in eval_features], dtype=torch.int32)
+ all_segment_ids = torch.tensor([f.segment_ids for f in eval_features], dtype=torch.int32)
+ all_example_index = torch.arange(all_input_ids.size(0), dtype=torch.int32)
+ eval_data = TensorDataset(all_input_ids, all_input_mask, all_segment_ids, all_example_index)
+ # Run prediction for full data
+ eval_sampler = SequentialSampler(eval_data)
+ eval_dataloader = DataLoader(eval_data, sampler=eval_sampler, batch_size=args.predict_batch_size, drop_last=True)
+ logger.info("Prepare dataset for evaluation done.")
+
+ model = torch.jit.load(args.aie_model)
+ model.eval()
+ inf_stream = torch_aie.npu.Stream("npu:0")
+ fps = []
+ all_results = []
+ logger.info("Prepare aie model for evaluation done, ready to predict.")
+ for input_ids, input_mask, segment_ids, example_indices in tqdm(eval_dataloader, desc="Evaluating"):
+ input_ids = input_ids.to("npu:0")
+ input_mask = input_mask.to("npu:0")
+ segment_ids = segment_ids.to("npu:0")
+
+ with torch.no_grad():
+ inf_s = time.time()
+ with torch_aie.npu.stream(inf_stream):
+ outputs = model(input_ids, segment_ids, input_mask)
+ inf_stream.synchronize()
+ inf_e = time.time()
+ fps.append(inf_e - inf_s)
+ # print(len(fps))
+
+ batch_start_logits, batch_end_logits = outputs
+ batch_start_logits = batch_start_logits.to("cpu")
+ batch_end_logits = batch_end_logits.to("cpu")
+ for i, example_index in enumerate(example_indices):
+ start_logits = batch_start_logits[i].detach().cpu().tolist()
+ end_logits = batch_end_logits[i].detach().cpu().tolist()
+ eval_feature = eval_features[example_index.item()]
+ unique_id = int(eval_feature.unique_id)
+ all_results.append(RawResult(unique_id=unique_id,
+ start_logits=start_logits,
+ end_logits=end_logits))
+
+ logger.info("Predict on the dataset finished.")
+ output_prediction_file = os.path.join(args.output_dir, "predictions.json")
+ output_nbest_file = os.path.join(args.output_dir, "nbest_predictions.json")
+
+ answers, nbest_answers = get_answers(eval_examples, eval_features, all_results, args)
+ with open(output_prediction_file, "w") as f:
+ f.write(json.dumps(answers, indent=4) + "\n")
+ with open(output_nbest_file, "w") as f:
+ f.write(json.dumps(nbest_answers, indent=4) + "\n")
+ logger.info("Predictions are written to json file.")
+
+ logger.info("Calculating the f1 score...")
+ eval_out = subprocess.check_output([sys.executable, args.eval_script,
+ args.predict_file, args.output_dir + "/predictions.json"])
+ scores = str(eval_out).strip()
+ exact_match = float(scores.split(":")[1].split(",")[0])
+ f1 = float(scores.split(":")[2].split("}")[0])
+ logger.info("The predictions have %s exact match and the f1 score on squadv1.1 is %s", exact_match, f1)
+
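+    # Skip the first five batches as warm-up when averaging the inference time.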
+ avg_inf_time = sum(fps[5:]) / len(fps[5:])
+ throughput = args.predict_batch_size / avg_inf_time
+ logger.info("The average inference time is %s and the throughput is %s", avg_inf_time, throughput)
+
+
+if __name__ == "__main__":
+ main()