EasyNLP: Machine Reading Comprehension in Chinese and English (Part 7)


```shell
# Model training
python main.py \
  --mode train \
  --app_name=machine_reading_comprehension \
  --worker_gpu=1 \
  --tables=train.tsv,dev.tsv \
  --input_schema=qas_id:str:1,context_text:str:1,question_text:str:1,answer_text:str:1,start_position_character:str:1,title:str:1 \
  --first_sequence=question_text \
  --second_sequence=context_text \
  --sequence_length=384 \
  --checkpoint_dir=./model_dir \
  --learning_rate=3.5e-5 \
  --epoch_num=3 \
  --random_seed=42 \
  --save_checkpoint_steps=500 \
  --train_batch_size=16 \
  --user_defined_parameters='pretrain_model_name_or_path=bert-base-uncased
    language=en
    answer_name=answer_text
    qas_id=qas_id
    start_position_name=start_position_character
    doc_stride=128
    max_query_length=64'
```
```shell
# Model prediction
python main.py \
  --mode predict \
  --app_name=machine_reading_comprehension \
  --worker_gpu=1 \
  --tables=dev.tsv \
  --outputs=dev.pred.csv \
  --input_schema=qas_id:str:1,context_text:str:1,question_text:str:1,answer_text:str:1,start_position_character:str:1,title:str:1 \
  --output_schema=unique_id,best_answer,query,context \
  --first_sequence=question_text \
  --second_sequence=context_text \
  --sequence_length=384 \
  --checkpoint_dir=./model_dir \
  --micro_batch_size=256 \
  --user_defined_parameters='pretrain_model_name_or_path=bert-base-uncased
    language=en
    qas_id=qas_id
    answer_name=answer_text
    start_position_name=start_position_character
    max_query_length=64
    max_answer_length=30
    doc_stride=128
    n_best_size=10
    output_answer_file=dev.ans.csv'
```
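The `doc_stride` and `max_query_length` parameters in the commands above control how contexts longer than `sequence_length` are handled: the question is truncated to `max_query_length` tokens, and the context is split into overlapping windows that advance by `doc_stride` tokens each time. The following is a minimal sketch of this BERT-style windowing on hypothetical token lists, not EasyNLP's internal implementation:

```python
def make_windows(context_tokens, query_tokens,
                 max_seq_len=384, doc_stride=128, max_query_len=64):
    """Split a long context into overlapping windows, BERT-style.

    Reserves room in each window for the truncated query plus the
    three special tokens [CLS] ... [SEP] ... [SEP].
    """
    query = query_tokens[:max_query_len]
    max_ctx = max_seq_len - len(query) - 3  # 3 special tokens
    windows, start = [], 0
    while True:
        windows.append(context_tokens[start:start + max_ctx])
        if start + max_ctx >= len(context_tokens):
            break
        start += doc_stride  # slide forward; windows overlap
    return windows

# A 500-token context with a 3-token query yields two windows of up to
# 384 - 3 - 3 = 378 context tokens, overlapping by 378 - 128 = 250 tokens.
ctx = [f"tok{i}" for i in range(500)]
wins = make_windows(ctx, ["where", "is", "it"])
```

At prediction time, every window is scored and the answers from all windows of a passage are merged, which is why a stride smaller than the window size matters: it guarantees each answer span appears whole in at least one window.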
Besides running everything in one step with main.py, we can also train/predict quickly through the easynlp command-line interface, as follows:
```shell
# Model training
easynlp \
  --mode train \
  --app_name=machine_reading_comprehension \
  --worker_gpu=1 \
  --tables=train.tsv,dev.tsv \
  --input_schema=qas_id:str:1,context_text:str:1,question_text:str:1,answer_text:str:1,start_position_character:str:1,title:str:1 \
  --first_sequence=question_text \
  --second_sequence=context_text \
  --sequence_length=384 \
  --checkpoint_dir=./model_dir \
  --learning_rate=3.5e-5 \
  --epoch_num=5 \
  --random_seed=42 \
  --save_checkpoint_steps=600 \
  --train_batch_size=16 \
  --user_defined_parameters='pretrain_model_name_or_path=bert-base-uncased
    language=en
    answer_name=answer_text
    qas_id=qas_id
    start_position_name=start_position_character
    doc_stride=128
    max_query_length=64'
```
```shell
# Model prediction
easynlp \
  --mode predict \
  --app_name=machine_reading_comprehension \
  --worker_gpu=1 \
  --tables=dev.tsv \
  --outputs=dev.pred.csv \
  --input_schema=qas_id:str:1,context_text:str:1,question_text:str:1,answer_text:str:1,start_position_character:str:1,title:str:1 \
  --output_schema=unique_id,best_answer,query,context \
  --first_sequence=question_text \
  --second_sequence=context_text \
  --sequence_length=384 \
  --checkpoint_dir=./model_dir \
  --micro_batch_size=256 \
  --user_defined_parameters='pretrain_model_name_or_path=bert-base-uncased
    language=en
    qas_id=qas_id
    answer_name=answer_text
    start_position_name=start_position_character
    max_query_length=64
    max_answer_length=30
    doc_stride=128
    n_best_size=10
    output_answer_file=dev.ans.csv'
```
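The prediction-only parameters `n_best_size` and `max_answer_length` govern the standard BERT-style answer extraction: take the top start logits and top end logits, enumerate the valid (start, end) pairs, and keep the best-scoring spans. A simplified sketch of that selection logic on toy logits (not the EasyNLP implementation itself):

```python
import numpy as np

def best_spans(start_logits, end_logits, n_best_size=10, max_answer_length=30):
    """Return (start, end, score) span candidates, best first."""
    starts = np.argsort(start_logits)[::-1][:n_best_size]
    ends = np.argsort(end_logits)[::-1][:n_best_size]
    cands = []
    for s in starts:
        for e in ends:
            # discard reversed spans and spans longer than max_answer_length
            if e < s or e - s + 1 > max_answer_length:
                continue
            cands.append((int(s), int(e),
                          float(start_logits[s] + end_logits[e])))
    return sorted(cands, key=lambda t: t[2], reverse=True)

# Toy example: the model is confident the answer starts at token 1
# and ends at token 2.
start = np.array([0.1, 5.0, 0.2, 0.3, 0.1])
end = np.array([0.0, 0.1, 4.0, 0.2, 0.1])
spans = best_spans(start, end, n_best_size=3, max_answer_length=2)
```

Raising `n_best_size` widens the candidate pool written to the answer file, while `max_answer_length` caps span length so that a spuriously high end logit far downstream cannot produce an absurdly long answer.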
In addition, we provide several ready-to-run bash scripts under the //// folder, so users can also complete model training/evaluation/prediction in one step by running a bash file from the command line. Each bash file takes two arguments: the first is the index of the GPU to run on, usually 0; the second selects the mode, i.e. train/evaluate/predict.
```shell
# Model training
! bash run_train_eval_predict_user_defined_local_en.sh 0 train
# Model prediction
! bash run_train_eval_predict_user_defined_local_en.sh 0 predict
```
Once the model is trained, we can run reading comprehension on any English text: simply convert the text into the input format described above and add the corresponding questions, and the model will predict the answers. Here we take the tribute Nadal published for Federer after his recent retirement as the passage, and manually add two questions: "Where will the Laver Cup take place?" & "Who is he's wife?". Model prediction yields the correct results: "" & "Mirka".
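To run the model on your own passage, the text and question just need to be written as a TSV row matching the `--input_schema` used above. A minimal sketch of building such a row; the `qas_id` and `title` values are arbitrary placeholders, and treating `answer_text` as empty and `start_position_character` as `-1` at predict time is an assumption, since those columns are only meaningful for training:

```python
import csv
import io

# Column order must match --input_schema:
# qas_id, context_text, question_text, answer_text, start_position_character, title
row = [
    "custom-0001",                                     # placeholder qas_id
    "Roger Federer announced his retirement ahead of the Laver Cup.",
    "Where will the Laver Cup take place?",
    "",                                                # answer_text: unused for inference (assumption)
    "-1",                                              # start_position_character: unused for inference (assumption)
    "custom",                                          # placeholder title
]

buf = io.StringIO()
csv.writer(buf, delimiter="\t", lineterminator="\n").writerow(row)
tsv_line = buf.getvalue()
# Append tsv_line to a .tsv file and pass that file via --tables in predict mode.
```

The predicted answer then appears in the `best_answer` column of the file given by `--outputs`, keyed by the `unique_id` from `--output_schema`.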