I previously tried doing named entity recognition with huggingface/transformers, but I couldn't get it working well for Japanese. This time I'll take it all the way: fine-tuning on top of a Japanese language model and getting named entity extraction to work.
Previous post: Trying NER (named entity recognition) with huggingface's transformers
The fine-tuning code in the huggingface/transformers examples looked a bit complicated and I wasn't sure how to proceed, but apparently Rasa supports transformers too, so I'll try running it on top of Rasa.
First, getting Japanese named entity extraction working in Rasa (using spaCy)
Create a project and initialize it with the Rasa CLI.
$ cd transformer-on-rasa
$ pip install rasa spacy ginza
$ rasa init
$ tree .
.
├── __init__.py
├── __pycache__
│   ├── __init__.cpython-37.pyc
│   └── actions.cpython-37.pyc
├── actions.py
├── config.yml
├── credentials.yml
├── data
│   ├── nlu.md
│   └── stories.md
├── domain.yml
├── endpoints.yml
└── tests
    └── conversation_tests.md
Add the following to nlu.md:
## intent:restaurant_ja
- [渋谷](location)で美味しい[イタリアン](genre)ない?
- [和食](genre)食べたいんだけど、[六本木](location)におすすめある?
- 今度[麻布](location)行くんだけど、[フレンチ](genre)のお店教えて
config.yml
# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: ja_ginza
pipeline:
  - name: "SpacyNLP"
  - name: "SpacyTokenizer"
  - name: "CRFEntityExtractor"

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
  - name: MemoizationPolicy
  - name: MappingPolicy
Run entity extraction. genre isn't being picked up properly, but never mind that for now.
$ rasa train nlu
$ rasa shell nlu
NLU model loaded. Type a message and press enter to parse it.
Next message:
渋谷の美味しいイタリアン
{
  "intent": {
    "name": null,
    "confidence": 0.0
  },
  "entities": [
    {
      "start": 0,
      "end": 2,
      "value": "渋谷",
      "entity": "location",
      "confidence": 0.5657320332682547,
      "extractor": "CRFEntityExtractor"
    }
  ],
  "text": "渋谷の美味しいイタリアン"
}
Using huggingface/transformers
$ pip install transformers
config.yml
# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: ja_ginza
pipeline:
#  - name: "SpacyNLP"
#  - name: "SpacyTokenizer"
  - name: HFTransformersNLP
    model_name: "bert"
    model_weights: "bert-base-japanese-whole-word-masking"
  - name: "LanguageModelTokenizer"
  - name: "CRFEntityExtractor"

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
  - name: MemoizationPolicy
  - name: MappingPolicy
It complained that no model named bert-base-japanese-whole-word-masking exists. Why?! It's supposed to be officially supported!
$ rasa train nlu
Model name 'bert-base-japanese-whole-word-masking' not found in model shortcut name list (bert-base-uncased, bert-large-uncased, bert-base-cased, bert-large-cased, bert-base-multilingual-uncased, bert-base-multilingual-cased, bert-base-chinese, bert-base-german-cased, bert-large-uncased-whole-word-masking, bert-large-cased-whole-word-masking, bert-large-uncased-whole-word-masking-finetuned-squad, bert-large-cased-whole-word-masking-finetuned-squad, bert-base-cased-finetuned-mrpc, bert-base-german-dbmdz-cased, bert-base-german-dbmdz-uncased, bert-base-finnish-cased-v1, bert-base-finnish-uncased-v1, bert-base-dutch-cased). Assuming 'bert-base-japanese-whole-word-masking' is a path, a model identifier, or url to a directory containing tokenizer files.
Tracing through the relevant source code, the cause turned out to be that the Japanese model names aren't registered in transformers' tokenization_bert.py. An oversight, maybe? I might send a pull request later.
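A minimal way to reproduce this outside Rasa (a hedged sketch, assuming transformers v2.8.0; Rasa's HFTransformersNLP resolves model_name "bert" to plain BertTokenizer, whose lookup tables live in tokenization_bert.py):

# Repro sketch: the shortcut maps in tokenization_bert.py don't contain the
# Japanese model names, so the string falls through to path/URL resolution
# and the load fails.
from transformers import BertTokenizer

BertTokenizer.from_pretrained("bert-base-japanese-whole-word-masking")
# raises OSError, since no local directory or downloadable vocab file is found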
To verify the behavior for now, I'll directly edit tokenization_bert.py in the pip-installed package at /Users/username/.pyenv/versions/3.7.3/envs/transformer-on-rasa/lib/python3.7/site-packages/transformers.
PRETRAINED_VOCAB_FILES_MAP = {
    "vocab_file": {
        "bert-base-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-uncased-vocab.txt",
        "bert-large-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-vocab.txt",
        "bert-base-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-vocab.txt",
        "bert-large-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-vocab.txt",
        "bert-base-multilingual-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-uncased-vocab.txt",
        "bert-base-multilingual-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-multilingual-cased-vocab.txt",
        "bert-base-chinese": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-chinese-vocab.txt",
        "bert-base-german-cased": "https://int-deepset-models-bert.s3.eu-central-1.amazonaws.com/pytorch/bert-base-german-cased-vocab.txt",
        "bert-large-uncased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-vocab.txt",
        "bert-large-cased-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-vocab.txt",
        "bert-large-uncased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-uncased-whole-word-masking-finetuned-squad-vocab.txt",
        "bert-large-cased-whole-word-masking-finetuned-squad": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-large-cased-whole-word-masking-finetuned-squad-vocab.txt",
        "bert-base-cased-finetuned-mrpc": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-cased-finetuned-mrpc-vocab.txt",
        "bert-base-german-dbmdz-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-cased-vocab.txt",
        "bert-base-german-dbmdz-uncased": "https://s3.amazonaws.com/models.huggingface.co/bert/bert-base-german-dbmdz-uncased-vocab.txt",
        "bert-base-finnish-cased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-cased-v1/vocab.txt",
        "bert-base-finnish-uncased-v1": "https://s3.amazonaws.com/models.huggingface.co/bert/TurkuNLP/bert-base-finnish-uncased-v1/vocab.txt",
        "bert-base-dutch-cased": "https://s3.amazonaws.com/models.huggingface.co/bert/wietsedv/bert-base-dutch-cased/vocab.txt",
"bert-base-japanese-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-whole-word-masking-vocab.txt",
}
}
PRETRAINED_POSITIONAL_EMBEDDINGS_SIZES = {
    "bert-base-uncased": 512,
    "bert-large-uncased": 512,
    "bert-base-cased": 512,
    "bert-large-cased": 512,
    "bert-base-multilingual-uncased": 512,
    "bert-base-multilingual-cased": 512,
    "bert-base-chinese": 512,
    "bert-base-german-cased": 512,
    "bert-large-uncased-whole-word-masking": 512,
    "bert-large-cased-whole-word-masking": 512,
    "bert-large-uncased-whole-word-masking-finetuned-squad": 512,
    "bert-large-cased-whole-word-masking-finetuned-squad": 512,
    "bert-base-cased-finetuned-mrpc": 512,
    "bert-base-german-dbmdz-cased": 512,
    "bert-base-german-dbmdz-uncased": 512,
    "bert-base-finnish-cased-v1": 512,
    "bert-base-finnish-uncased-v1": 512,
    "bert-base-dutch-cased": 512,
"bert-base-japanese-whole-word-masking": 512,
}
PRETRAINED_INIT_CONFIGURATION = {
    "bert-base-uncased": {"do_lower_case": True},
    "bert-large-uncased": {"do_lower_case": True},
    "bert-base-cased": {"do_lower_case": False},
    "bert-large-cased": {"do_lower_case": False},
    "bert-base-multilingual-uncased": {"do_lower_case": True},
    "bert-base-multilingual-cased": {"do_lower_case": False},
    "bert-base-chinese": {"do_lower_case": False},
    "bert-base-german-cased": {"do_lower_case": False},
    "bert-large-uncased-whole-word-masking": {"do_lower_case": True},
    "bert-large-cased-whole-word-masking": {"do_lower_case": False},
    "bert-large-uncased-whole-word-masking-finetuned-squad": {"do_lower_case": True},
    "bert-large-cased-whole-word-masking-finetuned-squad": {"do_lower_case": False},
    "bert-base-cased-finetuned-mrpc": {"do_lower_case": False},
    "bert-base-german-dbmdz-cased": {"do_lower_case": False},
    "bert-base-german-dbmdz-uncased": {"do_lower_case": True},
    "bert-base-finnish-cased-v1": {"do_lower_case": False},
    "bert-base-finnish-uncased-v1": {"do_lower_case": True},
    "bert-base-dutch-cased": {"do_lower_case": False},
"bert-base-japanese-whole-word-masking": {"do_lower_case": False},
}
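With that in place, a minimal sanity check outside Rasa (hedged: this assumes transformers v2.8.0 and the vocab URL that was live at the time; Rasa's registry resolves "bert" to plain BertTokenizer, which is why patching tokenization_bert.py is what matters here):

# After patching tokenization_bert.py, the shortcut name should now resolve.
# Note BertTokenizer only loads the vocab; it does no MeCab pre-tokenization.
from transformers import BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-japanese-whole-word-masking")
print(tokenizer.cls_token, tokenizer.sep_token)  # expect: [CLS] [SEP]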
※ Added 2020/04/29: It looks like the path where the Japanese model is hosted has changed slightly (the final hyphen is now a slash). If this isn't a temporary change, huggingface/transformers needs fixing as well, so I'll wait and see before sending a pull request.
configuration_bert.py
"bert-base-japanese-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-whole-word-masking/config.json",
modeling_tf_bert.py
"bert-base-japanese-whole-word-masking": "https://s3.amazonaws.com/models.huggingface.co/bert/cl-tohoku/bert-base-japanese-whole-word-masking/tf_model.h5",
Train it.
$ rasa train nlu
Run entity extraction.
$ rasa shell nlu
NLU model loaded. Type a message and press enter to parse it.
Next message:
六本木でイタリアンを食べたい
{
  "intent": {
    "name": null,
    "confidence": 0.0
  },
  "entities": [
    {
      "start": 0,
      "end": 3,
      "value": "六本木",
      "entity": "location",
      "confidence": 0.7346290146112572,
      "extractor": "CRFEntityExtractor"
    },
    {
      "start": 4,
      "end": 9,
      "value": "イタリアン",
      "entity": "genre",
      "confidence": 0.6063883820632936,
      "extractor": "CRFEntityExtractor"
    }
  ],
  "text": "六本木でイタリアンを食べたい"
}
Next message:
Yesss! It works!
Summary
Fine-tuning with BERT has become easy! Implementing this by hand is a ton of work, so this is a huge help. Maybe I'll try replacing some of the natural language processing we do in-house with this.
Added 2020/04/29
This entry only went as far as pinpointing the cause, without a fundamental fix, but after further investigation the problem appears to be on the rasa side, not in huggingface/transformers. I'll send a pull request (whether it gets merged is another matter).
Added 2020/05/05
I sent a pull request, but the advice I got was roughly "supporting individual languages one by one would make things too complicated, so no; try implementing it yourself as a custom component", so that's the direction I went.
config.yml
# Configuration for Rasa NLU.
# https://rasa.com/docs/rasa/nlu/components/
language: ja_ginza
pipeline:
#  - name: "SpacyNLP"
#  - name: "SpacyTokenizer"
  - name: "components.hf_transformers_japanese_nlp.HFTransformersJapaneseNLP"
    model_name: "bert"
    model_weights: "bert-base-japanese-whole-word-masking"
    cache_dir: null
  - name: "LanguageModelTokenizer"
  - name: "CRFEntityExtractor"

# Configuration for Rasa Core.
# https://rasa.com/docs/rasa/core/policies/
policies:
  - name: MemoizationPolicy
  - name: MappingPolicy
Apart from the part that obtains the tokenizer, the custom component is a copy-paste of the original source. Ideally I would just override the method and call the base class implementation, but that raises an error, so this will have to do.
components/hf_transformers_japanese_nlp.py
import logging
from typing import Any, Dict, List, Text, Tuple, Optional

from rasa.nlu.tokenizers.whitespace_tokenizer import WhitespaceTokenizer
from rasa.nlu.components import Component
from rasa.nlu.config import RasaNLUModelConfig
from rasa.nlu.training_data import Message, TrainingData
from rasa.nlu.tokenizers.tokenizer import Token
from rasa.nlu.utils.hugging_face.hf_transformers import HFTransformersNLP
from transformers import BertJapaneseTokenizer
import rasa.utils.train_utils as train_utils
import numpy as np

from rasa.nlu.constants import (
    TEXT,
    LANGUAGE_MODEL_DOCS,
    DENSE_FEATURIZABLE_ATTRIBUTES,
    TOKEN_IDS,
    TOKENS,
    SENTENCE_FEATURES,
    SEQUENCE_FEATURES,
)
logger = logging.getLogger(__name__)
class HFTransformersJapaneseNLP(HFTransformersNLP):
    def _load_model(self) -> None:
        """Try loading the model"""
        from rasa.nlu.utils.hugging_face.registry import (
            model_class_dict,
            model_weights_defaults,
            model_tokenizer_dict,
        )

        self.model_name = self.component_config["model_name"]

        if self.model_name not in model_class_dict:
            raise KeyError(
                f"'{self.model_name}' not a valid model name. Choose from "
                f"{str(list(model_class_dict.keys()))} or create "
                f"a new class inheriting from this class to support your model."
            )

        self.model_weights = self.component_config["model_weights"]
        self.cache_dir = self.component_config["cache_dir"]

        if not self.model_weights:
            logger.info(
                f"Model weights not specified. Will choose default model weights: "
                f"{model_weights_defaults[self.model_name]}"
            )
            self.model_weights = model_weights_defaults[self.model_name]

        logger.debug(f"Loading Tokenizer and Model for {self.model_name}")
        self.tokenizer = BertJapaneseTokenizer.from_pretrained(
            self.model_weights, cache_dir=self.cache_dir
        )
        self.model = model_class_dict[self.model_name].from_pretrained(
            self.model_weights, cache_dir=self.cache_dir
        )

        # Use a universal pad token since all transformer architectures do not have a
        # consistent token. Instead of pad_token_id we use unk_token_id because
        # pad_token_id is not set for all architectures. We can't add a new token as
        # well since vocabulary resizing is not yet supported for TF classes.
        # Also, this does not hurt the model predictions since we use an attention mask
        # while feeding input.
        self.pad_token_id = self.tokenizer.unk_token_id
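For Rasa to import "components.hf_transformers_japanese_nlp" by module path, the components directory needs to be importable from the project root; the layout I'm assuming (an empty __init__.py is fine):

$ tree components
components
├── __init__.py
└── hf_transformers_japanese_nlp.py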
Incidentally, the fix for the model file URL problem on the huggingface/transformers side hasn't been released as of the current latest version, v2.8.0, so it works if you use the master branch:
$ pip install git+https://github.com/huggingface/transformers.git@master