Charlene Chambliss
2020 Jan 17
Between my time in data science at Curology and joining Primer, I worked with Nina Lopatina at IQT Labs on a project to apply BERT to named-entity recognition.
Nina’s team needed two NER models: one for English text, and one for Russian. Their plan was to embed the models into an error analysis interface they were using to assess the quality of their Russian-to-English machine translation models. My task was to build a Git repo that anyone could use to train an NER model in English or Russian, simply by cloning the repo and running the appropriate scripts.
This was a rewarding learning experience, and the project was a great success. Nina and her team were primarily interested in the PERSON entity type, and the English and Russian models achieved F1 scores of 0.95 and 0.93 on the PERSON tag, respectively, which is right around state of the art for person recognition in both languages.
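For readers unfamiliar with how NER models are scored: F1 for a tag like PERSON is typically computed at the entity level, meaning a predicted span counts as correct only if its boundaries exactly match a gold span. Here is a minimal, illustrative sketch of that computation over BIO-tagged sequences (the tag sequences and helper names are made up for the example, not taken from the project):

```python
def extract_person_spans(tags):
    """Return the set of (start, end) spans tagged as PERSON in a BIO sequence."""
    spans, start = set(), None
    for i, tag in enumerate(tags):
        if tag == "B-PER":
            if start is not None:
                spans.add((start, i))
            start = i
        elif tag != "I-PER" and start is not None:
            spans.add((start, i))
            start = None
    if start is not None:
        spans.add((start, len(tags)))
    return spans

def person_f1(gold_tags, pred_tags):
    """Entity-level F1: a span is correct only if it matches a gold span exactly."""
    gold = extract_person_spans(gold_tags)
    pred = extract_person_spans(pred_tags)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Two PERSON spans in the gold tags; the model finds only the first one.
gold = ["B-PER", "I-PER", "O", "O", "B-PER", "O"]
pred = ["B-PER", "I-PER", "O", "O", "O", "O"]
print(round(person_f1(gold, pred), 2))  # precision 1.0, recall 0.5 -> 0.67
```

In practice a library such as seqeval handles this bookkeeping (including multiple entity types), but the logic above is the gist of what an entity-level F1 score measures.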
The series is divided into two parts:
- How to Fine-Tune BERT for Named Entity Recognition
- Lessons Learned from Fine-Tuning BERT for Named Entity Recognition
In part 1, I describe the steps for training the model and provide links to the relevant code for each. In part 2, I talk about some of the issues that arose over the course of the project and how I tackled them, also providing links where helpful.
Let me know if you find these helpful! Recently I chatted with someone who had used these posts to train a token classification model for a document redaction task, which I thought was super cool.
Happy reading!