The NLP Recipes Team
The goal of text summarization is to extract or generate a concise and accurate summary of a given text document while preserving its key information. Text summarization methods are either extractive or abstractive. Extractive models select (extract) existing key sentences or passages from the input document, while abstractive models generate new sequences of words (or sentences) that describe or summarize the input document.
- UniLM: UniLM is a state-of-the-art model developed by Microsoft Research Asia (MSRA). The model is pre-trained on a large unlabeled natural language corpus (English Wikipedia and BookCorpus) and can be fine-tuned on different types of labeled data for various NLP tasks like text classification and abstractive summarization.
Supported models: unilm-large-cased and unilm-base-cased.
- BERTSum: BERTSum is an encoder architecture designed for text summarization. It can be used together with different decoders to support both extractive and abstractive summarization.
Supported models: bert-base-uncased (extractive and abstractive) and distilbert-base-uncased (extractive).
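To make the extractive setting concrete, here is a simplified sketch that is not the BERTSum architecture or this repository's API: it scores each sentence by the cosine similarity between its embedding and the whole-document embedding from a pretrained encoder, then keeps the top-scoring sentences. The helper names and the sentence-splitting heuristic are illustrative only.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModel.from_pretrained("distilbert-base-uncased")
model.eval()

def embed(text):
    # Mean-pool the encoder's last hidden states as a crude text representation.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0)

def extractive_summary(document, k=2):
    # Score each sentence against the whole document and keep the top-k,
    # preserving the original sentence order in the output.
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    doc_vec = embed(document)
    scores = [torch.cosine_similarity(embed(s), doc_vec, dim=0) for s in sentences]
    top = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)[:k]
    return ". ".join(sentences[i] for i in sorted(top)) + "."
```

BERTSum goes further than this baseline by training sentence-level representations end to end for the selection task, and by pairing the encoder with a decoder for the abstractive setting.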
Figure 1: Sample outputs. The sample abstractive summary is the output of a fine-tuned unilm-base-cased model, and the sample extractive summary is the output of a fine-tuned distilbert-base-uncased model; both models were fine-tuned on the CNN/Daily Mail dataset.
All model implementations support distributed training and multi-GPU inference. For abstractive summarization, we also support mixed-precision training and inference. Please check out our Azure Machine Learning distributed training example for extractive summarization here.
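For readers unfamiliar with mixed precision, the sketch below shows a minimal training step using PyTorch's torch.cuda.amp. The repository's trainers handle this internally, so the model, batch, and optimizer here are placeholders, and the sketch assumes a Hugging Face-style model whose forward pass returns an object with a .loss attribute.

```python
import torch
from torch.cuda.amp import GradScaler, autocast

scaler = GradScaler()

def train_step(model, batch, optimizer):
    optimizer.zero_grad()
    with autocast():                      # run the forward pass in FP16 where it is safe
        loss = model(**batch).loss        # placeholder: a model whose output exposes .loss
    scaler.scale(loss).backward()         # scale the loss to avoid FP16 gradient underflow
    scaler.step(optimizer)                # unscale gradients, then take the optimizer step
    scaler.update()
    return loss.item()
```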
Informativeness, fluency, and succinctness are the three aspects used to evaluate the quality of a summary. Summaries are commonly evaluated quantitatively with ROUGE scores, a standard set of metrics that measure the overlap between machine-generated text and human-written reference text. Because setting up an environment to run ROUGE evaluation is not straightforward, we have included utilities and an example notebook that show how to set up the evaluation environment and how the metrics can be computed.
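As a stand-in for the repository's own evaluation utilities, the snippet below computes ROUGE-1, ROUGE-2, and ROUGE-L with the rouge-score package (an assumption about tooling; the example notebook may use a different backend). It reports the precision, recall, and F1 components of the n-gram overlap between a candidate summary and a reference.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "the cat sat on the mat"
candidate = "a cat was sitting on the mat"

scores = scorer.score(reference, candidate)   # arguments are (target, prediction)
for name, result in scores.items():
    print(f"{name}: precision={result.precision:.3f} "
          f"recall={result.recall:.3f} f1={result.fmeasure:.3f}")
```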