Merge pull request #797 from kermitt2/finalize-release-0.7.0
Finalize release 0.7.0
kermitt2 committed Jul 17, 2021
2 parents 39a5591 + 966bf70 commit 605c646
Showing 23 changed files with 723 additions and 223 deletions.
26 changes: 26 additions & 0 deletions CHANGELOG.md
@@ -13,6 +13,32 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).
### Fixed


## [0.7.0] – 2021-07-

### Added

+ New YAML configuration: all the settings are now in one single YAML file, and each model can be fully configured independently (see the illustrative sketch after this list)
+ Improvement of the segmentation and header models (for the header model, +1 F1-score on the PMC evaluation and +4 F1-score on bioRxiv), plus improvements for the body and citation models
+ Add figure and table pop-up visualization on PDF in the console demo
+ Add PDF MD5 digest in the TEI results (service only)
+ Language support packages and xpdfrc file for pdfalto (support of CJK and exotic fonts)
+ Prometheus metrics
+ BidLSTM-CRF-FEATURES implementation available for more models
+ Addition of a "How GROBID works" page in the documentation
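
As a purely illustrative sketch of the new single-file configuration (the key names below are examples only; the authoritative schema is the YAML file shipped in `grobid-home/config/` of the release), each model can be pointed at its own engine and settings independently:

```yaml
# Illustrative only -- check the grobid.yaml shipped with the release
# for the actual keys and default values.
grobid:
  grobidHome: "grobid-home"
  concurrency: 10
  models:
    - name: "header"
      engine: "delft"            # run this model with a Deep Learning implementation
      delft:
        architecture: "BidLSTM_CRF_FEATURES"
    - name: "citation"
      engine: "wapiti"           # keep this model on the default CRF implementation
      wapiti:
        nbMaxIterations: 1500
```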

### Changed

+ JitPack release (RIP jcenter)
+ Improved DOI cleaning
+ Speed improvement (around +10%) by factoring out some layout token manipulation
+ Update the CrossRef request implementation to align with the current usage of CrossRef's `X-Rate-Limit-Limit` response header (see the sketch just below)
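
The sketch below shows, in Python, one way a client can honour these CrossRef rate-limit headers; the header names come from the CrossRef REST API documentation, but the pacing policy here is illustrative and is not GROBID's actual (Java) implementation:

```python
# Hedged illustration: pace CrossRef requests from the X-Rate-Limit-* response headers.
# Not GROBID's actual implementation; the header names are documented by CrossRef.
import time
import requests

def crossref_query(doi: str, mailto: str = "you@example.com") -> dict:
    """Fetch one work record from the CrossRef REST API and sleep long enough
    to stay under the advertised rate limit."""
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        params={"mailto": mailto},  # identifies the client for the "polite" pool
        timeout=30,
    )
    resp.raise_for_status()

    # e.g. X-Rate-Limit-Limit: 50, X-Rate-Limit-Interval: 1s
    limit = int(resp.headers.get("X-Rate-Limit-Limit", "50"))
    interval_s = float(resp.headers.get("X-Rate-Limit-Interval", "1s").rstrip("s"))

    time.sleep(interval_s / max(limit, 1))  # on average, at most `limit` requests per interval
    return resp.json()

if __name__ == "__main__":
    record = crossref_query("10.1000/demo_doi")  # placeholder: replace with a real DOI
    print(record["message"].get("title"))
```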

### Fixed

+ Fix base URL in the demo console
+ Add missing pdfalto Graphics information when `-noImage` is used, and fix the graphics data path in the TEI output
+ Fix the tendency to merge tables when they are in close proximity

## [0.6.2] – 2020-03-20

### Added
8 changes: 4 additions & 4 deletions Dockerfile.delft
@@ -2,14 +2,14 @@

## See https://grobid.readthedocs.io/en/latest/Grobid-docker/

## usage example with version 0.6.2-SNAPSHOT:
## docker build -t grobid/grobid:0.6.2-SNAPSHOT --build-arg GROBID_VERSION=0.6.2-SNAPSHOT --file Dockerfile.delft .
## usage example with version 0.7.1-SNAPSHOT:
## docker build -t grobid/grobid:0.7.1-SNAPSHOT --build-arg GROBID_VERSION=0.7.1-SNAPSHOT --file Dockerfile.delft .

## no GPU:
## docker run -t --rm --init -p 8070:8070 -p 8071:8071 -v /home/lopez/grobid/grobid-home/config/grobid.properties:/opt/grobid/grobid-home/config/grobid.properties:ro grobid/grobid:0.6.2-SNAPSHOT
## docker run -t --rm --init -p 8070:8070 -p 8071:8071 -v /home/lopez/grobid/grobid-home/config/grobid.properties:/opt/grobid/grobid-home/config/grobid.properties:ro grobid/grobid:0.7.1-SNAPSHOT

## allocate all available GPUs (only Linux with proper nvidia driver installed on host machine):
## docker run --rm --gpus all --init -p 8070:8070 -p 8071:8071 -v /home/lopez/obid/grobid-home/config/grobid.properties:/opt/grobid/grobid-home/config/grobid.properties:ro grobid/grobid:0.6.2-SNAPSHOT
## docker run --rm --gpus all --init -p 8070:8070 -p 8071:8071 -v /home/lopez/grobid/grobid-home/config/grobid.properties:/opt/grobid/grobid-home/config/grobid.properties:ro grobid/grobid:0.7.1-SNAPSHOT

# -------------------
# build builder image
48 changes: 29 additions & 19 deletions Readme.md
@@ -24,28 +24,26 @@ GROBID is a machine learning library for extracting, parsing and re-structuring
The following functionalities are available:

- __Header extraction and parsing__ from articles in PDF format. The extraction here covers the usual bibliographical information (e.g. title, abstract, authors, affiliations, keywords, etc.).
- __References extraction and parsing__ from articles in PDF format, around .87 f-score against on an independent PubMed Central set of 1943 PDF containing 90,125 references. All the usual publication metadata are covered (including DOI, PMID, etc.).
- __Citation contexts recognition and resolution__ to the full bibliographical references of the article. The accuracy of citation contexts resolution is above .76 f-score (which corresponds to both the correct identification of the citation callout and its correct association with a full bibliographical reference).
- Parsing of __references in isolation__ (around .90 f-score at instance-level, .95 f-score at field level).
- __References extraction and parsing__ from articles in PDF format, around .87 F1-score against an independent PubMed Central set of 1,943 PDFs containing 90,125 references, and around .89 on a similar bioRxiv set. All the usual publication metadata are covered (including DOI, PMID, etc.).
- __Citation contexts recognition and resolution__ of the full bibliographical references of the article. The accuracy of citation contexts resolution is above .78 F1-score (which corresponds to both the correct identification of the citation callout and its correct association with a full bibliographical reference).
- Parsing of __references in isolation__ (above .90 F1-score at instance-level, .95 F1-score at field level).
- __Parsing of names__ (e.g. person title, forenames, middlename, etc.), in particular author names in header, and author names in references (two distinct models).
- __Parsing of affiliation and address__ blocks.
- __Parsing of dates__, ISO normalized day, month, year.
- __Full text extraction and structuring__ from PDF articles, including a model for the overall document segmentation and models for the structuring of the text body (paragraph, section titles, reference callout, figure, table, etc.).
- __Consolidation/resolution of the extracted bibliographical references__ using the [biblio-glutton](https://github.com/kermitt2/biblio-glutton) service or the [CrossRef REST API](https://github.com/CrossRef/rest-api-doc). In both cases, DOI resolution performance is higher than 0.95 f-score from PDF extraction.
- __Consolidation/resolution of the extracted bibliographical references__ using the [biblio-glutton](https://github.com/kermitt2/biblio-glutton) service or the [CrossRef REST API](https://github.com/CrossRef/rest-api-doc). In both cases, DOI resolution performance is higher than 0.95 F1-score from PDF extraction.
- __Extraction and parsing of patent and non-patent references in patent__ publications.
- __PDF coordinates__ for extracted information, making it possible to create "augmented" interactive PDFs (a minimal service-call sketch follows this list).
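
As a minimal sketch of how these functionalities are typically consumed, the snippet below posts a PDF to a locally running GROBID server and asks for coordinates in the TEI result; the endpoint and parameter names follow the GROBID service documentation, but verify them against your installed version:

```python
# Minimal GROBID web service call (sketch); assumes a server running on localhost:8070.
import requests

GROBID_URL = "http://localhost:8070/api/processFulltextDocument"

def process_pdf(pdf_path: str) -> str:
    """Send one PDF to GROBID and return the TEI XML result as a string."""
    with open(pdf_path, "rb") as f:
        response = requests.post(
            GROBID_URL,
            files={"input": f},
            data={
                "consolidateHeader": "1",                    # consolidate header metadata
                "teiCoordinates": ["biblStruct", "figure"],  # request PDF coordinates
            },
            timeout=120,
        )
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    print(process_pdf("article.pdf")[:500])  # path to any local PDF
```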

In a complete PDF processing, GROBID manages 55 final labels used to build relatively fine-grained structures, from traditional publication metadata (title, author first/last/middlenames, affiliation types, detailed address, journal, volume, issue, pages, doi, pmid, etc.) to full text structures (section title, paragraph, reference markers, head/foot notes, figure headers, etc.).
In a complete PDF processing, GROBID manages 55 final labels used to build relatively fine-grained structures, from traditional publication metadata (title, author first/last/middlenames, affiliation types, detailed address, journal, volume, issue, pages, doi, pmid, etc.) to full text structures (section title, paragraph, reference markers, head/foot notes, figure captions, etc.).

GROBID includes a comprehensive web service API, batch processing, a JAVA API, a Docker image, a generic evaluation framework (precision, recall, etc., n-fold cross-evaluation) and the semi-automatic generation of training data.

GROBID can be considered as production ready. Deployments in production includes ResearchGate, HAL Research Archive, INIST-CNRS, CERN (Invenio), scite.ai, and many more. The tool is designed for high scalability in order to address the full scientific literature corpus.
GROBID can be considered as production ready. Deployments in production include ResearchGate, Internet Archive Scholar, HAL Research Archive, INIST-CNRS, CERN (Invenio), scite.ai, Academia.edu, and many more. The tool is designed for speed and high scalability in order to address the full scientific literature corpus.

GROBID should run properly "out of the box" on Linux (64-bit) and macOS. We currently cannot ensure support for Windows as we did before (help welcome!).

GROBID uses optionnally Deep Learning models relying on the [DeLFT](https://github.com/kermitt2/delft) library, a task-agnostic Deep Learning framework for sequence labelling and text classification. The tool can run with feature engineered CRF (default), Deep Learning architectures (with or without layout feature channels) or any mixtures of CRF and DL to balance scalability and accuracy.

For more information on how the tool works, on its key features and [benchmarking](https://grobid.readthedocs.io/en/latest/Benchmarking/), visit the [GROBID documentation](https://grobid.readthedocs.org).
GROBID optionally uses Deep Learning models relying on the [DeLFT](https://github.com/kermitt2/delft) library, a task-agnostic Deep Learning framework for sequence labelling and text classification. The tool can run with feature-engineered CRF (default), Deep Learning architectures (with or without layout feature channels) or any mixture of CRF and DL to balance scalability and accuracy. These models use joint text and visual/layout information provided by [pdfalto](https://github.com/kermitt2/pdfalto).

## Demo

@@ -57,7 +55,7 @@ _Warning_: Some quota and query limitations apply to the demo server! Please be c

## Clients

For helping to exploit GROBID service at scale, we provide clients written in Python, Java, node.js using the [web services](https://grobid.readthedocs.io/en/latest/Grobid-service/) for parallel batch processing:
To facilitate using the GROBID service at scale, we provide clients written in Python, Java and node.js, using the [web services](https://grobid.readthedocs.io/en/latest/Grobid-service/) for parallel batch processing (a rough do-it-yourself sketch closes this section):

- <a href="https://github.com/kermitt2/grobid-client-python" target="_blank">Python GROBID client</a>
- <a href="https://github.com/kermitt2/grobid-client-java" target="_blank">Java GROBID client</a>
@@ -69,21 +67,35 @@ We have recently been able to run the complete fulltext processing at around 10.

In addition, a Java example project is available to illustrate how to use GROBID as a Java library: [https://github.com/kermitt2/grobid-example](https://github.com/kermitt2/grobid-example). The example project uses the GROBID Java API to extract header metadata and citations from a PDF and output the results in BibTeX format.

Finally, the following python utilities can be used to create structured full text corpora of scientific articles simply by indicating a list of strong identifiers like DOI or PMID, performing the identification of online Open Access PDF, the harvesting, the metadata agreegation and the Grobid processing in one step at scale: [article-dataset-builder](https://github.com/kermitt2/article-dataset-builder)
Finally, the following Python utility can be used to create structured full text corpora of scientific articles. The tool simply takes a list of strong identifiers like DOI or PMID, and performs the identification of online Open Access PDFs, full text harvesting, metadata aggregation and GROBID processing in one workflow at scale: [article-dataset-builder](https://github.com/kermitt2/article-dataset-builder)
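
As a rough do-it-yourself illustration of the parallel batch pattern that the clients above implement (they additionally handle retries, 503 back-off, configuration, etc., so prefer them in practice), assuming a GROBID server on localhost:8070:

```python
# DIY parallel batch processing against a GROBID server (illustration only;
# the dedicated clients linked above are the recommended way to do this).
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path
import requests

GROBID_URL = "http://localhost:8070/api/processFulltextDocument"

def process_one(pdf: Path) -> None:
    with pdf.open("rb") as f:
        r = requests.post(GROBID_URL, files={"input": f}, timeout=300)
    if r.status_code == 200:
        pdf.with_suffix(".tei.xml").write_text(r.text, encoding="utf-8")
    else:
        print(f"{pdf.name}: HTTP {r.status_code}")  # 503 usually means the server queue is full

if __name__ == "__main__":
    pdfs = sorted(Path("pdfs").glob("*.pdf"))        # directory of input PDFs (adjust)
    with ThreadPoolExecutor(max_workers=8) as pool:  # keep in line with the server concurrency
        list(pool.map(process_one, pdfs))
```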

## How GROBID works

Visit the [documentation page describing the system](https://grobid.readthedocs.io/en/latest/Principles/). To summarize, the key design principles of GROBID are:

- GROBID uses a [cascade of sequence labeling models](https://grobid.readthedocs.io/en/latest/Principles/#document-parsing-as-a-cascade-of-sequence-labeling-models) to parse a document.

- The different models [do not work on text, but on **Layout Tokens**](https://grobid.readthedocs.io/en/latest/Principles/#layout-tokens-not-text) to exploit the various visual/layout information available for every token.

- GROBID does not use training data derived from existing publisher XML documents, but [small, high quality sets](https://grobid.readthedocs.io/en/latest/Principles/#training-data-qualitat-statt-quantitat) of manually labeled training data.

- Technical choices and [default settings](https://grobid.readthedocs.io/en/latest/Principles/#balancing-accuracy-and-scalability) are driven by the ability to process PDF quickly, with commodity hardware and with good parallelization and scalability capacities.

Detailed end-to-end [benchmarking](https://grobid.readthedocs.io/en/latest/Benchmarking/) is available in the [GROBID documentation](https://grobid.readthedocs.org) and continuously updated.
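
To make the first two principles concrete, here is a schematic toy sketch in Python (deliberately not GROBID's actual Java API): models are chained in a cascade, and each one labels layout-token objects carrying visual features rather than bare strings:

```python
# Toy cascade of "sequence labeling models" over layout tokens (schematic only;
# GROBID's real models are trained CRF/deep learning models, not rules).
from dataclasses import dataclass
from typing import List

@dataclass
class LayoutToken:
    text: str
    font_size: float
    bold: bool
    y: float            # vertical position on the page
    label: str = ""     # filled in by a labeling model

def segmentation_model(tokens: List[LayoutToken]) -> List[LayoutToken]:
    # Stand-in rule for a trained model: route tokens to coarse document zones.
    for t in tokens:
        t.label = "<header>" if (t.bold and t.y < 200) else "<body>"
    return tokens

def header_model(tokens: List[LayoutToken]) -> List[LayoutToken]:
    # Second stage of the cascade: refine only the tokens routed to <header>.
    for t in tokens:
        t.label = "<title>" if t.font_size > 14 else "<author>"
    return tokens

def cascade(tokens: List[LayoutToken]) -> List[LayoutToken]:
    tokens = segmentation_model(tokens)
    header_zone = [t for t in tokens if t.label == "<header>"]
    header_model(header_zone)   # each downstream model only sees its own zone
    return tokens
```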

## GROBID Modules

A series of additional modules have been developed for performing __structure-aware__ text mining directly on scholarly PDFs, reusing GROBID's PDF processing and sequence labelling weaponry:

- [software-mention](https://github.com/Impactstory/software-mentions): recognition of software mentions and attributes in scientific literature
- [grobid-quantities](https://github.com/kermitt2/grobid-quantities): recognition and normalization of physical quantities/measurements
- [grobid-superconductors](https://github.com/lfoppiano/grobid-superconductors): recognition of superconductor material and properties in scientific literature
- [entity-fishing](https://github.com/kermitt2/entity-fishing), a tool for extracting Wikidata entities from text and documents, can also use GROBID to pre-process scientific articles in PDF, leading to more precise and relevant entity extraction and the capacity to annotate the PDF with interactive layout.
- [dataseer-ml](https://github.com/dataseer/dataseer-ml): identification of sections and sentences introducing a dataset in a scientific article, and classification of the type of this dataset.
- [grobid-ner](https://github.com/kermitt2/grobid-ner): named entity recognition
- [grobid-astro](https://github.com/kermitt2/grobid-astro): recognition of astronomical entities in scientific papers
- [grobid-bio](https://github.com/kermitt2/grobid-bio): a bio-entity tagger using BioNLP/NLPBA 2004 dataset
- [grobid-dictionaries](https://github.com/MedKhem/grobid-dictionaries): structuring dictionaries in raw PDF format

## Release and changes

@@ -121,5 +133,3 @@ If you want to cite this work, please refer to the present GitHub project, toget
```

See the [GROBID documentation](https://grobid.readthedocs.org/en/latest/References) for more related resources.

<!-- markdownlint-disable-file MD033 -->