Information extraction from text documents of the zoning plan of the City of Vienna
Work supported by BRISE-Vienna (UIA04-081), a European Union Urban Innovative Actions project.
The asail2021 tag contains the code in the state presented in our 2021 ASAIL paper. Legacy code can be found in the asail folder.
- Requirements
- Coding guidelines
- Annotated Data Description
- Extraction service
- Demo for attribute names only
- Preprocessing
- Attribute extraction task
- Annotation process
- Development
- References
Install the brise_plandok repository:
pip install .
# Or, for an editable install that follows local changes
pip install -e .
Installing this repository also installs the tuw_nlp
package, a graph-transformation framework. To learn more, visit https://github.com/recski/tuw-nlp.
Ensure that you have at least Java 8 installed, as it is required by the alto library.
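To check which Java version is installed, you can parse the output of java -version. The helpers below are a minimal sketch for illustration (java_major and installed_java_major are hypothetical names, not part of this repository):

```python
import re
import subprocess


def java_major(version_string):
    """Return the major Java version from strings like '1.8.0_292' or '11.0.2'."""
    parts = version_string.split(".")
    # Pre-Java-9 releases use the '1.x' scheme, so the major number is the second field.
    return int(parts[1]) if parts[0] == "1" else int(parts[0])


def installed_java_major():
    """Parse the version reported by `java -version` (printed on stderr)."""
    out = subprocess.run(["java", "-version"], capture_output=True, text=True).stderr
    match = re.search(r'version "([^"]+)"', out)
    return java_major(match.group(1)) if match else None
```

For example, java_major("1.8.0_292") returns 8, which satisfies the Java 8 requirement.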
This repository uses black for code formatting and flake8 for PEP8 compliance. To install the pre-commit hooks, run:
pre-commit install
This creates the .git/hooks/pre-commit file, which automatically reformats all modified files prior to any commit.
To install and run the formatters manually:
pip install black
black .
pip install flake8
flake8 .
See DATA.md.
python brise_plandok/services/full_extractor.py -d <DATA_DIR>
Example: python brise_plandok/services/full_extractor.py -d data/train
The docker image downloads the data from our cloud storage.
# Build docker image
docker build --tag brise-attr-extraction .
# Start service
docker run -p 5000:5000 brise-attr-extraction
You can now reach the service in both cases by calling curl http://localhost:5000/<endpoint>/<doc_id>. If the doc_id does not exist, Not found will be returned.
curl http://localhost:5000/brise-extract-api/7377
# To get minimal psets
curl http://localhost:5000/psets/7377
# To get full psets
curl http://localhost:5000/psets/7377?full=true
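The endpoint URLs above follow a simple pattern. As a sketch, a small helper (hypothetical, not part of the repository) that builds them, e.g. for use with curl or the requests library:

```python
def service_url(endpoint, doc_id, host="localhost", port=5000, full=False):
    """Build a URL for the extraction service, e.g. psets/7377?full=true."""
    url = f"http://{host}:{port}/{endpoint}/{doc_id}"
    # The psets endpoint accepts an optional full=true query parameter.
    return url + "?full=true" if full else url
```

For example, service_url("psets", 7377, full=True) yields the last curl target shown above.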
To run the browser-based demo described in the paper (also available online), first start rule extraction as a service like this:
python brise_plandok/services/attribute_extractor.py
Then run the frontend with this command:
streamlit run brise_plandok/frontend/extract.py
To use the prover of our system, also start the prover service from this repository: https://github.com/adaamko/BRISEprover. This starts a small Flask service on port 5007 that is used by the demo service.
The demo can then be accessed from your web browser at http://localhost:8501/
All steps described below can be run on the sample documents included in this repository under sample_data.
The preprocessed version of all plan documents (as of December 2020) can be downloaded as a single JSON file. If you would like to customize preprocessing, you can also download the raw text documents.
Extract section structure from raw text and run NLP pipeline (sentence segmentation, tokenization, dependency parsing):
python brise_plandok/preproc/plandok.py sample_data/txt/*.txt > sample_data/json/sample.jsonl
To run the current best rule-based extraction, see here.
To run experiments with POTATO, see here.
To have a look at our baseline experiments, see here.
For details about the annotation process, see here.
For development details read more here.
The rule extraction system is described in the following paper:
Gabor Recski, Björn Lellmann, Adam Kovacs, Allan Hanbury: Explainable rule extraction via semantic graphs (...)
The demo also uses the deontic logic prover described in this paper.
The preprocessing pipeline relies on the Stanza library.