
fAIr: AI-assisted Mapping

An open, AI-assisted mapping service for humanitarian purposes

  • CI/CD: Backend Build, Frontend Build
  • Tech Stack: Django, React, Docker, PostgreSQL, TensorFlow, PyTorch
  • Code Style: pre-commit, Black
  • Community: Slack, All Contributors
  • Submodules: fairpredictor, fair-utilities
  • Other Info: Roadmap, Figma Design, Features Plan, License

fAIr is an open AI-assisted mapping service developed by the Humanitarian OpenStreetMap Team (HOT) that aims to improve the efficiency and accuracy of mapping efforts for humanitarian purposes. The service uses AI models, specifically computer vision techniques, to detect objects such as buildings, roads, waterways, and trees from satellite and UAV imagery.

The name fAIr is derived from the following terms:

  • f: for freedom and free and open-source software
  • AI: for Artificial Intelligence
  • r: for resilience and our responsibility for our communities and the role we play within humanitarian mapping

Features

  • Intuitive and fair AI-assisted mapping tool
  • Open-source AI models created and trained by local communities
  • Uses open-source satellite and UAV imagery from HOT's OpenAerialMap (OAM) to detect map features and suggest additions to OpenStreetMap (OSM)
  • Constant feedback loop to eliminate model biases and ensure models are relevant to local communities

Unlike other AI data producers, fAIr is a free and open-source AI service that allows OSM community members to create and train their own AI models for mapping in their region of interest and/or humanitarian need. The goal of fAIr is to provide access to AI-assisted mapping across mobile and in-browser editors, using community-created AI models, and to ensure that the models are relevant to the communities where the maps are being created to improve the conditions of the people living there.

To eliminate model biases, fAIr is built to work with local communities and receive constant feedback on the models, which progressively improves the computer vision models. The AI models suggest detected features to be added to OpenStreetMap (OSM), but mass import into OSM is not planned. Whenever an OSM mapper uses the AI models for assisted mapping and completes corrections, fAIr can take those corrections as feedback to enhance the AI model’s accuracy.

Product Roadmap (Users' Roadmap)

Status legend: ✅ released · 🔄 in development · 📅 planned

  • ✅ Adopting the YOLOv8 model: improvements to the prediction algorithm (v2.0.1+)
  • ✅ New UI/UX: a redesign to enhance the user experience (v2.0.10+)
  • ✅ fAIr evaluation: detailed research with Masaryk University & Missing Maps Czechia and Slovakia; you are welcome to join the efforts, and the final report is available here
  • ✅ User profile handling: enable users to log in easily and get insights into their activity, their own models/datasets, and submitted trainings
  • ✅ Notification features: a training status change triggers a web/email notification letting the user know whether training finished successfully or failed
  • 🔄 Replicable models: enable users to run a pre-trained model on new imagery or on a different area of their choice, using different satellite imagery
  • 🔄 Offline AI prediction: enable users to submit prediction requests with any pre-trained model and any imagery, process them in the background, and return the results to the user
  • 📅 Post-processing enhancement: users get enhanced geometry features (points/polygons) based on the needs of the mapping process
  • 📅 fAIrSwipe: enable users to validate fAIr-generated features and push them into OSM by integrating fAIr with MapSwipe (more details)

👀 You can follow the details and scope of each of the above features here, and you can follow the Figma design progress for the features currently in development 🔄.

A higher-level roadmap for 2025 can be found on GitHub.

General Workflow of fAIr

[Diagram: general workflow of fAIr]

  1. We expect a fully mapped and validated task in the project area that the model will be trained on.
  2. fAIr uses OSM features as labels, fetched from the Raw Data API (https://github.com/hotosm/raw-data-api), and imagery tiles from OpenAerialMap (https://map.openaerialmap.org/); a short sketch of this step follows the list.
  3. Once the data is ready, fAIr supports creating a local model for the provided input area and publishes the model for that area so it can be applied to the rest of the similar area.
  4. Feedback is an important aspect: if mappers are not satisfied with the predictions fAIr makes, they can submit feedback, and a community manager can apply it to the model so the model keeps learning.
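
For illustration, here is a minimal sketch of step 2: requesting OSM features (labels) for a training area from the Raw Data API and listing the OpenAerialMap tiles that cover it. The endpoint path, request-body keys, zoom level, and tile URL template are assumptions made for this sketch, not necessarily the exact calls fAIr makes internally.

```python
# Sketch only: the Raw Data API endpoint and request-body keys below are
# assumptions based on its public docs, and TMS_URL_TEMPLATE is a placeholder
# for whichever OpenAerialMap layer you choose.
import requests
import mercantile

RAW_DATA_API = "https://api-prod.raw-data.hotosm.org/v1/snapshot/"  # assumed endpoint
TMS_URL_TEMPLATE = "https://tiles.openaerialmap.org/<image-id>/{z}/{x}/{y}"  # placeholder

# Training area of interest as a GeoJSON polygon (lon/lat).
aoi = {
    "type": "Polygon",
    "coordinates": [[[30.27, -2.00], [30.29, -2.00], [30.29, -1.98],
                     [30.27, -1.98], [30.27, -2.00]]],
}

# 1. Ask the Raw Data API for OSM features inside the AOI (these become labels).
#    Tag filters (e.g. restricting to building=*) can be added per the API docs.
resp = requests.post(
    RAW_DATA_API,
    json={"geometry": aoi, "outputType": "geojson"},
    timeout=60,
)
resp.raise_for_status()
print("Raw Data API response:", resp.json())  # a task to poll for the download link

# 2. Enumerate the zoom-19 XYZ tiles covering the AOI; these are the imagery
#    tiles that would be fetched from the chosen OpenAerialMap layer.
west, south, east, north = 30.27, -2.00, 30.29, -1.98
tiles = list(mercantile.tiles(west, south, east, north, zooms=19))
print(f"{len(tiles)} tiles cover the AOI, e.g.",
      TMS_URL_TEMPLATE.format(z=tiles[0].z, x=tiles[0].x, y=tiles[0].y))
```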

fAIr Architecture

[Diagram: fAIr architecture]

The backend uses a library we call fAIr utilities (fair-utilities) to handle:

 1. Data preparation for the models
 2. Model training
 3. The inference process
 4. Post-processing (converting the predicted features to geodata); a small sketch of this step follows the list
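
As a small illustration of stage 4, the following self-contained sketch converts a predicted binary mask into vector geodata with rasterio and shapely. It only shows the idea and is not the actual fAIr utilities implementation; the mask, tile bounds, and "building" class are made-up values.

```python
# Illustration of post-processing: raster prediction mask -> GeoJSON features.
# The mask, tile bounds, and "building" class are made-up values for the sketch.
import numpy as np
import rasterio.features
from rasterio.transform import from_bounds
from shapely.geometry import shape

# Fake 256x256 prediction mask (1 = predicted building pixel, 0 = background).
mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:140, 80:130] = 1

# Affine transform mapping pixel coordinates to the tile's geographic bounds.
transform = from_bounds(30.27, -2.00, 30.28, -1.99, mask.shape[1], mask.shape[0])

# Vectorize every connected region where the mask equals 1.
features = [
    {"type": "Feature", "geometry": geom, "properties": {"class": "building"}}
    for geom, value in rasterio.features.shapes(mask, mask=mask == 1, transform=transform)
    if value == 1
]

print(f"{len(features)} predicted feature(s)")
print(shape(features[0]["geometry"]).bounds)  # lon/lat bounds of the first polygon
```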

Local Installation [DEV]

Check out the Docker Installation docs.

Get involved!

Licenses

Imagery License

Imagery Submission

By submitting imagery links to fAIr for model creation, you:

  1. Grant fAIr permission to download tiles covering your specified area of interest.
  2. Authorize fAIr to use these tiles for training and inference.
  3. Allow fAIr to redistribute the downloaded tiles to anyone who wishes to view or reproduce the dataset used for model training.

Training vs Inference

  • Training: image tiles will be downloaded, stored, and published as part of the training dataset for the specified area only (not the full image).
  • Inference: image tiles will not be stored. They are only cached temporarily during prediction and removed afterward. Only the results, the prediction area, and the source imagery reference are stored.

Copyright

  • The original copyright remains with the imagery’s source or rights holder.

License Grant

  • You grant fAIr the right to license the downloaded tiles under CC BY 4.0.

Commercial TMS Notice

  • If you are using a commercial TMS (Tile Map Service) with your own token, fAIr will download, store, and derive information from the tiles for training in your specified area.
  • For inference, tiles are cached only for prediction and removed afterward.

You must verify that your imagery provider’s license is compatible with fAIr’s intended use.

Imagery License Compliance

  • When submitting imagery to fAIr, ensure you are not violating the license of the TMS or imagery provider.
  • If you are using imagery from OpenAerialMap, review the legal page for applicable terms.

Extended Use

  • If you plan to use the API or imagery services beyond the scope of this license, reach out to info@hotosm.org for guidance.

Labels License

Labels Submission

By submitting labels to fAIr for model creation, you:

  1. Grant fAIr permission to use the labels you provide for training.
  2. Allow fAIr to redistribute these labels under the same license applied to the training area you specify.
  3. Acknowledge responsibility for the content of the labels you share.

License Type

  • All labels used and shared by fAIr are licensed under the ODbL 1.0.
  • This is the same license used by OpenStreetMap.

Labels from OpenStreetMap

  • If labels are pulled from OpenStreetMap, no user account information is included.
  • Only the attributes and geometry are extracted for use.

Labels You Upload Yourself

  • If you upload your own labels, you must comply with ODbL.
  • You are fully responsible for the data you submit and what information you choose to include or exclude.

Compliance

  • Make sure the labels you provide can legally be shared under ODbL.
  • fAIr will redistribute labels only under ODbL, making them available for others working with the same training area.
