This project attempts to map the organization of the US Federal Government by gathering and consolidating information from various directories.
Current sources:
- SAM.gov Federal Hierarchy Public API
- Federal Register Agencies API
- USA.gov A-Z Index of U.S. Government Departments and Agencies
- OPM Federal Agencies List
- CISA .gov data
- FY 2024 Federal Budget Outlays
- USASpending API Agencies & Sub-agencies
- United States Government Manual
- U.S. Digital Registry
Each source is scraped into raw JSON (see the out directory), including fields for the organizational unit's name and parent (if any), unique ID/parent-ID fields (where names are not unique), and any other attribute data for that organization available from the source.
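As a rough illustration, a raw scraped record might look like the following (the field names and values here are hypothetical and vary by source):

```json
{
  "id": "100006688",
  "parent_id": "100000000",
  "name": "OFFICE OF THE SECRETARY",
  "parent_name": "DEPARTMENT OF AGRICULTURE",
  "attributes": {
    "source": "sam.gov",
    "type": "OFFICE"
  }
}
```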
A normalized name (still a work in progress) is then added, which corrects letter case and spacing and expands acronyms. Acronyms are selected and verified manually using data from UCSD GovSpeak and the DOD Dictionary of Military and Associated Terms, with manual entry where needed.
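A minimal sketch of this kind of normalization, assuming a manually curated acronym table (the table and function below are illustrative, not the project's actual code):

```python
import re

# Illustrative acronym table; the real list is curated manually from
# GovSpeak, the DOD dictionary, and manual entry.
ACRONYMS = {
    "USDA": "United States Department of Agriculture",
    "OPM": "Office of Personnel Management",
}

def normalize_name(raw: str) -> str:
    """Collapse whitespace, fix letter case, and expand known acronyms."""
    name = re.sub(r"\s+", " ", raw).strip()
    words = []
    for word in name.split(" "):
        if word.upper() in ACRONYMS:
            words.append(ACRONYMS[word.upper()])
        else:
            # Naively title-case all-caps words; leave mixed case alone.
            words.append(word.title() if word.isupper() else word)
    return " ".join(words)

print(normalize_name("OFFICE  OF THE SECRETARY, USDA"))
# -> "Office Of The Secretary, United States Department of Agriculture"
```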
Each source is then imported into a tree and exported in the following formats for easy consumption (an illustrative flat-JSON entry is shown after the list):
- Plain text tree
- JSON flat format (with path to each element)
- JSON nested tree format
- CSV format (with embedded JSON attributes)
- Wide CSV format (with flattened attributes)
- DOT file (does not include attributes)
- GEXF graph file (includes flattened attributes)
- GraphML graph file (includes flattened attributes)
- Cytoscape.js JSON format (includes flattened attributes)
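For example, an entry in the JSON flat format might look like this (the exact keys and path separator shown are illustrative):

```json
{
  "name": "Office of the Secretary",
  "path": "Federal Government|Department of Agriculture|Office of the Secretary",
  "attributes": {
    "source": "sam.gov",
    "id": "100006688"
  }
}
```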
To merge the lists, each tree is merged into a selected base tree by comparing the normalized name of each node against the names of the nodes in the base tree using a fuzzy matching algorithm. The similarity score between each pair of parents is incorporated into the node score, to better distinguish cases where the same or a similar office or program name is used in different organizations.
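A sketch of this kind of combined scoring, using the rapidfuzz library (the weighting and function below are illustrative assumptions, not the project's actual algorithm):

```python
from rapidfuzz import fuzz

def match_score(node_name: str, node_parent: str,
                base_name: str, base_parent: str,
                parent_weight: float = 0.3) -> float:
    """Combine node-name similarity with parent-name similarity.

    Two nodes with identical names but different parents (e.g. an
    "Office of the Inspector General" under two different departments)
    score lower than a same-name, same-parent pair.
    """
    name_sim = fuzz.token_sort_ratio(node_name, base_name)
    parent_sim = fuzz.token_sort_ratio(node_parent, base_parent)
    return (1 - parent_weight) * name_sim + parent_weight * parent_sim

# Same office name under two different departments scores below 100:
print(match_score("Office of the Inspector General", "Department of Agriculture",
                  "Office of the Inspector General", "Department of Commerce"))
```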
Note that the fuzzy matching is imperfect: some mappings may be inaccurate (although most appear correct), and some entries that should be merged certainly are not.
The final merged dataset is written in the above formats to the data/merged directory.
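The merged outputs can be consumed directly; for example, loading the nested-tree JSON in Python (the file name below is hypothetical; check data/merged for the actual output names):

```python
import json

# Hypothetical file name; see the data/merged directory for actual outputs.
with open("data/merged/merged.json") as f:
    tree = json.load(f)
```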
Requirements:
- Python 3.10+
- Poetry
Check out this repository, then from the repository root, install dependencies:

```bash
poetry install
```

See command line usage:

```bash
poetry run allusgov --help
```

Run a complete scrape and merge:

```bash
poetry run allusgov
```