
This project aims to create a comprehensive database that maps different Regulatory Compliance Standards and provides a user-friendly search functionality for easy cross-referencing. The goal is to simplify the process of navigating through various standards and enhance regulatory compliance efforts.


Regulatory-Compliance-Mapping-Database

Overview

This project aims to build a comprehensive database that maps various regulatory compliance standards, offering a powerful search function that allows users to cross-reference standards efficiently. It simplifies the compliance process, helping organizations ensure adherence to multiple regulations.

Features

  • Compliance Mapping: Seamless cross-referencing between multiple regulatory standards.
  • Search Functionality: Easy-to-use, fast lookup for specific standards.
  • Scalability: Supports future regulatory frameworks.

Installation

Prerequisites:

  1. SQLite (3.39.5)
  2. Docker (24.0.5) (Optional but recommended)
  3. Python (>=3.11)

Steps:

  1. Clone the repository:

     git clone https://github.com/DarriusChen/Regulatory-Compliance-Mapping-Database.git
     cd Regulatory-Compliance-Mapping-Database

Data sources: Regulatory compliance standards

The spreadsheets of five standards and frameworks:

  1. NIST Cybersecurity Framework (CSF)
  2. NIST SP 800-53
  3. MITRE ATT&CK (V13)
  4. MITRE D3FEND
  5. ISO 27001

To join data across the different frameworks, we need the following mapping spreadsheets:

  1. CSF ←→ SP 800-53
  2. SP 800-53 ←→ ISO27001
  3. CSF ←→ SP 800-207
  4. SP 800-53 ←→ MITRE ATT&CK (V12)
  5. SP 800-53 ←→ MITRE D3FEND
  6. MITRE D3FEND ←→ MITRE ATT&CK (Tool)
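The way these mapping spreadsheets chain together can be sketched with pandas joins. The column names and control IDs below are hypothetical placeholders for illustration, not the project's actual spreadsheet schema:

```python
import pandas as pd

# Hypothetical excerpts of two mapping spreadsheets:
# CSF <-> SP 800-53, and SP 800-53 <-> ISO 27001.
csf_to_sp = pd.DataFrame({
    "csf_id": ["ID.AM-1", "PR.AC-1"],
    "sp80053_id": ["CM-8", "AC-2"],
})
sp_to_iso = pd.DataFrame({
    "sp80053_id": ["CM-8", "AC-2"],
    "iso27001_id": ["A.8.1.1", "A.9.2.1"],
})

# Chain the two mappings on the shared SP 800-53 control ID,
# yielding an indirect CSF <-> ISO 27001 mapping.
csf_to_iso = csf_to_sp.merge(sp_to_iso, on="sp80053_id")
print(csf_to_iso)
```

The same merge pattern extends to the other spreadsheets (ATT&CK, D3FEND), with SP 800-53 acting as the common pivot in most of the pairs listed above.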

Tools used in this project

There are many alternative tools, such as Elastic Stack or Redis, but we use GraphQL in this case.

  1. Metabase is a simple and powerful analytics tool that lets anyone learn from and make decisions with their company’s data, no technical knowledge required.

  2. GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data. It provides a complete and understandable description of the data in your API, gives clients the power to ask for exactly what they need and nothing more, makes it easier to evolve APIs over time, and enables powerful developer tools.

Database

The database is for internal use, so I can't share it publicly. You can try the workflow of this project with your own database. (The original database is located under "graphql-server/src/database".)

GraphQL server

Use case: search data in the SQLite database using GraphQL (gql).
Run `npm start` under "graphql-server/" to launch the server; a playground will then be available at port 4000.
You can try whichever functions you want, type the columns to query, and set the variables to filter on.
In the example screenshot, I queried six columns ("csfFId", "csfFName", "defendTName", "attackTName", "sp80053CName", "iso27001CName") where the value of the "csfFId" column equals "ID".
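A query like the one described might be sent to the playground from Python as follows. The operation and field names here are assumptions modeled on the column names above, not the server's actual schema, so adjust them to match what the playground shows:

```python
import json

# Hypothetical GraphQL query: six columns, filtered where csfFId
# equals a given ID (field names assumed from the example above).
query = """
query Mapping($csfFId: String!) {
  mappings(csfFId: $csfFId) {
    csfFId
    csfFName
    defendTName
    attackTName
    sp80053CName
    iso27001CName
  }
}
"""
payload = {"query": query, "variables": {"csfFId": "ID"}}

# To run against the local playground started with `npm start`:
# requests.post("http://localhost:4000/", json=payload)
print(json.dumps(payload["variables"]))
```

The playground at port 4000 builds this same payload for you interactively; the snippet is only useful once you want to script the lookup.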

Metabase

Use case: Get data from the Metabase server using Python.
Prerequisites: Docker

Launch Metabase server

docker-compose up  # run this command under the metabase directory
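If you don't have the project's compose file at hand, a minimal docker-compose.yml for Metabase might look like the sketch below. The image name, port, and environment variable are the upstream Metabase defaults; the project's actual file may differ:

```yaml
services:
  metabase:
    image: metabase/metabase:latest   # official Metabase image
    ports:
      - "3000:3000"                   # Metabase UI at http://localhost:3000
    volumes:
      - ./metabase-data:/metabase-data
    environment:
      MB_DB_FILE: /metabase-data/metabase.db   # embedded H2 database file
```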

This is the edit interface, which lets you perform actions such as join, filter, or sort without writing SQL commands. (You can also view the SQL generated by what you did.) The second screenshot shows the result of joining tables using Metabase.

Metabase API

You can either directly use the requests library to get data or use the "metabase_api" library.

  1. Get data using python requests:
    Commands in terminal

    pip3 install pandas # if needed
    pip3 install requests # if needed
    pip3 install pdm # virtual environment tool
    pdm add
    

    Python codes

    import pandas as pd
    import requests
    
    
    # get the credentials
    # get the credentials
    response_id = requests.post('http://{your-host-name}/api/session',
                                headers={'Content-Type': 'application/json'},
                                json={'username': '{your-user-name}',
                                      'password': '{your-password}'})
    session_id = response_id.json()['id']
    headers = {'X-Metabase-Session': session_id}
    
    
    # get data (ask questions)
    
    # how many tables & names of those tables
    url = 'http://{your-host-name}/api/table/'
    question2 = requests.get(url=url, headers=headers).json()
    table_name = [i["display_name"] for i in question2]
    print(f"There are {len(question2)} tables.\n", f"Names of those tables: {table_name}")
    
    
    # get the all-mapping question (card)
    url_card = 'http://{your-host-name}/api/card/{your-card-id}/query/json'
    card_question = requests.post(url=url_card, headers=headers).json()
    pd.DataFrame(card_question)
    


  2. Get data using metabase_api:

    from metabase_api import Metabase_API
    import pandas as pd

    mb = Metabase_API('http://{your-host-name}', '{your-account}', '{your-password}')  # if the password is not given, it will prompt for it
    results = mb.get_card_data(card_id=5, data_format='json')  # data format can be json or csv
    pd.DataFrame(results)
    

