diff --git a/DEVELOPING.md b/DEVELOPING.md
deleted file mode 100644
index e44d470d..00000000
--- a/DEVELOPING.md
+++ /dev/null
@@ -1,56 +0,0 @@
-# Developing
-
-## Project Maintenance Notes
-
-### Updating GraphQL Engine for integration tests
-
-It's important to keep the GraphQL Engine version updated to make sure that the
-connector is working with the latest engine version. To update run,
-
-```sh
-$ nix flake lock --update-input graphql-engine-source
-```
-
-Then commit the changes to `flake.lock` to version control.
-
-A specific engine version can be specified by editing `flake.lock` instead of
-running the above command like this:
-
-```diff
-  graphql-engine-source = {
--    url = "github:hasura/graphql-engine";
-+    url = "github:hasura/graphql-engine/<commit-hash>";
-    flake = false;
-  };
-```
-
-### Updating Rust version
-
-Updating the Rust version used in the Nix build system requires two steps (in
-any order):
-
-- update `rust-overlay` which provides Rust toolchains
-- edit `rust-toolchain.toml` to specify the desired toolchain version
-
-To update `rust-overlay` run,
-
-```sh
-$ nix flake lock --update-input rust-overlay
-```
-
-If you are using direnv to automatically apply the nix dev environment note that
-edits to `rust-toolchain.toml` will not automatically update your environment.
-You can make a temporary edit to `flake.nix` (like adding a space somewhere)
-which will trigger an update, and then you can revert the change.
-
-### Updating other project dependencies
-
-You can update all dependencies declared in `flake.nix` at once by running,
-
-```sh
-$ nix flake update
-```
-
-This will update `graphql-engine-source` and `rust-overlay` as described above,
-and will also update `advisory-db` to get updated security notices for cargo
-dependencies, `nixpkgs` to get updates to openssl.
diff --git a/README.md b/README.md
index b3deac50..c10dd484 100644
--- a/README.md
+++ b/README.md
@@ -1,180 +1,188 @@
-# Hasura MongoDB Connector
-
-This repo provides a service that connects [Hasura v3][] to MongoDB databases.
-Supports MongoDB 6 or later.
-
-[Hasura v3]: https://hasura.io/
+# Hasura MongoDB Data Connector
+
+[![Docs](https://img.shields.io/badge/docs-v3.x-brightgreen.svg?style=flat)](https://hasura.io/docs/3.0/connectors/mongodb/)
+[![ndc-hub](https://img.shields.io/badge/ndc--hub-mongodb-blue.svg?style=flat)](https://hasura.io/connectors/mongodb)
+[![License](https://img.shields.io/badge/license-Apache--2.0-purple.svg?style=flat)](LICENSE.txt)
+
+This Hasura data connector connects MongoDB to your data graph, giving you an
+instant GraphQL API to access your MongoDB data. Supports MongoDB 6 or later.
+
+This connector is built using the [Rust Data Connector SDK](https://github.com/hasura/ndc-hub#rusk-sdk) and implements the [Data Connector Spec](https://github.com/hasura/ndc-spec).
+
+- [See the listing in the Hasura Hub](https://hasura.io/connectors/mongodb)
+- [Hasura V3 Documentation](https://hasura.io/docs/3.0/)
+
+Docs for the MongoDB data connector:
+
+- [Usage](https://hasura.io/docs/3.0/connectors/mongodb/)
+- [Building](./docs/building.md)
+- [Development](./docs/development.md)
+- [Docker Images](./docs/docker-images.md)
+- [Code of Conduct](./docs/code-of-conduct.md)
+- [Contributing](./docs/contributing.md)
+- [Limitations](./docs/limitations.md)
+- [Support](./docs/support.md)
+- [Security](./docs/security.md)
+
+## Features
+
+Below, you'll find a matrix of all supported features for the MongoDB data connector:
+
+| Feature                                         | Supported | Notes |
+| ----------------------------------------------- | --------- | ----- |
+| Native Queries + Logical Models                 | ✅        |       |
+| Simple Object Query                             | ✅        |       |
+| Filter / Search                                 | ✅        |       |
+| Filter by fields of Nested Objects              | ✅        |       |
+| Filter by values in Nested Arrays               | ✅        |       |
+| Simple Aggregation                              | ✅        |       |
+| Aggregate fields of Nested Objects              | ❌        |       |
+| Aggregate values of Nested Arrays               | ❌        |       |
+| Sort                                            | ✅        |       |
+| Sort by fields of Nested Objects                | ❌        |       |
+| Paginate                                        | ✅        |       |
+| Collection Relationships                        | ✅        |       |
+| Remote Relationships                            | ✅        |       |
+| Relationships Keyed by Fields of Nested Objects | ❌        |       |
+| Mutations                                       | ✅        | Provided by custom [Native Mutations](TODO) - predefined basic mutations are also planned |
+
+## Before you get Started
+
+1. The [DDN CLI](https://hasura.io/docs/3.0/cli/installation) and [Docker](https://docs.docker.com/engine/install/) installed
+2. A [supergraph](https://hasura.io/docs/3.0/getting-started/init-supergraph)
+3. A [subgraph](https://hasura.io/docs/3.0/getting-started/init-subgraph)
+
+The steps below explain how to initialize and configure a connector for local
+development on your data graph. You can learn how to deploy a connector — after
+it's been configured
+— [here](https://hasura.io/docs/3.0/getting-started/deployment/deploy-a-connector).
+
+For instructions on local development on the MongoDB connector itself see
+[development.md](./docs/development.md).
+
+## Using the MongoDB connector
+
+### Step 1: Authenticate your CLI session
+
+```bash
+ddn auth login
+```
-## Docker Images
+### Step 2: Configure the connector
-The MongoDB connector is available from the [Hasura connectors directory][].
-There are also Docker images available at:
+Once you have an initialized supergraph and subgraph, run the initialization command in interactive mode while
+providing a name for the connector in the prompt:
-https://github.com/hasura/ndc-mongodb/pkgs/container/ndc-mongodb
+```bash
+ddn connector init <connector-name> -i
+```
-The published Docker images are multi-arch, supporting amd64 and arm64 Linux.
+`<connector-name>` may be any name you choose for your particular project.
-[Hasura connectors directory]: https://hasura.io/connectors/mongodb
+#### Step 2.1: Choose `hasura/mongodb` from the list
-## Build Requirements
+#### Step 2.2: Choose a port for the connector
-The easiest way to set up build and development dependencies for this project is
-to use Nix. If you don't already have Nix we recommend the [Determinate Systems
-Nix Installer][] which automatically applies settings required by this project.
+The CLI will ask for a specific port to run the connector on. Choose a port that is not already in use or use the
+default suggested port.
-[Determinate Systems Nix Installer]: https://github.com/DeterminateSystems/nix-installer/blob/main/README.md
+#### Step 2.3: Provide env vars for the connector
-If you prefer to manage dependencies yourself you will need,
+
+| Name                   | Description                                                              |
+| ---------------------- | ------------------------------------------------------------------------ |
+| `MONGODB_DATABASE_URI` | Connection URI for the MongoDB database to connect to - see notes below |
-* Rust via Rustup
-* MongoDB `>= 6`
-* OpenSSL development files
+`MONGODB_DATABASE_URI` is a string with your database's hostname, login
+credentials, and database name. A simple example is
+`mongodb://admin:pass@localhost/my_database`. If you are using a hosted database
+on MongoDB Atlas you can get the URI from the "Data Services" tab in the project
+dashboard:
-
-## Quickstart
+
+- open the "Data Services" tab
+- click "Get connection string"
+- you will see a 3-step dialog - ignore all 3 steps, you don't need to change anything
+- copy the string that begins with `mongodb+srv://`
+
+### Step 3: Introspect the connector
-
-To run everything you need run this command to start services in Docker
-containers:
+
+Set up configuration for the connector with this command. This will introspect
+your database to infer a schema with types for your data.
-
-```sh
-$ just up
-```
+```bash
+ddn connector introspect <connector-name>
+```
-
-Next access the GraphQL interface at http://localhost:7100/
+
+Remember to use the same value for `<connector-name>` that you used in step 2.
-
-If you are using the development shell (see below) the `just` command will be
-provided automatically.
+
+This will create a tree of files that looks like this (this example is based on the
+[sample_mflix][] sample database):
-
-Run the above command again to restart after making code changes.
+
+[sample_mflix]: https://www.mongodb.com/docs/atlas/sample-data/sample-mflix/
-
-## Build
-
-To build the MongoDB connector run,
-
-```sh
-$ nix build --print-build-logs && cp result/bin/mongodb-connector <dest-path>
-```
-
-To cross-compile statically-linked binaries for x86_64 or ARM for Linux run,
-
-```sh
-$ nix build .#mongo-connector-x86_64-linux --print-build-logs && cp result/bin/mongodb-connector <dest-path>
-$ nix build .#mongo-connector-aarch64-linux --print-build-logs && cp result/bin/mongodb-connector <dest-path>
-```
+```
+app/connector
+└── <connector-name>
+    ├── compose.yaml       -- defines a docker service for the connector
+    ├── connector.yaml     -- defines connector version to fetch from hub, subgraph, env var mapping
+    ├── configuration.json -- options for configuring the connector
+    ├── schema             -- inferred types for collection documents - one file per collection
+    │   ├── comments.json
+    │   ├── movies.json
+    │   ├── sessions.json
+    │   ├── theaters.json
+    │   └── users.json
+    ├── native_mutations   -- custom mongodb commands to appear in your data graph
+    │   └── your_mutation.json
+    └── native_queries     -- custom mongodb aggregation pipelines to appear in your data graph
+        └── your_query.json
+```
-
-The Nix configuration outputs Docker images in `.tar.gz` files. You can use
-`docker load -i` to install these to the local machine's docker daemon. But it
-may be more helpful to use `skopeo` for this purpose so that you can apply
-a chosen tag, or override the image name.
+
+The `native_mutations` and `native_queries` directories will not be created
+automatically; create those directories as needed.
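+
+For example, assuming a hypothetical connector name of `my_mongodb`
+(substitute whatever name you chose in step 2):
+
+```bash
+mkdir -p app/connector/my_mongodb/native_mutations
+mkdir -p app/connector/my_mongodb/native_queries
+```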
-To build and install a Docker image locally (you can change
-`mongodb-connector:1.2.3` to whatever image name and tag you prefer),
+Feel free to edit these files to change options, or to make manual tweaks to
+inferred schema types. If inferred types do not look accurate you can edit
+`configuration.json`, change `sampleSize` to a larger number to randomly sample
+more collection documents, and run the `introspect` command again.
-
-```sh
-$ nix build .#docker --print-build-logs \
-  && skopeo --insecure-policy copy docker-archive:result docker-daemon:mongo-connector:1.2.3
-```
+### Step 4: Add your resources
-
-To build a Docker image with a cross-compiled ARM binary,
+This command will query the MongoDB connector to produce DDN metadata that
+declares resources provided by the connector in your data graph.
-
-```sh
-$ nix build .#docker-aarch64-linux --print-build-logs \
-  && skopeo --insecure-policy copy docker-archive:result docker-daemon:mongo-connector:1.2.3
-```
+```bash
+ddn connector-link add-resources
+```
-
-If you don't want to install `skopeo` you can run it through Nix, `nix run
-nixpkgs#skopeo -- --insecure-policy copy docker-archive:result docker-daemon:mongo-connector:1.2.3`
-
+The connector must be running before you run this command! If you have not
+already done so you can run the connector with `ddn run docker-start`.
-## Developing
+If you have changed the configuration described in Step 3 it is important to
+restart the connector. Running `ddn run docker-start` again will restart the
+connector if configuration has changed.
-### The development shell
+This will create and update DDN metadata files. Once again this example is based
+on the [sample_mflix][] data set:
-
-This project uses a development shell configured in `flake.nix` that automatically
-loads specific version of Rust along with all other project dependencies. The
-simplest way to start a development shell is with this command:
-
-```sh
-$ nix develop
-```
-
-If you are going to be doing a lot of work on this project it can be more
-convenient to set up [direnv][] which automatically links project dependencies
-in your shell when you cd to the project directory, and automatically reverses
-all shell modifications when you navigate to another directory. You can also set
-up direnv integration in your editor to get your editor LSP to use the same
-version of Rust that the project uses.
-
-[direnv]: https://direnv.net/
-
-### Running the Connector During Development
-
-There is a `justfile` for getting started quickly. You can use its recipes to
-run relevant services locally including the MongoDB connector itself, a MongoDB
-database server, and the Hasura GraphQL Engine. Use these commands:
-
-```sh
-just up           # start services; run this again to restart after making code changes
-just down         # stop services
-just down-volumes # stop services, and remove MongoDB database volume
-just logs         # see service logs
-just test         # run unit and integration tests
-just              # list available recipes
-```
+```
+app/metadata
+├── mongodb.hml       -- DataConnectorLink has connector connection details & database schema
+├── mongodb-types.hml -- maps connector scalar types to GraphQL scalar types
+├── Comments.hml      -- The remaining files map database collections to GraphQL object types
+├── Movies.hml
+├── Sessions.hml
+├── Theaters.hml
+└── Users.hml
+```
-
-Integration tests run in an independent set of ephemeral docker containers.
-
-The `just` command is provided automatically if you are using the development
-shell. Or you can install it yourself.
-The `justfile` delegates to arion which is a frontend for docker-compose that
-adds a layer of convenience where it can easily load agent code changes. If you
-are using the devShell you can run `arion` commands directly. They mostly work
-just like `docker-compose` commands:
-
-To start all services run:
-
-    $ arion up -d
-
-To recompile and restart the connector after code changes run:
-
-    $ arion up -d connector
-
-The arion configuration runs these services:
-
-- connector: the MongoDB data connector agent defined in this repo (port 7130)
-- mongodb
-- Hasura GraphQL Engine
-- a stubbed authentication server
-- jaeger to collect logs (see UI at http://localhost:16686/)
-
-Connect to the HGE GraphiQL UI at http://localhost:7100/
-
-Instead of a `docker-compose.yaml` configuration is found in `arion-compose.nix`.
-
-### Working with Test Data
-
-The arion configuration in the previous section preloads MongoDB with test data.
-There is corresponding OpenDDN configuration in the `fixtures/hasura/`
-directory.
-
-Preloaded databases are populated by scripts in `fixtures/mongodb/`. Any `.js`
-or `.sh` scripts added to this directory will be run when the mongodb service is
-run from a fresh state. Note that you will have to remove any existing docker
-volume to get to a fresh state. Using arion you can remove volumes by running
-`arion down --volumes`.
-
-### Running with a different MongoDB version
-
-Override the MongoDB version that arion runs by assigning a Docker image name to
-the environment variable `MONGODB_IMAGE`. For example,
+## Documentation
-
-    $ arion down --volumes # delete potentially-incompatible MongoDB data
-    $ MONGODB_IMAGE=mongo:6 arion up -d
+View the full documentation for the MongoDB connector [here](https://hasura.io/docs/3.0/connectors/mongodb/).
-
-Or run integration tests against a specific MongoDB version,
+## Contributing
-
-    $ MONGODB_IMAGE=mongo:6 just test-integration
+Check out our [contributing guide](./docs/contributing.md) for more details.
 
 ## License
 
-The Hasura MongoDB Connector is available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0) (Apache-2.0).
+The MongoDB connector is available under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
diff --git a/docs/building.md b/docs/building.md
new file mode 100644
index 00000000..ea820668
--- /dev/null
+++ b/docs/building.md
@@ -0,0 +1,58 @@
+# Building the MongoDB Data Connector
+
+## Prerequisites
+
+- [Nix][Determinate Systems Nix Installer]
+- [Docker](https://docs.docker.com/engine/install/)
+- [skopeo](https://github.com/containers/skopeo) (optional)
+
+The easiest way to set up build and development dependencies for this project is
+to use Nix. If you don't already have Nix we recommend the [Determinate Systems
+Nix Installer][] which automatically applies settings required by this project.
+
+[Determinate Systems Nix Installer]: https://github.com/DeterminateSystems/nix-installer/blob/main/README.md
+
+For more on project setup and resources provided by the development shell, see
+[development.md](./development.md).
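+
+To check that the first two prerequisites are installed (a quick sanity check;
+both commands are standard):
+
+```sh
+$ nix --version
+$ docker --version
+```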
+
+## Building
+
+To build the MongoDB connector run,
+
+```sh
+$ nix build --print-build-logs && cp result/bin/mongodb-connector <dest-path>
+```
+
+To cross-compile statically-linked binaries for x86_64 or ARM for Linux run,
+
+```sh
+$ nix build .#mongo-connector-x86_64-linux --print-build-logs && cp result/bin/mongodb-connector <dest-path>
+$ nix build .#mongo-connector-aarch64-linux --print-build-logs && cp result/bin/mongodb-connector <dest-path>
+```
+
+The Nix configuration outputs Docker images in `.tar.gz` files. You can use
+`docker load -i` to install these to the local machine's docker daemon. But it
+may be more helpful to use `skopeo` for this purpose so that you can apply
+a chosen tag, or override the image name.
+
+To build and install a Docker image locally (you can change
+`mongodb-connector:1.2.3` to whatever image name and tag you prefer),
+
+```sh
+$ nix build .#docker --print-build-logs \
+  && skopeo --insecure-policy copy docker-archive:result docker-daemon:mongo-connector:1.2.3
+```
+
+To build a Docker image with a cross-compiled ARM binary,
+
+```sh
+$ nix build .#docker-aarch64-linux --print-build-logs \
+  && skopeo --insecure-policy copy docker-archive:result docker-daemon:mongo-connector:1.2.3
+```
+
+If you don't want to install `skopeo` you can run it through Nix: `nix run
+nixpkgs#skopeo -- --insecure-policy copy docker-archive:result docker-daemon:mongo-connector:1.2.3`
+
+## Pre-built Docker Images
+
+See [docker-images.md](./docker-images.md).
diff --git a/docs/code-of-conduct.md b/docs/code-of-conduct.md
new file mode 100644
index 00000000..03c982fd
--- /dev/null
+++ b/docs/code-of-conduct.md
@@ -0,0 +1,60 @@
+# Hasura GraphQL Engine Community Code of Conduct
+
+## Our Pledge
+
+In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to make
+participation in our project and our community a harassment-free experience for everyone, regardless of age, body size,
+disability, ethnicity, sex characteristics, gender identity and expression, level of experience, education,
+socio-economic status, nationality, personal appearance, race, religion, or sexual identity and orientation.
+
+## Our Standards
+
+Examples of behavior that contributes to creating a positive environment include:
+
+* Using welcoming, inclusive and gender-neutral language (example: instead of "Hey guys", you could use "Hey folks" or
+  "Hey all")
+* Being respectful of differing viewpoints and experiences
+* Gracefully accepting constructive criticism
+* Focusing on what is best for the community
+* Showing empathy towards other community members
+
+Examples of unacceptable behavior by participants include:
+
+* The use of sexualized language or imagery and unwelcome sexual attention or advances
+* Trolling, insulting/derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or electronic address, without explicit permission
+* Other conduct which could reasonably be considered inappropriate in a professional setting
+
+## Our Responsibilities
+
+Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take
+appropriate and fair corrective action in response to any instances of unacceptable behavior.
+
+Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits,
+issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any
+contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
+
+## Scope
+
+This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the
+project or its community. Examples of representing a project or community include using an official project e-mail
+address, posting via an official social media account, or acting as an appointed representative at an online or offline
+event. Representation of a project may be further defined and clarified by project maintainers.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at
+community@hasura.io. All complaints will be reviewed and investigated and will result in a response that is deemed
+necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to
+the reporter of an incident. Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent
+repercussions as determined by other members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at
+https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
+
+[homepage]: https://www.contributor-covenant.org
\ No newline at end of file
diff --git a/docs/contributing.md b/docs/contributing.md
new file mode 100644
index 00000000..bd5036b8
--- /dev/null
+++ b/docs/contributing.md
@@ -0,0 +1,33 @@
+# Contributing
+
+_First_: if you feel insecure about how to start contributing, feel free to ask us on our
+[Discord channel](https://discordapp.com/invite/hasura) in the #contrib channel. You can also just go ahead with your contribution and we'll give you feedback. Don't worry - the worst that can happen is that you'll be politely asked to change something. We appreciate any contributions, and we don't want a wall of rules to stand in the way of that.
+
+However, for those individuals who want a bit more guidance on the best way to contribute to the project, read on. This document will cover what we're looking for. By addressing the points below, the chances that we can quickly merge or address your contributions will increase.
+
+## 1. Code of conduct
+
+Please follow our [Code of conduct](./code-of-conduct.md) in the context of any contributions made to Hasura.
+
+## 2. CLA
+
+For all contributions, a CLA (Contributor License Agreement) needs to be signed
+[here](https://cla-assistant.io/hasura/ndc-mongodb) before (or after) the pull request has been submitted. A bot will prompt contributors to sign the CLA via a pull request comment, if necessary.
+
+## 3. Ways of contributing
+
+### Reporting an Issue
+
+- Make sure you test against the latest released cloud version. It is possible that we may have already fixed the bug you're experiencing.
+- Provide steps to reproduce the issue, including database (e.g. MongoDB) version and Hasura DDN version.
+- Please include logs, if relevant.
+- Create an [issue](https://github.com/hasura/ndc-mongodb/issues/new/choose).
+
+### Working on an issue
+
+- We use the [fork-and-branch git workflow](https://blog.scottlowe.org/2015/01/27/using-fork-branch-git-workflow/).
+- Please make sure there is an issue associated with the work that you're doing.
+- If you're working on an issue, please comment that you are doing so to prevent duplicate work by others.
+- See [`development.md`](./development.md) for instructions on how to build, run, and test the connector.
+- If possible format code with `rustfmt`. If your editor has a code formatting feature it probably does the right thing.
+- If you're up to it we welcome updates to `CHANGELOG.md`. Notes on the change in your PR should go in the "Unreleased" section.
diff --git a/docs/development.md b/docs/development.md
new file mode 100644
index 00000000..31d9adbe
--- /dev/null
+++ b/docs/development.md
@@ -0,0 +1,353 @@
+# MongoDB Data Connector Development
+
+These are instructions for building and running the MongoDB Data Connector, and
+supporting services, locally for the purpose of working on the connector itself.
+
+This repo is set up to run all necessary services for interactive and
+integration testing in docker containers with pre-populated MongoDB databases
+with just one command, `just up`, if you have the prerequisites installed.
+Repeating that command restarts services as necessary to apply code or
+configuration changes.
+
+## Prerequisites
+
+- [Nix][Determinate Systems Nix Installer]
+- [Docker](https://docs.docker.com/engine/install/)
+- [Just](https://just.systems/man/en/) (optional)
+
+The easiest way to set up build and development dependencies for this project is
+to use Nix. If you don't already have Nix we recommend the [Determinate Systems
+Nix Installer][] which automatically applies settings required by this project.
+
+[Determinate Systems Nix Installer]: https://github.com/DeterminateSystems/nix-installer/blob/main/README.md
+
+You may optionally install `just`. If you are using a Nix develop shell it
+provides `just` automatically. (See "The development shell" below.)
+
+If you prefer to manage dependencies yourself you will need,
+
+* Rust via Rustup
+* MongoDB `>= 6`
+* OpenSSL development files
+
+## Quickstart
+
+To run everything you need run this command to start services in Docker
+containers:
+
+```sh
+$ just up
+```
+
+Next access the GraphQL interface at http://localhost:7100/
+
+Run the above command again to restart any services that are affected by code
+changes or configuration changes.
+
+## The development shell
+
+This project uses a development shell configured in `flake.nix` that automatically
+loads a specific version of Rust along with all other project dependencies. The
+development shell provides:
+
+- a Rust toolchain: `cargo`, `cargo-clippy`, `rustc`, `rustfmt`, etc.
+- `cargo-insta` for reviewing test snapshots
+- `just`
+- `mongosh`
+- `arion` which is a Nix frontend for docker-compose
+- The DDN CLI
+- The MongoDB connector plugin for the DDN CLI which is automatically rebuilt after code changes in this repo (can be run directly with `mongodb-cli-plugin`)
+
+Development shell features are specified in the `devShells` definition in
+`flake.nix`. You can add dependencies by [looking up the Nix package
+name](https://search.nixos.org/), and adding the package name to the
+`nativeBuildInputs` list.
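+
+For example, you can also search for a package name from the command line
+instead of the website (`jq` here is a hypothetical addition):
+
+```sh
+$ nix search nixpkgs jq
+```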
+
+The simplest way to start a development shell is with this command:
+
+```sh
+$ nix develop
+```
+
+If you are going to be doing a lot of work on this project it can be more
+convenient to set up [direnv][] which automatically links project dependencies
+in your shell when you cd to the project directory, and automatically reverses
+all shell modifications when you navigate to another directory. You can also set
+up direnv integration in your editor to get your editor LSP to use the same
+version of Rust that the project uses.
+
+[direnv]: https://direnv.net/
+
+## Running and Testing
+
+There is a `justfile` for getting started quickly. You can use its recipes to
+run relevant services locally including the MongoDB connector itself, a MongoDB
+database server, and the Hasura GraphQL Engine. Use these commands:
+
+```sh
+just up           # start services; run this again to restart after making code changes
+just down         # stop services
+just down-volumes # stop services, and remove MongoDB database volume
+just logs         # see service logs
+just test         # run unit and integration tests
+just              # list available recipes
+```
+
+Integration tests run in an independent set of ephemeral docker containers.
+
+The `just` command is provided automatically if you are using the development
+shell. Or you can install it yourself.
+
+The typical workflow for interactive testing (testing by hand) is to interact
+with the system through the Hasura GraphQL Engine's GraphQL UI at
+http://localhost:7100/. You can get insight into what the connector is doing
+by reading the logs, which you can access by running `just logs`, or via the
+Jaeger UI at http://localhost:16686/.
+
+### Running with a different MongoDB version
+
+Override the MongoDB version by assigning a Docker image name to the environment
+variable `MONGODB_IMAGE`. For example,
+
+    $ just down-volumes # delete potentially-incompatible MongoDB data
+    $ MONGODB_IMAGE=mongo:6 arion up -d
+
+Or run integration tests against a specific MongoDB version,
+
+    $ MONGODB_IMAGE=mongo:6 just test-integration
+
+There is a predefined just recipe that runs integration tests using MongoDB
+versions 5, 6, and 7. There is some functionality that does not work in MongoDB
+v5 so some tests are skipped when running that MongoDB version.
+
+### Where to find the tests
+
+Unit tests are found in conditionally-compiled test modules in the same Rust
+source code files as the code that they test.
+
+Integration tests are found in `crates/integration-tests/src/tests/`.
+
+### Writing Integration Tests
+
+Integration tests are run with `just test-integration`. Typically an integration
+test runs a GraphQL query, and compares the response to a saved snapshot. Here is
+an example:
+
+```rust
+#[tokio::test]
+async fn filters_by_date() -> anyhow::Result<()> {
+    assert_yaml_snapshot!(
+        graphql_query(
+            r#"
+                query ($dateInput: Date) {
+                  movies(
+                    order_by: {id: Asc},
+                    where: {released: {_gt: $dateInput}}
+                  ) {
+                    title
+                    released
+                  }
+                }
+            "#
+        )
+        .variables(json!({ "dateInput": "2016-03-01T00:00Z" }))
+        .run()
+        .await?
+    );
+    Ok(())
+}
+```
+
+On the first test run after a test is created or changed the test runner will
+create a new snapshot file with the GraphQL response. To make the test pass it
+is necessary to approve the snapshot (if the response is correct). To do that
+run,
+
+```sh
+$ cargo insta review
+```
+
+Approved snapshot files must be checked into version control.
+
+Please be aware that MongoDB query results do not have consistent ordering. It
+is important to have `order_by` clauses in every test that produces more than
+one result to explicitly order everything. Otherwise tests will fail when the
+order of a response does not match the exact order of data in an approved
+snapshot.
+
+## Building
+
+For instructions on building binaries or Docker images see [building.md](./building.md).
+
+## Working with Test Data
+
+### Predefined MongoDB databases
+
+This repo includes fixture data and configuration to provide a fully-configured
+data graph for testing.
+
+There are three provided MongoDB databases. Development services run three
+connector instances to provide access to each of those. Listing these by Docker
+Compose service names:
+
+- `connector` serves the [sample_mflix][] database
+- `connector-chinook` serves a version of the [chinook][] sample database that has been adapted for MongoDB
+- `connector-test-cases` serves the test_cases database - if you want to set up data for integration tests put it in this database
+
+[sample_mflix]: https://www.mongodb.com/docs/atlas/sample-data/sample-mflix/
+[chinook]: https://github.com/lerocha/chinook-database
+
+Those databases are populated by scripts in `fixtures/mongodb/`. There is
+a subdirectory with fixture data for each database.
+
+Integration tests use an ephemeral MongoDB container so a fresh database will be
+populated with those fixtures on every test run.
+
+Interactive services (the ones you get with `just up`) use a persistent volume
+for MongoDB databases. To get updated data after changing fixtures, or any time
+you want to get a fresh database, you will have to delete the volume and
+recreate the MongoDB container. To do that run,
+
+```sh
+$ just down-volumes
+$ just up
+```
+
+### Connector Configuration
+
+If you followed the Quickstart in [README.md](../README.md) then you got
+connector configuration in your data graph project in
+`app/connector/<connector-name>/`. This repo provides predefined connector
+configurations so you don't have to create your own during development.
+
+As mentioned in the previous section development test services run three MongoDB
+connector instances. There is a separate configuration directory for each
+instance. Those are in,
+
+- `fixtures/hasura/sample_mflix/connector/`
+- `fixtures/hasura/chinook/connector/`
+- `fixtures/hasura/test_cases/connector/`
+
+Connector instances are automatically restarted with updated configuration when
+you run `just up`.
+
+If you make changes to MongoDB databases you may want to run connector
+introspection to automatically update configurations. See the specific
+instructions in the [fixtures readme](../fixtures/hasura/README.md).
+
+### DDN Metadata
+
+The Hasura GraphQL Engine must be configured with DDN metadata which is
+configured in `.hml` files. Once again this repo provides configuration in
+`fixtures/hasura/`.
+
+If you have made changes to MongoDB fixture data or to connector configurations
+you may want to update metadata using the DDN CLI by querying connectors.
+Connectors must be restarted with updated configurations before you do this. For
+specific instructions see the [fixtures readme](../fixtures/hasura/README.md).
+
+The Engine will automatically restart with updated configuration after any
+changes to `.hml` files when you run `just up`.
+
+## Docker Compose Configuration
+
+The [`justfile`](../justfile) recipes delegate to arion which is a frontend for
+docker-compose that adds a layer of convenience where it can easily load
+connector code changes. If you are using the development shell you can run
+`arion` commands directly. They mostly work just like `docker-compose` commands:
+
+To start all services run:
+
+    $ arion up -d
+
+To recompile and restart the connector after code changes run:
+
+    $ arion up -d connector
+
+The arion configuration runs these services:
+
+- connector: the MongoDB data connector agent defined in this repo serving the sample_mflix database (port 7130)
+- two more instances of the connector - one connected to the chinook sample database, the other to a database of ad-hoc data that is queried by integration tests (ports 7131 & 7132)
+- mongodb (port 27017)
+- Hasura GraphQL Engine (HGE) (port 7100)
+- a stubbed authentication server
+- jaeger to collect traces (see UI at http://localhost:16686/)
+
+Connect to the HGE GraphiQL UI at http://localhost:7100/
+
+Instead of a `docker-compose.yaml`, configuration is found in
+`arion-compose.nix`. That file imports from modular configurations in the
+`arion-compose/` directory. Here is a quick breakdown of those files:
+
+```
+arion-compose.nix -- entrypoint for interactive services configuration
+arion-pkgs.nix    -- defines the `pkgs` variable that is passed as an argument to other arion files
+arion-compose
+├── default.nix                   -- arion-compose.nix delegates to the function exported from this file
+├── integration-tests.nix         -- entrypoint for integration test configuration
+├── integration-test-services.nix -- high-level service configurations used by interactive services, and by integration tests
+├── fixtures
+│   └── mongodb.nix               -- provides a dictionary of MongoDB fixture data directories
+└── services -- each file here exports a function that configures a specific service
+    ├── connector.nix         -- configures the MongoDB connector with overridable settings
+    ├── dev-auth-webhook.nix  -- stubbed authentication server
+    ├── engine.nix            -- Hasura GraphQL Engine
+    ├── integration-tests.nix -- integration test runner
+    ├── jaeger.nix            -- OpenTelemetry trace collector
+    └── mongodb.nix           -- MongoDB database server
+```
+
+## Project Maintenance Notes
+
+### Updating GraphQL Engine for integration tests
+
+It's important to keep the GraphQL Engine version updated to make sure that the
+connector is working with the latest engine version. To update run,
+
+```sh
+$ nix flake lock --update-input graphql-engine-source
+```
+
+Then commit the changes to `flake.lock` to version control.
+
+A specific engine version can be specified by editing `flake.lock` instead of
+running the above command like this:
+
+```diff
+  graphql-engine-source = {
+-    url = "github:hasura/graphql-engine";
++    url = "github:hasura/graphql-engine/<commit-hash>";
+    flake = false;
+  };
+```
+
+### Updating Rust version
+
+Updating the Rust version used in the Nix build system requires two steps (in
+any order):
+
+- update `rust-overlay` which provides Rust toolchains
+- edit `rust-toolchain.toml` to specify the desired toolchain version
+
+To update `rust-overlay` run,
+
+```sh
+$ nix flake lock --update-input rust-overlay
+```
+
+If you are using direnv to automatically apply the nix dev environment note that
+edits to `rust-toolchain.toml` will not automatically update your environment.
+You can make a temporary edit to `flake.nix` (like adding a space somewhere)
+which will trigger an update, and then you can revert the change.
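+
+A minimal sketch of that workaround, assuming you use git to revert the edit:
+
+```sh
+$ echo >> flake.nix          # temporary edit: append a blank line to trigger an update
+$ git checkout -- flake.nix  # revert the edit once the new toolchain has loaded
+```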
+
+### Updating other project dependencies
+
+You can update all dependencies declared in `flake.nix` at once by running,
+
+```sh
+$ nix flake update
+```
+
+This will update `graphql-engine-source` and `rust-overlay` as described above,
+and will also update `advisory-db` to get updated security notices for cargo
+dependencies, and `nixpkgs` to get updates to OpenSSL.
diff --git a/docs/docker-images.md b/docs/docker-images.md
new file mode 100644
index 00000000..3a4acdce
--- /dev/null
+++ b/docs/docker-images.md
@@ -0,0 +1,13 @@
+# MongoDB Data Connector Docker Images
+
+The DDN CLI can automatically create a Docker configuration for you. But if you
+want to access connector Docker images directly they are available as
+`ghcr.io/hasura/ndc-mongodb`. For example,
+
+```sh
+$ docker run ghcr.io/hasura/ndc-mongodb:v1.1.0
+```
+
+The Docker images are multi-arch, supporting amd64 and arm64 Linux.
+
+A listing of available image versions can be seen [here](https://github.com/hasura/ndc-mongodb/pkgs/container/ndc-mongodb).
diff --git a/docs/limitations.md b/docs/limitations.md
new file mode 100644
index 00000000..c2349888
--- /dev/null
+++ b/docs/limitations.md
@@ -0,0 +1,5 @@
+# Limitations of the MongoDB Data Connector
+
+- Filtering and sorting by scalar values in arrays is not yet possible. (APIPG-294)
+- Fields with names that begin with a dollar sign ($) or that contain dots (.) currently cannot be selected. (NDC-432)
+- Referencing relations in mutation requests does not work. (NDC-157)
diff --git a/docs/pull_request_template.md b/docs/pull_request_template.md
deleted file mode 100644
index 22eeddf0..00000000
--- a/docs/pull_request_template.md
+++ /dev/null
@@ -1,34 +0,0 @@
-## Describe your changes
-
-## Issue ticket number and link
-
-_(if you have one)_
-
-## Changelog
-
-- Add a changelog entry (in the "Changelog entry" section below) if the changes in this PR have any user-facing impact.
-- If no changelog is required ignore/remove this section and add a `no-changelog-required` label to the PR.
-
-### Type
-_(Select only one. In case of multiple, choose the most appropriate)_
-- [ ] highlight
-- [ ] enhancement
-- [ ] bugfix
-- [ ] behaviour-change
-- [ ] performance-enhancement
-- [ ] security-fix
-
-
-### Changelog entry
-
-
-_Replace with changelog entry_
-
-
-
-
diff --git a/docs/security.md b/docs/security.md
new file mode 100644
index 00000000..495d8f2d
--- /dev/null
+++ b/docs/security.md
@@ -0,0 +1,33 @@
+# Security
+
+## Reporting Vulnerabilities
+
+We’re extremely grateful for security researchers and users that report vulnerabilities to the Hasura Community. All reports are thoroughly investigated by a set of community volunteers and the Hasura team.
+
+To report a security issue, please email us at [security@hasura.io](mailto:security@hasura.io) with all the details, attaching all necessary information.
+
+### When Should I Report a Vulnerability?
+
+- You think you have discovered a potential security vulnerability in the Hasura GraphQL Engine or related components.
+- You are unsure how a vulnerability affects the Hasura GraphQL Engine.
+- You think you discovered a vulnerability in another project that Hasura GraphQL Engine depends on (e.g. Heroku, Docker, etc.).
+- You want to report any other security risk that could potentially harm Hasura GraphQL Engine users.
+
+### When Should I NOT Report a Vulnerability?
+
+- You need help tuning Hasura GraphQL Engine components for security.
+- You need help applying security-related updates.
+- Your issue is not security related.
+
+## Security Vulnerability Response
+
+Each report is acknowledged and analyzed by the project's maintainers and the security team within 3 working days.
+
+The reporter will be kept updated at every stage of the issue's analysis and resolution (triage -> fix -> release).
+
+## Public Disclosure Timing
+
+A public disclosure date is negotiated by the Hasura product security team and the bug submitter. We prefer to fully disclose the bug as soon as possible once a user mitigation is available. It is reasonable to delay disclosure when the bug or the fix is not yet fully understood, the solution is not well-tested, or for vendor coordination. The timeframe for disclosure is from immediate (especially if it's already publicly known) to a few weeks. We expect the time-frame between a report to a public disclosure to typically be in the order of 7 days. The Hasura GraphQL Engine maintainers and the security team will take the final call on setting a disclosure date.
+
+(Some sections have been inspired and adapted from
+[https://github.com/kubernetes/website/blob/master/content/en/docs/reference/issues-security/security.md](https://github.com/kubernetes/website/blob/master/content/en/docs/reference/issues-security/security.md).)
\ No newline at end of file
diff --git a/docs/support.md b/docs/support.md
new file mode 100644
index 00000000..c6e0c20c
--- /dev/null
+++ b/docs/support.md
@@ -0,0 +1,140 @@
+# Support & Troubleshooting
+
+The documentation and community will help you troubleshoot most issues. If you have encountered a bug or need to get in touch with us, you can contact us using one of the following channels:
+* Support & feedback: [Discord](https://discord.gg/hasura)
+* Issue & bug tracking: [GitHub issues](https://github.com/hasura/ndc-mongodb/issues)
+* Follow product updates: [@HasuraHQ](https://twitter.com/hasurahq)
+* Talk to us on our [website chat](https://hasura.io)
+
+We are committed to fostering an open and welcoming environment in the community. Please see the [Code of Conduct](code-of-conduct.md).
+
+If you want to report a security issue, please [read this](security.md).
+
+## Frequently Asked Questions
+
+If your question is not answered here please also check
+[limitations](./limitations.md).
+
+### Why am I getting strings instead of numbers?
+
+MongoDB stores data in [BSON][] format which has several numeric types:
+
+- `double`, 64-bit floating point
+- `decimal`, 128-bit floating point
+- `int`, 32-bit integer
+- `long`, 64-bit integer
+
+[BSON]: https://bsonspec.org/
+
+But GraphQL uses JSON so data must be converted from BSON to JSON in GraphQL
+responses. Some JSON parsers cannot precisely decode the `decimal` and `long`
+types. Specifically, in JavaScript, running `JSON.parse(data)` will silently
+convert `decimal` and `long` values to 64-bit floats which causes loss of
+precision.
+
+If you get a `long` value that is larger than `Number.MAX_SAFE_INTEGER`
+(9,007,199,254,740,991) but less than `Number.MAX_VALUE` (1.8e308) then
+you will get a number, but it might be silently changed to a different number
+than the one you should have gotten.
+
+Some databases use `long` values as IDs. If you lose precision in one of these
+values you don't just get a calculation that is a little off; you might end up
+with access to the wrong records.
+
+There is a similar problem when converting a 128-bit float to a 64-bit float.
+You'll get a number, but not exactly the right one.
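+
+To see the precision loss concretely (a quick check, assuming Node.js is
+available; 9007199254740993 is 2^53 + 1, which has no exact 64-bit float
+representation):
+
+```sh
+$ node -e 'console.log(JSON.parse("{\"n\": 9007199254740993}").n)'
+9007199254740992
+```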
+
+Serializing `decimal` and `long` as strings prevents bugs that might be
+difficult to detect in environments like JavaScript.
+
+### Why am I getting data in this weird format?
+
+You might encounter a case where you expect a simple value in GraphQL responses,
+like a number or a date, but you get a weird object wrapper. For example you
+might expect,
+
+```json
+{ "total": 3.0 }
+```
+
+But actually get:
+
+```json
+{ "total": { "$numberDouble": "3.0" } }
+```
+
+That weird format is [Extended JSON][]. MongoDB stores data in [BSON][] format
+which includes data types that don't exist in JSON. But GraphQL responses use
+JSON. Extended JSON is a means of encoding BSON data with inline type
+annotations. That provides a semi-standardized way to express, for example, date
+values in JSON.
+
+[Extended JSON]: https://www.mongodb.com/docs/manual/reference/mongodb-extended-json/
+
+In cases where the specific type of a document field is known in your data graph,
+the MongoDB connector serializes values for that field using "simple" JSON which
+is probably what you expect. In these cases the type of each field is known
+out-of-band so the inline type annotations that you would get from Extended JSON
+are not necessary. But in cases where the data graph does not have a specific type
+for a field (which we represent using the ExtendedJSON type in the data graph)
+we serialize using Extended JSON instead to provide type information which might
+be important for you.
+
+What often happens is that when the `ddn connector introspect` command samples
+your database to infer types for each collection document it encounters
+different types of data under the same field name in different documents. DDN
+does not support union types so we can't configure a specific type for these
+cases. Instead the data schema that gets written uses the ExtendedJSON type for
+those fields.
+
+You have two options:
+
+#### configure a precise type for the field
+
+Edit your connector configuration to change the type of a field in
+`schema/<collection-name>.json` from `{ "type": "extendedJSON" }` to something
+specific like `{ "type": { "scalar": "double" } }`.
+
+#### change Extended JSON serialization settings
+
+In your connector configuration edit `configuration.json` and change the setting
+`serializationOptions` from `canonical` to `relaxed`. Extended JSON has two
+serialization flavors: "relaxed" mode outputs JSON-native types like numbers as
+plain values without inline type annotations. You will still see type
+annotations on non-JSON-native types like dates.
+
+## How Do I ...?
+
+### select an entire object without listing its fields
+
+GraphQL requires that you explicitly list all of the object fields to include in
+a response. If you want to fetch entire objects the MongoDB connector provides
+a workaround. The connector defines an ExtendedJSON type that represents
+arbitrary BSON values. In GraphQL terms ExtendedJSON is a "scalar" type, so when
+you select a field of that type, instead of listing nested fields, you get the
+entire structure, whether it's an object, an array, or anything else.
+
+Edit the schema in your data connector configuration. (There is a schema
+configuration file for each collection in the `schema/` directory.) Change the
+object field you want to fetch from an object type like this one:
+
+```json
+{ "type": { "object": "<object-type-name>" } }
+```
+
+Change the type to `extendedJSON`:
+
+```json
+{ "type": "extendedJSON" }
+```
+
+After restarting the connector you will also need to update metadata to
+propagate the type change by running the appropriate `ddn connector-link`
+command, as sketched below.
+
+This is an all-or-nothing change: if a field type is ExtendedJSON you cannot
+select a subset of fields. You will always get the entire structure. Also note
+that fields of type ExtendedJSON are serialized according to the [Extended
+JSON][] spec. (See the section above, "Why am I getting data in this weird
+format?")
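+
+A minimal sketch of that update, reusing the commands shown in the README (run
+from your data graph project; exact flags may vary with your DDN CLI version):
+
+```sh
+$ ddn run docker-start              # restart the connector with the changed configuration
+$ ddn connector-link add-resources  # then, from another shell, update DDN metadata
+```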