diff --git a/docs/indexer/build/dynamicdatasources.md b/docs/indexer/build/dynamicdatasources.md index 4e54177f4c4..2ffd4691d28 100644 --- a/docs/indexer/build/dynamicdatasources.md +++ b/docs/indexer/build/dynamicdatasources.md @@ -88,7 +88,7 @@ async function handleNewTradingPair(event: MoonbeamEvent): Promise { } async function handleLiquidityAdded(event: MoonbeamEvent): Promise { - /* mapping fuction implementation here */ + /* mapping function implementation here */ } ``` diff --git a/docs/indexer/build/graph-migration.md b/docs/indexer/build/graph-migration.md index e6a03e96bc0..33a44b5b4fd 100644 --- a/docs/indexer/build/graph-migration.md +++ b/docs/indexer/build/graph-migration.md @@ -31,7 +31,7 @@ Reach out to our team at [professionalservices@subquery.network](mailto:professi ### Recommended Migration Steps -This is the recommended proccess that we use at SubQuery whenever we migrate projects from a SubGraph to SubQuery: +This is the recommended process that we use at SubQuery whenever we migrate projects from a SubGraph to SubQuery: 1. [Initialise a new SubQuery project](./introduction.md) for the same network using the `subql init` command. When migrating an existing SubGraph, it's not necessary to perform code scaffolding. It also ensures that you are using TS `strict` mode, which will help you identify any potential bugs. 2. Copy over your `schema.graphql` and replace any `Bytes` and `BigDecimals`. [More info](#graphql-schema). @@ -41,7 +41,7 @@ This is the recommended proccess that we use at SubQuery whenever we migrate pro 6. Copy over the `mappings` directory, and then go through one by one to migrate them across. The key differences: - Imports will need to be updated - Store operations are asynchronous, e.g. `.load(id)` should be replaced by `await .get(id)` and `.save()` to `await .save()` (note the `await`). - - With strict mode, you must construct new entites with all the required properties. 
You may want to replace `new (id)` with `.create({ ... })` + - With strict mode, you must construct new entities with all the required properties. You may want to replace `new (id)` with `.create({ ... })` - [More info](#mapping). 7. Test and update your clients to follow the GraphQL api differences and take advantage of additional features. [More info](#graphql-query-differences) @@ -231,7 +231,7 @@ dataSources: The `codegen` command is also intentionally similar between SubQuery and SubGraphs -All GraphQL entities will have generated entity classes that provide type-safe entity loading, read and write access to entity fields - see more about this process in [the GraphQL Schema](../build/graphql.md). All entites can be imported from the following directory: +All GraphQL entities will have generated entity classes that provide type-safe entity loading, read and write access to entity fields - see more about this process in [the GraphQL Schema](../build/graphql.md). All entities can be imported from the following directory: ```ts import { Gravatar } from "../types"; diff --git a/docs/indexer/build/graphql.md b/docs/indexer/build/graphql.md index 0dcebe805d7..cf3f0217053 100644 --- a/docs/indexer/build/graphql.md +++ b/docs/indexer/build/graphql.md @@ -9,7 +9,7 @@ The `schema.graphql` file outlines the various GraphQL schemas. The structure of 3. [Entity Relationships](#entity-relationships): An entity often has nested relationships with other entities. Setting the field value to another entity name will define a relationship between these two entities. 4. [Indexing](#indexing-by-non-primary-key-field): Enhance query performance by implementing the @index annotation on a non-primary-key field. 
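The store-operation and strict-mode differences called out in the migration steps above can be sketched as follows. This is a hypothetical sketch: the `Gravatar` class below is a minimal stand-in for a generated entity class so the example is self-contained — in a real project the class is imported from the generated `../types` directory, and the handler signature would match your chain's event types.

```typescript
// Hypothetical stand-in for a generated entity class, shown only to
// illustrate the API shape; in a real project `Gravatar` comes from "../types".
class Gravatar {
  private static store = new Map<string, Gravatar>();

  constructor(public id: string, public displayName: string) {}

  // SubGraph's synchronous `Gravatar.load(id)` becomes an awaited `get`
  static async get(id: string): Promise<Gravatar | undefined> {
    return Gravatar.store.get(id);
  }

  // With TS strict mode, construct entities with all required properties
  static create(props: { id: string; displayName: string }): Gravatar {
    return new Gravatar(props.id, props.displayName);
  }

  // `.save()` is asynchronous in SubQuery, so it must be awaited
  async save(): Promise<void> {
    Gravatar.store.set(this.id, this);
  }
}

export async function handleUpdatedGravatar(
  id: string,
  displayName: string
): Promise<void> {
  let gravatar = await Gravatar.get(id); // was: Gravatar.load(id)
  if (!gravatar) {
    gravatar = Gravatar.create({ id, displayName }); // was: new Gravatar(id)
  } else {
    gravatar.displayName = displayName;
  }
  await gravatar.save(); // was: gravatar.save() without await
}
```

The inline comments mark the SubGraph-era call each line replaces; forgetting the `await` on `get` or `save` is the most common migration bug, since the handler will then read stale or missing data.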
-Here's an example of what your GraphQL Here is an example of a schema which implements all of these recomendations, as well a relationship of many-to-many: +Here is an example of a schema which implements all of these recommendations, as well as a many-to-many relationship: ::: tip @@ -104,7 +104,7 @@ We currently support the following scalar types: - `Boolean` - `` for nested relationship entities, you might use the defined entity's name as one of the fields. Please see in [Entity Relationships](graphql.md#entity-relationships). - `JSON` can alternatively store structured data, please see [JSON type](graphql.md#json-type) -`` types are a special kind of enumerated scalar that is restricted to a particular set of allowed values. Please see [Graphql Enum](https://graphql.org/learn/schema/#enumeration-types) +`` types are a special kind of enumerated scalar that is restricted to a particular set of allowed values. Please see [GraphQL Enum](https://graphql.org/learn/schema/#enumeration-types) ### Naming Constraints @@ -205,7 +205,7 @@ Composite indexes work just like regular indexes, except they provide even faste For example, a composite index on columns `col_a` and `col_b` will significantly help when there are queries that filter across both (e.g. `WHERE col_a=x AND col_b=y`). -You can create composite indexes though the `@compositeIndexes` annotation on an entity, and you can specify as many as you want. +You can create composite indexes through the `@compositeIndexes` annotation on an entity, and you can specify as many as you want. ```graphql type Account @entity { @@ -430,7 +430,7 @@ type User @entity { ### JSON field indexes -By default we automatically add indexes to JSON fields to improve querying performance. This can be disabled by specifying the `indexed: false` argument on the `jsonField` directive like so.
This is useful if you are using alternative databases like Cockroach DB, as there can be some perfomance issues with inserting JSON data with an index (Cockroach does not support gin index and Jsonb data). +By default we automatically add indexes to JSON fields to improve querying performance. This can be disabled by specifying the `indexed: false` argument on the `jsonField` directive like so. This is useful if you are using alternative databases like Cockroach DB, as there can be some performance issues with inserting JSON data with an index (Cockroach does not support gin index and Jsonb data). ```graphql type AddressDetail @jsonField(indexed: false) { @@ -446,7 +446,7 @@ The drawback of using JSON types is a slight impact on query efficiency when fil However, the impact is still acceptable in our query service. Here is an example of how to use the `contains` operator in the GraphQL query on a JSON field to find the first 5 users who own a phone number that contains '0064'. ```graphql -#To find the the first 5 users own phone numbers contains '0064'. +#To find the first 5 users who own phone numbers containing '0064'. query { user(first: 5, filter: { contactCard: { contains: [{ phone: "0064" }] } }) { diff --git a/docs/indexer/build/introduction.md b/docs/indexer/build/introduction.md index 420ed90d3b9..d452d2ce40b 100644 --- a/docs/indexer/build/introduction.md +++ b/docs/indexer/build/introduction.md @@ -44,9 +44,9 @@ Scaffolding saves time during SubQuery project creation by automatically generat ### When Initialising New SubQuery Projects -When you are initalising a new project using the `subql init` command, SubQuery will give you the option to set up a scaffolded SubQuery project based on your JSON ABI. +When you are initialising a new project using the `subql init` command, SubQuery will give you the option to set up a scaffolded SubQuery project based on your JSON ABI.
-If you have select an compatiable network type (EVM), it will prompt +If you select a compatible network type (EVM), it will prompt: ```shell ? Do you want to generate scaffolding with an existing abi contract? @@ -66,7 +66,7 @@ You will then be prompted to select what `events` and/or `functions` that you wa ### For an Existing SubQuery Project -You can also generate additional scaffolded code new new contracts and append this code to your existing `project.ts`. This is done using the `subql codegen:generate` command from within your project workspace. +You can also generate additional scaffolded code for new contracts and append this code to your existing `project.ts`. This is done using the `subql codegen:generate` command from within your project workspace. ```shell subql codegen:generate \ @@ -146,13 +146,13 @@ import { } from "../types/abi-interfaces/Gravity"; export async function handleNewGravatarGravityLog( - log: NewGravatarLog, + log: NewGravatarLog ): Promise { // Place your code logic here } export async function handleUpdatedGravatarGravityLog( - log: UpdatedGravatarLog, + log: UpdatedGravatarLog ): Promise { // Place your code logic here } @@ -268,7 +268,7 @@ The `schema.graphql` file outlines the various GraphQL schemas. The structure of 3. [Entity Relationships](./graphql.md#entity-relationships): An entity often has nested relationships with other entities. Setting the field value to another entity name will define a relationship between these two entities. 4. [Indexing](./graphql.md#indexing-by-non-primary-key-field): Enhance query performance by implementing the @index annotation on a non-primary-key field.
-Here's an example of what your GraphQL Here is an example of a schema which implements all of these recomendations, as well a relationship of many-to-many: +Here is an example of a schema which implements all of these recommendations, as well as a many-to-many relationship: ::: tip @@ -332,7 +332,7 @@ npm run-script codegen ::: -This will create a new directory (or update the existing) `src/types` which contain generated entity classes for each type you have defined previously in `schema.graphql`. These classes provide type-safe entity loading, read and write access to entity fields - see more about this process in [the GraphQL Schema](../build/graphql.md). All entites can be imported from the following directory: +This will create a new directory (or update the existing) `src/types` which contains generated entity classes for each type you have defined previously in `schema.graphql`. These classes provide type-safe entity loading, read and write access to entity fields - see more about this process in [the GraphQL Schema](../build/graphql.md). All entities can be imported from the following directory: ```ts import { GraphQLEntity1, GraphQLEntity2 } from "../types"; @@ -342,7 +342,7 @@ import { GraphQLEntity1, GraphQLEntity2 } from "../types"; If you're creating a new Ethereum based project (including Ethereum EVM, Cosmos Ethermint, Avalanche, and Substrate's Frontier EVM & Acala EVM+), the `codegen` command will also generate types and save them into `src/types` using the `npx typechain --target=ethers-v5` command, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. -It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `src/typs/abi-interfaces` and `src/typs/contracts` directories.
+It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `src/types/abi-interfaces` and `src/types/contracts` directories. In the example [Gravatar SubQuery project](../quickstart/quickstart_chains/ethereum-gravatar.md), you would import these types like so. @@ -354,7 +354,7 @@ import { GraphQLEntity1, GraphQLEntity2 } from "../types"; Codegen will also generate wrapper types for Cosmos Protobufs, the `codegen` command will also generate types and save them into `src/types` directory, providing you with more typesafety specifically for Cosmos Message Handers. -It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to the `src/typs/proto-interfaces` directory. +It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to the `src/types/proto-interfaces` directory. **Note**: The protobuf types you wish to generate must be kept in the `proto` directory (at the root of your project) and you must also ensure the structure of the protobufs are in accordance with the provided protobuf. For example `osmosis.gamm.v1beta1` would have the file structure of `/proto/osmosis/gamm/v1beta1/.proto` @@ -390,14 +390,14 @@ Once `codegen` is executed you will find the message types under `src/types/Cosm } ``` -If you are uncertain of the available messages, you can always check the generated proto interfaces udner `src/types/proto-interfaces/`. You import them into your message handlers like so: +If you are uncertain of the available messages, you can always check the generated proto interfaces under `src/types/proto-interfaces/`. 
You import them into your message handlers like so: ```ts import { CosmosMessage } from "@subql/types-cosmos"; import { MsgSwapExactAmountIn } from "../types/proto-interfaces/osmosis/gamm/v1beta1/tx"; export async function handleMessage( - msg: CosmosMessage, + msg: CosmosMessage ): Promise { // Do something with typed event const messagePayload: MsgSwapExactAmountIn = msg.msg.decodedMsg; @@ -426,7 +426,7 @@ Similar to Ethereum ABI codegen, you will need to include the path and name for } ``` -All generated files could be found under `src/typs/cosmwasm-interfaces` and `src/typs/cosmwasm-interface-wrappers` directories. +All generated files could be found under `src/types/cosmwasm-interfaces` and `src/types/cosmwasm-interface-wrappers` directories. **Note**: For contract ABIs you wish to generate, you must ensure that each ABI is in its own directory. For example `/abis/baseMinter/base-minter.json` and `/abis/cw20/cw20.json`. diff --git a/docs/indexer/build/multi-chain.md b/docs/indexer/build/multi-chain.md index e8a27c1b40b..4407d4ed132 100644 --- a/docs/indexer/build/multi-chain.md +++ b/docs/indexer/build/multi-chain.md @@ -47,7 +47,7 @@ This feature is not compatible with [Historical State](../run_publish/historical This feature is only supported for Partner Plan Customers in the [SubQuery Managed Service](https://managedservice.subquery.network). All others can run this locally in their own infrastructure provider. ::: -## Intialising and creating a multi-chain project +## Initialising and creating a multi-chain project Creating a multi-chain project involves several steps that enable you to index multiple networks into a single database. This is achieved by configuring a multi-chain manifest file, generating required entities and datasource templates, adding new projects to the manifest, and publishing the multi-chain project. 
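As a rough illustration of the first step above, the multi-chain manifest is a small YAML file that lists the per-network project manifests. The sketch below is an assumption-laden example — the file name, the two referenced project files, and the exact fields may differ by SDK version:

```yaml
# subquery-multichain.yaml — hypothetical sketch; the referenced
# project files and exact field names are assumptions
specVersion: 1.0.0
query:
  name: "@subql/query"
  version: "*"
projects:
  - project-ethereum.yaml # each entry points to a normal single-chain manifest
  - project-avalanche.yaml
```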
diff --git a/docs/indexer/build/optimisation.md b/docs/indexer/build/optimisation.md index 6543538732b..f2fee1336d0 100644 --- a/docs/indexer/build/optimisation.md +++ b/docs/indexer/build/optimisation.md @@ -90,7 +90,7 @@ There is more information focussed on the DevOps and configuration of [running h ## Review Project Architecture -If your project requires indexing all the blocks, transactions alongside more specific data, consider dividing it into separate SubQuery projects responsible for different data sources. If such separation is possible it can provide better development experience and efficient workflow. This decision can be compared to a design decision between microservices and monolith project architecture. +If your project requires indexing all the blocks, transactions alongside more specific data, consider dividing it into separate SubQuery projects responsible for different data sources. If such separation is possible it can provide better development experience and efficient workflow. This decision can be compared to a design decision between micro-services and monolith project architecture. We recommend this approach, because it takes time to index all the blocks and it can slow down your project significantly. If you want to apply some changes to your filters or entities shape you may need to remove your database and reindex the whole project from the beginning. 
diff --git a/docs/indexer/build/project-upgrades.md b/docs/indexer/build/project-upgrades.md index a909d7cd4ee..e81409bcbc4 100644 --- a/docs/indexer/build/project-upgrades.md +++ b/docs/indexer/build/project-upgrades.md @@ -5,7 +5,7 @@ Project upgrades allow you to safely make changes to your project at a specified - Perform upgrades to your project at a specific height - Change the GraphQL/database schema - Support changes or new deployments to smart contracts -- When you find a bug, but want to maintain previous data for backwards compatiability +- When you find a bug, but want to maintain previous data for backwards compatibility It's particularly useful when you want to maintain the data of the previous project (e.g. when the previous project takes a long time to index from scratch), and you only want to add a new feature from a specific point in time. @@ -49,7 +49,7 @@ Schema migrations allow you to make updates to your GraphQL schema, and the data When a project upgrade is executed with valid schema migrations, it will compare your current schema with the schema provided in the latest version (the one you are upgrading too), and attempt to make non-destructive changes your database. ::: warning -If you re-run a previous version of you project accidentally, SubQuery will attempt to downgrade changes to your schema. +If you re-run a previous version of your project accidentally, SubQuery will attempt to downgrade changes to your schema. 
::: ### Schema Migration Requirements diff --git a/docs/indexer/build/substrate-evm.md b/docs/indexer/build/substrate-evm.md index ea5cde95c1b..227cbd6961f 100644 --- a/docs/indexer/build/substrate-evm.md +++ b/docs/indexer/build/substrate-evm.md @@ -146,7 +146,7 @@ There are a couple of improvements from basic log filters: ### Codegen -If you're creating a new Substrate Frontier EVM or Acala EVM+ based project, the normal [codegen](./introduction.md#code-generation) command will also generate ABI types and save them into `src/types` using the `npx typechain --target=ethers-v5` command, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `src/typs/**.ts`. In the example [Moonriver EVM Starter SubQuery project](https://github.com/subquery/subql-starter/tree/main/Moonriver/moonriver-evm-starter), you would import these types like so. +If you're creating a new Substrate Frontier EVM or Acala EVM+ based project, the normal [codegen](./introduction.md#code-generation) command will also generate ABI types and save them into `src/types` using the `npx typechain --target=ethers-v5` command, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `src/types/**.ts`. In the example [Moonriver EVM Starter SubQuery project](https://github.com/subquery/subql-starter/tree/main/Moonriver/moonriver-evm-starter), you would import these types like so. 
```ts import { GraphQLEntity1, GraphQLEntity2 } from "../types"; @@ -183,7 +183,7 @@ type ApproveCallArgs = [string, BigNumber] & { }; export async function handleFrontierEvmEvent( - event: FrontierEvmEvent, + event: FrontierEvmEvent ): Promise { const transaction = new Transaction(event.transactionHash); @@ -196,7 +196,7 @@ export async function handleFrontierEvmEvent( } export async function handleFrontierEvmCall( - event: FrontierEvmCall, + event: FrontierEvmCall ): Promise { const approval = new Approval(event.hash); @@ -228,7 +228,7 @@ type ApproveCallArgs = [string, BigNumber] & { }; export async function handleAcalaEvmEvent( - event: AcalaEvmEvent, + event: AcalaEvmEvent ): Promise { const transaction = new Transaction(event.transactionHash); @@ -241,7 +241,7 @@ export async function handleAcalaEvmEvent( } export async function handleAcalaEvmCall( - event: AcalaEvmCall, + event: AcalaEvmCall ): Promise { const approval = new Approval(event.hash); diff --git a/docs/indexer/build/testing.md b/docs/indexer/build/testing.md index 0acc217ddfc..4b4012f34e1 100644 --- a/docs/indexer/build/testing.md +++ b/docs/indexer/build/testing.md @@ -4,7 +4,7 @@ This document outlines the various testing approaches when building a SubQuery Project. -The SubQuery testing framework provides an easy way to test the behavior of mapping handlers and validate the data being indexed in an automated way. +The SubQuery testing framework provides an easy way to test the behaviour of mapping handlers and validate the data being indexed in an automated way. ## Manual Approaches @@ -236,13 +236,13 @@ Each [SubQuery starter project](https://github.com/subquery/subql-starter/blob/m ### When not to use the SubQuery Testing Framework -While the testing framework is a powerful tool for testing the behavior of mapping handlers and validating the data being indexed in SubQuery projects, there are certain limitations and use cases that are not suitable for this framework. 
+While the testing framework is a powerful tool for testing the behaviour of mapping handlers and validating the data being indexed in SubQuery projects, there are certain limitations and use cases that are not suitable for this framework. Limitations: - Integration and end-to-end testing: The testing framework is specifically designed for testing individual mapping handlers. It is not suitable for testing the integration of multiple components or the end-to-end functionality of your SubQuery project. - State persistence: The testing framework does not persist state between test cases. This means that any state changes made during a test case will not carry over to subsequent test cases. If your mapping handlers rely on previous state changes, the testing framework may not be suitable. -- Dynamic data sources: The testing framework cannot be used to test dynamic data sources. It is designed to test the behavior of mapping handlers and validate the data being indexed in SubQuery projects, but it does not support testing the functionality related to dynamically adding or removing data sources during runtime. +- Dynamic data sources: The testing framework cannot be used to test dynamic data sources. It is designed to test the behaviour of mapping handlers and validate the data being indexed in SubQuery projects, but it does not support testing the functionality related to dynamically adding or removing data sources during runtime. What You Should Not Use It For: diff --git a/docs/indexer/miscellaneous/avalanche-eth-migration.md b/docs/indexer/miscellaneous/avalanche-eth-migration.md index 65f027adfb9..342d3b68760 100644 --- a/docs/indexer/miscellaneous/avalanche-eth-migration.md +++ b/docs/indexer/miscellaneous/avalanche-eth-migration.md @@ -1,7 +1,7 @@ # Avalanche SDK Migration :::info TLDR -We are no longer supporting `@subql/node-avalanche` and all Avalanche users should migrate their projects to use `@subql/node-ethereum` to recieve the latest updates. 
+We are no longer supporting `@subql/node-avalanche` and all Avalanche users should migrate their projects to use `@subql/node-ethereum` to receive the latest updates. The new `@subql/node-ethereum` is feature equivalent, and provides some massive performance improvements and support for new features. @@ -19,7 +19,7 @@ The new package is largely the same, with the following main changes: - `avalanche/BlockHandler` to `ethereum/BlockHandler` - `avalanche/TransactionHandler` to `ethereum/TransactionHandler` - `avalanche/LogHandler` to `ethereum/LogHandler` -- Handler functions now recieve `EthereumBlock`, `EthereumTransaction`, or `EthereumLog` instead of `AvalancheBlock`, `AvalancheTransaction`, or `AvalancheLog` +- Handler functions now receive `EthereumBlock`, `EthereumTransaction`, or `EthereumLog` instead of `AvalancheBlock`, `AvalancheTransaction`, or `AvalancheLog` ## Migrating @@ -186,7 +186,7 @@ There are minimal type changes here, but before we dive in, you should be famili If you're creating a new Etheruem based project (including Avalanche), this command will also generate ABI types and save them into `src/types` using the `npx typechain --target=ethers-v5` command, allowing you to bind these contracts to specific addresses in the mappings and call read-only contract methods against the block being processed. -It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. All of these types are written to `src/typs/abi-interfaces` and `src/typs/contracts` directories. In the [Avalanche Quick Start](https://github.com/subquery/ethereum-subql-starter/blob/main/Avalanche/avalanche-starter/src/mappings/mappingHandlers.ts#L5), you would import these types like so. +It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. 
All of these types are written to `src/types/abi-interfaces` and `src/types/contracts` directories. In the [Avalanche Quick Start](https://github.com/subquery/ethereum-subql-starter/blob/main/Avalanche/avalanche-starter/src/mappings/mappingHandlers.ts#L5), you would import these types like so. ```ts import { Approval, Transaction } from "../types"; @@ -248,6 +248,6 @@ Once you have completed the above steps, your project should complete without fa - [Real-time indexing (Block Confirmations)](../build/manifest/avalanche.md#real-time-indexing-block-confirmations) resulting in an insanely quick user experience for your customers. - [Contract query support](../build/mapping/avalanche.md#querying-contracts) allowing querying contract state - [Third-party Library Support](../build/mapping/avalanche.md#third-party-library-support---the-sandbox) allowing you to retrieve data from external API endpoints, non historical RPC calls, and import your own external libraries into your projects -- [Testing Framework](../build/testing.md) providing an easy way to test the behavior of mapping handlers and validate the data being indexed in an automated way. +- [Testing Framework](../build/testing.md) providing an easy way to test the behaviour of mapping handlers and validate the data being indexed in an automated way. - [Multi-chain indexing support](../build/multi-chain.md) to index data from across different networks (e.g. Ethereum and Avalanche) into the same database, this allows you to query a single endpoint to get data for all supported networks. - [Dynamic data sources](../build/dynamicdatasources.md) to index factory contracts that create other contracts (e.g. 
a DEX) diff --git a/docs/indexer/miscellaneous/faqs.md b/docs/indexer/miscellaneous/faqs.md index abb0875961f..a428bdcb74b 100644 --- a/docs/indexer/miscellaneous/faqs.md +++ b/docs/indexer/miscellaneous/faqs.md @@ -74,7 +74,7 @@ Our goal is to save developers' time and money by eliminating the need of buildi **SubQuery Managed Service** -SubQuery also provides free, production grade hosting of projects for developers. Our Managed Service removes the responsiblity of managing infrastructure, so that developers do what they do best — build. Find out more [here](../run_publish/publish.md). +SubQuery also provides free, production grade hosting of projects for developers. Our Managed Service removes the responsibility of managing infrastructure, so that developers do what they do best — build. Find out more [here](../run_publish/publish.md). **The SubQuery Network** @@ -96,7 +96,7 @@ SubQuery is open-source, meaning you have the freedom to run it in the following - Locally on your own computer (or a cloud provider of your choosing), [view the instructions on how to run SubQuery Locally](../run_publish/run.md) - By publishing it to our enterprise-level [Managed Service](https://managedservice.subquery.network), where we'll host your SubQuery project in production ready services for mission critical data with zero-downtime blue/green deployments. We even have a generous free tier. [Find out how](../run_publish/publish.md) -- By publishing it to the decentralised [SubQuery Network](https://subquery.network/network), the most open, performant, reliable, and scalable data service for dApp developers. The SubQuery Network indexes and services data to the global community in an incentivised and verifiable way. [Read more](../../subquery_network/publish.md) +- By publishing it to the decentralised [SubQuery Network](https://subquery.network/network), the most open, performant, reliable, and scalable data service for dApp developers. 
The SubQuery Network indexes and services data to the global community in an incentivised and verifiable way. [Read more](../../subquery_network/architects/publish.md) ## How can I optimise my project to speed it up? diff --git a/docs/indexer/quickstart/quickstart.md b/docs/indexer/quickstart/quickstart.md index 0a3bd0e8b0c..4ae244266ce 100644 --- a/docs/indexer/quickstart/quickstart.md +++ b/docs/indexer/quickstart/quickstart.md @@ -104,7 +104,7 @@ SubQuery supports various blockchain networks and provides a dedicated guide for Scaffolding saves time during SubQuery project creation by automatically generating typescript facades for EVM transactions, logs, and types. -When you are initalising a new project using the `subql init` command, SubQuery will give you the option to set up a scaffolded SubQuery project based on your JSON ABI. If you select a compatible network type (EVM), it will prompt: +When you are initialising a new project using the `subql init` command, SubQuery will give you the option to set up a scaffolded SubQuery project based on your JSON ABI. If you select a compatible network type (EVM), it will prompt: ```shell ? Do you want to generate scaffolding with an existing abi contract? @@ -122,9 +122,9 @@ You can read more about this feature in [Project Scaffolding](../build/introduct SubQuery provides support for environment variables to configure your project dynamically. This enables flexibility in managing different configurations for development, testing, and production environments. -To utilize environment variable support: +To utilise environment variable support: -The .env files are automatically created when you initialize a project using the CLI. You can modify these files to adjust configurations according to your requirements. +The .env files are automatically created when you initialise a project using the CLI. You can modify these files to adjust configurations according to your requirements. 
```shell # Example .env @@ -148,8 +148,8 @@ The package.json file includes build scripts that allow you to build with either "build:develop": "NODE_ENV=develop subql codegen && NODE_ENV=develop subql build" } ``` -Use `build` script to generate artifacts using the default production .env settings. -Use `build:develop` script to generate artifacts using the development .env.develop settings. +Use `build` script to generate artefacts using the default production .env settings. +Use `build:develop` script to generate artefacts using the development .env.develop settings. Using environment variables and .env files provides a convenient way to manage project configurations and keep sensitive information secure. diff --git a/docs/indexer/quickstart/snippets/evm-abi.md b/docs/indexer/quickstart/snippets/evm-abi.md index 74287c05f8a..2991e96a82e 100644 --- a/docs/indexer/quickstart/snippets/evm-abi.md +++ b/docs/indexer/quickstart/snippets/evm-abi.md @@ -2,4 +2,4 @@ As you're creating a new Etheruem based project, this command will also generate It will also generate a class for every contract event to provide easy access to event parameters, as well as the block and transaction the event originated from. Read about how this is done in [EVM Codegen from ABIs](../../build/introduction.md#evm-codegen-from-abis). -All of these types are written to `src/typs/abi-interfaces` and `src/typs/contracts` directories. In this example project, you would import these types like so. +All of these types are written to `src/types/abi-interfaces` and `src/types/contracts` directories. In this example project, you would import these types like so. 
diff --git a/docs/indexer/run_publish/monitor.md b/docs/indexer/run_publish/monitor.md index 52d81efd7cd..9dd68dd01a1 100644 --- a/docs/indexer/run_publish/monitor.md +++ b/docs/indexer/run_publish/monitor.md @@ -1,6 +1,6 @@ # Monitor your SubQuery Project with Prometheus and Grafana -This guide shows you how to pull metrics into [Prometheus](https://prometheus.io/), an open-source tool for storing, aggregating, and querying time series data. It also shows you how to connect [Grafana](https://grafana.com/) to Prometheus for flexible data visualizations. +This guide shows you how to pull metrics into [Prometheus](https://prometheus.io/), an open-source tool for storing, aggregating, and querying time series data. It also shows you how to connect [Grafana](https://grafana.com/) to Prometheus for flexible data visualisations. ## Setting Up Monitoring diff --git a/docs/indexer/run_publish/publish.md b/docs/indexer/run_publish/publish.md index 693cca74e3f..22feba18b45 100644 --- a/docs/indexer/run_publish/publish.md +++ b/docs/indexer/run_publish/publish.md @@ -83,7 +83,7 @@ Note: With `@subql/cli` version 1.3.0 or above, when using `subql publish`, a co ::: details What happens during IPFS Deployment? -IPFS deployment represents an independent and unique existence of a SubQuery project on a decentralized network. Therefore, any changes with the code in the project will affect its uniqueness. If you need to adjust your business logic, e.g. change the mapping function, you must republish the project, and the `CID` will change. +IPFS deployment represents an independent and unique existence of a SubQuery project on a decentralised network. Therefore, any changes with the code in the project will affect its uniqueness. If you need to adjust your business logic, e.g. change the mapping function, you must republish the project, and the `CID` will change. 
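The reason the `CID` changes is that IPFS identifiers are derived from the content itself. As a rough analogy (real CIDs are multihash-based, so plain sha256 here is a simplification, not how IPFS computes them):

```ts
import { createHash } from "node:crypto";

// Content addressing: the identifier is a function of the bytes, so any
// change to the project bundle yields a different identifier.
function contentId(bytes: string): string {
  return createHash("sha256").update(bytes).digest("hex");
}

const v1 = contentId("project bundle v1");
const v2 = contentId("project bundle v1 with an edited mapping function");
// v1 !== v2, which is why editing a mapping function forces a republish.
```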
For now, to view the project you have published, use a `REST` API tool such as [Postman](https://web.postman.co/), and use the `POST` method with the following example URL to retrieve it:`https://ipfs.subquery.network/ipfs/api/v0/cat?arg=`. @@ -122,7 +122,7 @@ specVersion: 0.2.0 To create your first project, head to [SubQuery Managed Service](https://managedservice.subquery.network). You'll need to authenticate with your GitHub account to login. -On first login, you will be asked to authorize SubQuery. We only need your email address to identify your account, and we don't use any other data from your GitHub account for any other reasons. In this step, you can also request or grant access to your GitHub Organization account so you can post SubQuery projects under your GitHub Organization instead of your personal account. +On first login, you will be asked to authorise SubQuery. We only need your email address to identify your account, and we don't use any other data from your GitHub account for any other reasons. In this step, you can also request or grant access to your GitHub Organisation account so you can post SubQuery projects under your GitHub Organisation instead of your personal account. ![Revoke approval from a GitHub account](/assets/img/run_publish/project_auth_request.png) @@ -130,7 +130,7 @@ SubQuery Projects is where you manage all your hosted projects uploaded to the S ![Projects Login](/assets/img/run_publish/projects_dashboard.png) -If you have a GitHub Organization accounts connected, you can use the switcher on the header to change between your personal account and your GitHub Organization account. Projects created in a GitHub Organization account are shared between members in that GitHub Organization. To connect your GitHub Organization account, you can [follow the steps here](publish.md#add-github-organization-account-to-subquery-projects). 
+If you have a GitHub Organisation account connected, you can use the switcher on the header to change between your personal account and your GitHub Organisation account. Projects created in a GitHub Organisation account are shared between members in that GitHub Organisation. To connect your GitHub Organisation account, you can [follow the steps here](publish.md#add-github-organization-account-to-subquery-projects). ![Switch between GitHub accounts](/assets/img/run_publish/projects_account_switcher.png) @@ -174,7 +174,7 @@ With your new project, you'll see a "Deploy your first version" button. Click th - **Override Network and Dictionary Endpoints:** You can override the endpoints in your project manifest here. - **Indexer Version:** This is the version of SubQuery's node service that you want to run this SubQuery on. See [`@subql/node`](https://www.npmjs.com/package/@subql/node). - **Query Version:** This is the version of SubQuery's query service that you want to run this SubQuery on. See [`@subql/query`](https://www.npmjs.com/package/@subql/query). -- **Advanced Settings:** There are numerous advanced settings which are explained via the inbuild help feature. +- **Advanced Settings:** There are numerous advanced settings which are explained via the inbuilt help feature. ![Deploy your first Project](/assets/img/run_publish/projects_first_deployment.png) @@ -243,7 +243,7 @@ jobs: ## Next Steps - Connect to your Project -Once your deployment has succesfully completed and our nodes have indexed your data from the chain, you'll be able to connect to your project via the displayed GraphQL Query endpoint. +Once your deployment has successfully completed and our nodes have indexed your data from the chain, you'll be able to connect to your project via the displayed GraphQL Query endpoint.
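Connecting to that endpoint is a plain GraphQL-over-HTTP POST. A minimal request-building sketch; the `_metadata` query shown is commonly available on SubQuery query endpoints, but verify it against your own schema:

```ts
// Build the request a GraphQL client POSTs to the displayed query endpoint.
function buildGraphqlRequest(
  query: string,
  variables: Record<string, unknown> = {},
): { method: string; headers: Record<string, string>; body: string } {
  return {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables }),
  };
}

const request = buildGraphqlRequest("{ _metadata { lastProcessedHeight } }");
// Send with e.g. fetch("<your displayed GraphQL Query endpoint>", request)
```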
![Project being deployed and synced](/assets/img/run_publish/projects_deploy_sync.png) @@ -291,7 +291,7 @@ If you just want to upgrade to the latest indexer ([`@subql/node`](https://www.n ## Add GitHub Organization Account to SubQuery Projects -It is common to publish your SubQuery project under the name of your GitHub Organization account rather than your personal GitHub account. At any point your can change your currently selected account on [SubQuery Managed Service](https://managedservice.subquery.network) using the account switcher. +It is common to publish your SubQuery project under the name of your GitHub Organization account rather than your personal GitHub account. At any point you can change your currently selected account on [SubQuery Managed Service](https://managedservice.subquery.network) using the account switcher. If you can't see your GitHub Organization account listed in the switcher, then you may need to grant access to SubQuery for your GitHub Organization (or request it from an administrator). To do this, you first need to revoke permissions from your GitHub account to the SubQuery Application. Then, login to your account settings in GitHub, go to Applications, and under the Authorized OAuth Apps tab, revoke SubQuery - [you can follow the exact steps here](https://docs.github.com/en/github/authenticating-to-github/keeping-your-account-and-data-secure/reviewing-your-authorized-applications-oauth). **Don't worry, this will not delete your SubQuery project and you will not lose any data.** diff --git a/docs/indexer/run_publish/query/aggregate.md b/docs/indexer/run_publish/query/aggregate.md index 433a9ba7786..40a800fc85c 100644 --- a/docs/indexer/run_publish/query/aggregate.md +++ b/docs/indexer/run_publish/query/aggregate.md @@ -8,7 +8,7 @@ Aggregate functions are usually used with the GroupBy function in your query. GroupBy allows you to quickly get distinct values in a set from SubQuery in a single query. 
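Conceptually, a GroupBy aggregation collapses rows to one value per distinct key. A client-side sketch of what such a query computes (the `token` and `amount` fields are hypothetical, and the server does this work for you in a single query):

```ts
type Row = { token: string; amount: number };

// Equivalent of a grouped-aggregate query: one summed value per distinct key,
// computed in-process here purely for illustration.
function sumByGroup(rows: Row[]): Map<string, number> {
  const totals = new Map<string, number>();
  for (const row of rows) {
    totals.set(row.token, (totals.get(row.token) ?? 0) + row.amount);
  }
  return totals;
}

const totals = sumByGroup([
  { token: "DOT", amount: 5 },
  { token: "KSM", amount: 2 },
  { token: "DOT", amount: 3 },
]);
// totals holds DOT -> 8 and KSM -> 2: distinct keys plus a per-group sum.
```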
-![Graphql Groupby](/assets/img/run_publish/graphql_aggregation.png) +![GraphQL Groupby](/assets/img/run_publish/graphql_aggregation.png) ## Advanced Aggregate Functions diff --git a/docs/indexer/run_publish/query/graphql.md b/docs/indexer/run_publish/query/graphql.md index 5597947320c..07a815c7e0f 100644 --- a/docs/indexer/run_publish/query/graphql.md +++ b/docs/indexer/run_publish/query/graphql.md @@ -8,7 +8,7 @@ You may want to take a look at the information we have [on the differences](../. You can follow the [official GraphQL guide here](https://graphql.org/learn/) to learn more about GraphQL, how it works, and how to use it: -- There are libraries to help you implement GraphQL in [many different languages](https://graphql.org/code/) - we recommend [Apollo Client](https://www.apollographql.com/docs/react/) as it will allow a [seamless migration to our decentralised network](../../../subquery_network/publish.md#changes-to-your-dapp) when you publish your project in the future. +- There are libraries to help you implement GraphQL in [many different languages](https://graphql.org/code/) - we recommend [Apollo Client](https://www.apollographql.com/docs/react/) as it will allow a [seamless migration to our decentralised network](../../../subquery_network/architects/publish.md#changes-to-your-dapp) when you publish your project in the future. - You will want to review advice on how to [structure your GraphQL queries to maximise performance](../../build/optimisation.md#query-performance-advice). - For an in-depth learning experience with practical tutorials, see [How to GraphQL](https://www.howtographql.com/). - Check out the free online course, [Exploring GraphQL: A Query Language for APIs](https://www.edx.org/course/exploring-graphql-a-query-language-for-apis). 
@@ -19,7 +19,7 @@ On the top right of the playground, you'll find a _Docs_ button that will open a ### Full Text Search -The result of the directive will provide new connections to the graphql schema allowing you to search. They follow the pattern `search` and take a `search` parameter. +The result of the directive will provide new connections to the GraphQL schema allowing you to search. They follow the pattern `search` and take a `search` parameter. The search parameter allows for more than just searching for strings, you can do AND (`&`), OR (`|`) , NOT (`!`, `-`), begins with (`:*`, `*`>) and follows (`>`, `<->`). diff --git a/docs/indexer/run_publish/query/other_tools/bigquery.md b/docs/indexer/run_publish/query/other_tools/bigquery.md index fb513b94c9b..a3c0c181fb0 100644 --- a/docs/indexer/run_publish/query/other_tools/bigquery.md +++ b/docs/indexer/run_publish/query/other_tools/bigquery.md @@ -1,12 +1,12 @@ # Querying Data with BigQuery -Google BigQuery is a fully managed, serverless data warehouse provided by Google Cloud. It allows you to run super-fast, SQL-like queries against large datasets. BigQuery is particularly well-suited for analyzing large volumes of data, including blockchain data, due to its scalability, speed, and ease of use. You might use BigQuery to analyse indexed SubQuery data due to: +Google BigQuery is a fully managed, serverless data warehouse provided by Google Cloud. It allows you to run super-fast, SQL-like queries against large datasets. BigQuery is particularly well-suited for analysing large volumes of data, including blockchain data, due to its scalability, speed, and ease of use. You might use BigQuery to analyse indexed SubQuery data due to: -1. Scalability: BigQuery is designed to handle massive datasets, making it suitable for analyzing the vast amounts of data generated by blockchain networks. +1.
Scalability: BigQuery is designed to handle massive datasets, making it suitable for analysing the vast amounts of data generated by blockchain networks. 2. Speed: BigQuery can process queries on large datasets quickly, allowing you to get insights from your blockchain data in near real-time. -3. SQL-like Queries: BigQuery supports standard SQL queries, making it easy for analysts and developers familiar with SQL to analyze blockchain data without having to learn a new query language. -4. Serverless: With BigQuery, you don't need to manage any infrastructure. Google handles the infrastructure, so you can focus on analyzing your data. -5. Integration: BigQuery integrates seamlessly with other Google Cloud services, such as Google Cloud Storage and Google Data Studio, making it easy to ingest, store, and visualize blockchain data. +3. SQL-like Queries: BigQuery supports standard SQL queries, making it easy for analysts and developers familiar with SQL to analyse blockchain data without having to learn a new query language. +4. Serverless: With BigQuery, you don't need to manage any infrastructure. Google handles the infrastructure, so you can focus on analysing your data. +5. Integration: BigQuery integrates seamlessly with other Google Cloud services, such as Google Cloud Storage and Google Data Studio, making it easy to ingest, store, and visualise blockchain data. SubQuery can easily be integrated with BigQuery in only a few steps, this means that you can export indexed blockchain data directly from SubQuery to BigQuery. @@ -76,7 +76,7 @@ After loading the data, you can proceed to query it. The provided screenshot fro ![](/assets/img/run_publish/bigquery/consoleBigquery.png) -By uploading your data to BigQuery, you not only gain access to a platform designed for limitless scalability and seamless integration with Google Cloud services but also benefit from a serverless architecture.
This allows you to focus on analytics rather than infrastructure management, marking a strategic move towards maximizing the potential of your data. +By uploading your data to BigQuery, you not only gain access to a platform designed for limitless scalability and seamless integration with Google Cloud services but also benefit from a serverless architecture. This allows you to focus on analytics rather than infrastructure management, marking a strategic move towards maximising the potential of your data. ## Synchronise Updates Automatically diff --git a/docs/indexer/run_publish/query/query.md b/docs/indexer/run_publish/query/query.md index 503af49fe8b..f8f111f80a1 100644 --- a/docs/indexer/run_publish/query/query.md +++ b/docs/indexer/run_publish/query/query.md @@ -10,7 +10,7 @@ Both the decentralised SubQuery Network and the SubQuery Managed Service only pr ## Integrations with other Developer Tools -SubQuery is open-source, and we are busy creating a rich ecosytem of developer tools that it works well with. +SubQuery is open-source, and we are busy creating a rich ecosystem of developer tools that it works well with. We now have guides to expose SubQuery data to the following locations. @@ -19,6 +19,6 @@ We now have guides to expose SubQuery data to the following locations. - [Direct Postgres Access](../run.md#connect-to-database) - you can directly connect to the Postgres data from any other tool or service. - [Metabase](./other_tools/metabase.md) - an industry leading open-source and free data visualisation and business intelligence tool. - [CSV Export](../references.md#csv-out-dir) - export indexed datasets to CSV files easily. -- [BigQuery](./other_tools/bigquery.md) - a fully managed, serverless data warehouse provided by Google Cloud, well-suited for analyzing large volumes of data. +- [BigQuery](./other_tools/bigquery.md) - a fully managed, serverless data warehouse provided by Google Cloud, well-suited for analysing large volumes of data.
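The CSV export path above is one way to stage data for BigQuery ingestion. A sketch of rows-to-CSV serialisation (quoting is simplified relative to RFC 4180, and the field names are hypothetical):

```ts
// Serialise entity rows to CSV, e.g. for loading into a warehouse.
// Values containing commas, quotes or newlines are quoted; quotes doubled.
function toCsv(rows: Record<string, string | number>[], columns: string[]): string {
  const escape = (v: string | number): string => {
    const s = String(v);
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const lines = [columns.join(",")];
  for (const row of rows) {
    lines.push(columns.map((c) => escape(row[c] ?? "")).join(","));
  }
  return lines.join("\n");
}

const csv = toCsv([{ id: "0x01", amount: 42 }], ["id", "amount"]);
// csv is "id,amount\n0x01,42"
```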
![Integration Ecosystem](/assets/img/run_publish/integration_ecosystem.png) diff --git a/docs/indexer/run_publish/references.md b/docs/indexer/run_publish/references.md index 520ed50195e..b64ac2f4f06 100644 --- a/docs/indexer/run_publish/references.md +++ b/docs/indexer/run_publish/references.md @@ -51,7 +51,7 @@ For more info, visit [basic workflows](../build/introduction.md#build). ### --block-confirmations -**Positive Integer (default: `20`)** - (Only for `subql-node-ethereum`) The number of blocks behind the head to be considered finalized, this has no effect with non-EVM networks. +**Positive Integer (default: `20`)** - (Only for `subql-node-ethereum`) The number of blocks behind the head to be considered finalised; this has no effect with non-EVM networks. ### -c, --config @@ -149,7 +149,7 @@ subql-node --subquery . ### force-clean -This subcommand forces the project schemas and tables to be regenerated. It is helpful to use when iteratively developing graphql schemas in order to ensure a clean state when starting a project. Note that this flag will also wipe all indexed data. +This subcommand forces the project schemas and tables to be regenerated. It is helpful to use when iteratively developing GraphQL schemas in order to ensure a clean state when starting a project. Note that this flag will also wipe all indexed data. This will also drop all related schema and tables of the project. `-f`, `--subquery` flag must be passed in, to set path of the targeted project. diff --git a/docs/indexer/run_publish/run.md b/docs/indexer/run_publish/run.md index 01242293ab3..27c6beab5f5 100644 --- a/docs/indexer/run_publish/run.md +++ b/docs/indexer/run_publish/run.md @@ -370,7 +370,7 @@ export DB_HOST=localhost subql-query --name --playground ``` -Make sure the project name is the same as the project name when you [initialize the project](../quickstart/quickstart.md#_2-initialise-the-subquery-starter-project). Also, check the environment variables are correct.
+Make sure the project name is the same as the one used when you [initialise the project](../quickstart/quickstart.md#_2-initialise-the-subquery-starter-project). Also, check the environment variables are correct. After running the subql-query service successfully, open your browser and head to `http://localhost:3000`. You should see a GraphQL playground showing in the Explorer and the schema that is ready to query. diff --git a/docs/subquery_network/architects/publish.md b/docs/subquery_network/architects/publish.md index 0939c108f43..46f9cfcb669 100644 --- a/docs/subquery_network/architects/publish.md +++ b/docs/subquery_network/architects/publish.md @@ -15,12 +15,12 @@ The SubQuery Network is the future of web3 infrastructure, it allows you to comp ## Prerequisites for your project running on the Network -1. The SubQuery Network does not support GraphQL subscriptions, so you can't enable the `--subscription` [command line argument](../../run_publish/query/subscription.md) +1. The SubQuery Network does not support GraphQL subscriptions, so you can't enable the `--subscription` [command line argument](../../indexer/run_publish/query/subscription.md) 2. Your client application (the one that will query data) must be able to run a JS library 3. Your project can generate stable proof of indexing results. This means you should avoid: 1. Random ordered DB operations, e.g. avoid using `Promise.all()` in your mapping functions. 2. Introducing external data dependent on runtime, e.g. initialising a new current date using `new Date()`. -4. Your project is published to IPFS, [follow the guide here](../../run_publish/publish.md#publish-your-subquery-project-to-ipfs). +4. Your project is published to IPFS, [follow the guide here](../../indexer/run_publish/publish.md#publish-your-subquery-project-to-ipfs).
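Point 3.1 above is about write ordering: `Promise.all()` lets store operations complete in whatever order the runtime schedules them, while sequential `await`s pin the order. A sketch (the ids and the `save` callback stand in for real entities and the store API):

```ts
// Deterministic store writes: await each save in a fixed order instead of
// racing them. The callback stands in for an entity's save().
async function saveSequentially(
  ids: string[],
  save: (id: string) => Promise<void>,
): Promise<string[]> {
  const written: string[] = [];
  for (const id of ids) {
    await save(id); // next write starts only after this one settles
    written.push(id);
  }
  return written;
  // Order-unstable alternative to avoid in mapping functions:
  //   await Promise.all(ids.map(save));
}

const writeOrder = await saveSequentially(["tx1", "tx2", "tx3"], async () => {});
// writeOrder is always ["tx1", "tx2", "tx3"], run after run.
```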
## Deploying your Project @@ -28,7 +28,7 @@ The SubQuery Network is a public permission-less network, anyone can deploy thei ![Explorer - Publish Button](/assets/img/network/architect_publish.png) -You will need to publish your project to IPFS first, [follow the guide here](../../run_publish/publish.md#publish-your-subquery-project-to-ipfs). Please enter the project CID and give your project a nice name. +You will need to publish your project to IPFS first, [follow the guide here](../../indexer/run_publish/publish.md#publish-your-subquery-project-to-ipfs). Please enter the project CID and give your project a nice name. ![Publish - Enter CID](/assets/img/network/architect_publish_ipfs.png) @@ -58,7 +58,7 @@ You can easily make changes to your project or deploy a new version by accessing Firstly, you can publish a new version by clicking "Deploy New Version". This will let Node Operators know and allow them to upgrade to the new version of your Project. For the deployment you should provide: -- the deployment CID, you will need to publish your project to IPFS first, [follow the guide here](../../run_publish/publish.md#publish-your-subquery-project-to-ipfs) +- the deployment CID, you will need to publish your project to IPFS first, [follow the guide here](../../indexer/run_publish/publish.md#publish-your-subquery-project-to-ipfs) - a version number, we recommend it follows [semantic versioning rules](https://semver.org/) - check the box if you want to make this version recommended, this means that you are recommending Node Operators to immediately update to it. Don't check this if it's a test build or if it has major breaking changes - the deployment description, which might include additional information for Node Operators about migration steps or breaking changes in this version
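A version number following the semantic-versioning recommendation above can be sanity-checked before deployment. This sketch validates only the `MAJOR.MINOR.PATCH` core; pre-release and build-metadata suffixes from the full SemVer grammar are omitted:

```ts
// Minimal MAJOR.MINOR.PATCH check; no leading zeros, no suffixes.
function isCoreSemver(version: string): boolean {
  return /^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)$/.test(version);
}
// isCoreSemver("1.2.0") is true; isCoreSemver("1.2") is false
```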