diff --git a/DOD.md b/DOD.md
index e2a1c5b28..41b8a182e 100644
--- a/DOD.md
+++ b/DOD.md
@@ -1,7 +1,7 @@
-# Defintion of Delivery
+# Definition of Delivery
 
-The Definition of Delivery or definition of done is provide to give developers
-on direction of what is required to deliver a issue. It also helps code
+The Definition of Delivery or definition of done is provided to give developers
+direction on what is required to deliver an issue. It also helps code
 reviewers with what to review when reviewing an issue.
 
 The three core concepts that all issues should contain for developers are:
@@ -45,5 +45,5 @@
-Does the code accomplish the issue? Is there automated tests that reliably
+Does the code accomplish the issue? Are there automated tests that reliably
 verify any logic added to the delivery?
 
-Is there enough documentation to satify my future self (or someone else?) to
+Is there enough documentation to satisfy my future self (or someone else?) to
 maintain this issue?
diff --git a/connect/README.md b/connect/README.md
index 5843a1111..61bbdf162 100644
--- a/connect/README.md
+++ b/connect/README.md
@@ -93,13 +93,13 @@ let results = await results({
 
 Parameters
 
-| Name    | Description                                                       | Optional? |
-| ------- | ----------------------------------------------------------------- | --------- |
-| process | the process identifier                                            | false     |
-| from    | cursor starting point                                             | true      |
-| to      | cursor ending point                                               | true      |
-| sort    | list results in decending or ascending order, default will be ASC | true      |
-| limit   | the number of results to return (default: 25)                     | true      |
+| Name    | Description                                                        | Optional? |
+| ------- | ------------------------------------------------------------------ | --------- |
+| process | the process identifier                                             | false     |
+| from    | cursor starting point                                              | true      |
+| to      | cursor ending point                                                | true      |
+| sort    | list results in descending or ascending order, default will be ASC | true      |
+| limit   | the number of results to return (default: 25)                      | true      |
 
 #### `message`
 
@@ -181,7 +181,7 @@ connect() == { spawn, message, result }
 
 #### `monitor`
 
-When using cron messages, ao users need a way to start injesting the messages,
-using this monitor method, ao users can initiate the subscription service for
+When using cron messages, ao users need a way to start ingesting the messages.
+Using this monitor method, ao users can initiate the subscription service for
 cron messages.
 
@@ -229,7 +229,7 @@ const processId = await assign({
 });
 ```
 
-Create a Assignment for an `ao` process with an L1 transaction
+Create an Assignment for an `ao` process with an L1 transaction
 
 ```js
 import { assign } from "@permaweb/aoconnect";
@@ -355,7 +355,7 @@ unit test.
 
 Because the contract wrapping is done by the business logic itself, it also
 ensures the stubs we use in our unit tests accurately implement the contract
-API. Thus our unit tests are simoultaneously contract tests.
+API. Thus our unit tests are simultaneously contract tests.
 
-`client` contains implementations, of the contracts in `dal.js`, for various
+`client` contains implementations of the contracts in `dal.js`, for various
 platforms. The unit tests for the implementations in `client` also import
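To make the paging parameters in the `results` table above concrete, a minimal usage sketch follows; the process id is a placeholder, and the paged `edges`/`cursor` response shape is an assumption based on the cursor parameters documented in the table, not something the table itself specifies.

```js
import { results } from "@permaweb/aoconnect";

// Fetch the first page of results for a process (placeholder id).
const firstPage = await results({
  process: "process-id",
  sort: "ASC", // the default
  limit: 25,   // the default page size
});

// Assuming each edge carries a cursor, feed the last cursor back in
// as `from` to request the next page.
const lastCursor = firstPage.edges[firstPage.edges.length - 1].cursor;
const nextPage = await results({
  process: "process-id",
  from: lastCursor,
  limit: 25,
});
```
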
diff --git a/dev-cli/README.md b/dev-cli/README.md
index 6cc8f8a7f..00763a4bc 100644
--- a/dev-cli/README.md
+++ b/dev-cli/README.md
@@ -24,7 +24,7 @@ SmartWeaveContracts written in [Lua](https://www.lua.org/) and
 
 - [For Developers](#for-developers)
   - [Contributing](#contributing)
     - [Publish a new Version of the CLI](#publish-a-new-version-of-the-cli)
-      - [Need a to also Publish a new Docker Image version?](#need-a-to-also-publish-a-new-docker-image-version)
+      - [Need to also Publish a new Docker Image version?](#need-to-also-publish-a-new-docker-image-version)
 
@@ -89,7 +89,7 @@ This will create a new directory, if needed, named `{myproject}`
 ### Run a Lua Repl
 
-This gives you a Lua interpeter
+This gives you a Lua interpreter
 
 ```sh
 ao lua
 ```
@@ -179,7 +179,7 @@ Workflow Dispatch that will:
 > For now, if Turbo needs more funding, contact `@TillaTheHun0`. (Maybe
 > eventually we add a Workflow Dispatch script to automatically fund Turbo)
 
-#### Need a to also Publish a new Docker Image version?
+#### Need to also Publish a new Docker Image version?
 
 If you need to also publish a new Docker Image, you will currently need to do
 this manually.
diff --git a/servers/cu/README.md b/servers/cu/README.md
index ca272f30c..01189aad4 100644
--- a/servers/cu/README.md
+++ b/servers/cu/README.md
@@ -73,7 +73,7 @@ There are a few environment variables that you can set. Besides
 - `DB_MODE`: Whether the database being used by the CU is embedded within the
   CU or is remote to the CU. Can be either `embedded` or `remote` (defaults to
   `embedded`)
-- `DB_URL`: the name of the embdeeded database (defaults to `ao-cache`)
+- `DB_URL`: the name of the embedded database (defaults to `ao-cache`)
 - `PROCESS_WASM_MEMORY_MAX_LIMIT`: The maximum `Memory-Limit`, in bytes,
   supported for `ao` processes (defaults to `1GB`)
 - `PROCESS_WASM_COMPUTE_MAX_LIMIT`: The maximum `Compute-Limit`, in bytes,
@@ -105,7 +105,7 @@ There are a few environment variables that you can set. Besides
 - `PROCESS_MEMORY_CACHE_FILE_DIR`: The directory to store drained process
   memory (Defaults to the os temp directory)
 - `PROCESS_MEMORY_CACHE_CHECKPOINT_INTERVAL`: The interval at which the CU
-  should Checkpoint all processes stored in it's cache. Set to `0` to disabled
+  should Checkpoint all processes stored in its cache. Set to `0` to disable
   (defaults to `0`)
 - `PROCESS_CHECKPOINT_CREATION_THROTTLE`: The amount of time, in milliseconds,
   that the CU should wait before creating a process `Checkpoint` IFF it has
@@ -226,7 +226,7 @@ to stub, and business logic easy to unit tests for correctness.
 
 Because the contract wrapping is done by the business logic itself, it also
 ensures the stubs we use in our unit tests accurately implement the contract
-API. Thus our unit tests are simoultaneously contract tests.
+API. Thus our unit tests are simultaneously contract tests.
 
 #### Driven Adapters
@@ -283,7 +283,7 @@ fulfill incoming requests, and egress to other `ao` Units over `HTTP(S)`.
 It will also need some sort of file system available, whether it be persistent
 or ephemeral.
 
-So in summary, this `ao` Compute Unit system requirments are:
+So in summary, this `ao` Compute Unit's system requirements are:
 
 - a Containerization Environment or `node` to run the application
 - a Filesystem to store files and an embedded database
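The `0`-to-disable convention corrected above is easy to misread, so here is a hypothetical sketch of how a CU might interpret `PROCESS_MEMORY_CACHE_CHECKPOINT_INTERVAL`; the function name and the milliseconds unit are illustrative assumptions, not the CU's actual internals.

```js
// Illustrative stand-in for whatever the CU uses to create a
// Checkpoint for each process held in its in-memory cache.
function checkpointAllProcesses() {
  /* checkpoint every cached process */
}

// A positive value (assumed to be milliseconds, like the throttle
// variable above) checkpoints the cache on that interval; 0, the
// default, disables interval checkpointing entirely.
const interval = Number(
  process.env.PROCESS_MEMORY_CACHE_CHECKPOINT_INTERVAL ?? 0,
);

if (interval > 0) {
  setInterval(checkpointAllProcesses, interval);
}
```
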
diff --git a/servers/su/README.md b/servers/su/README.md
index 620ff103f..866f6a89c 100644
--- a/servers/su/README.md
+++ b/servers/su/README.md
@@ -56,7 +56,7 @@ Create a .env file with the following variables, or set them in the OS:
 - `ARWEAVE_URL_LIST` list of arweave urls that have tx access aka url/txid returns the tx. Used by gateway calls for checking transactions etc...
 
 ## Experimental environment variables
-To use the expirimental fully local storage system set the following evnironment variables.
+To use the experimental fully local storage system, set the following environment variables.
 - `USE_LOCAL_STORE` if true the SU will operate on purely RocksDB
 - `SU_FILE_DB_DIR` a local RocksDB directory of bundles
 - `SU_INDEX_DB_DIR` a local index of processes and messages
@@ -166,7 +166,7 @@ docker run --env-file .env.router -v ./.wallet.json:/app/.wallet.json -v ./sched
-Over time the su database has evolved. It started as only Postgres then went to Postgres + RocksDB for performance enhancement. It now has a purely RocksDB implementation. For existing su's that already have data, you can follow the below to migration processes to bring it up to date to the latest implementation.
+Over time the su database has evolved. It started as only Postgres, then went to Postgres + RocksDB for a performance enhancement. It now has a purely RocksDB implementation. For existing su's that already have data, you can follow the two migration processes below to bring it up to date with the latest implementation.
 
 ### Migrating data to disk for an existing su instance
-If a su has been running using postgres for sometime there may be performance issues. Writing to and reading files from disk has been added. In order to switch this on set the environment variables
+If a su has been running using postgres for some time, there may be performance issues. Writing files to and reading files from disk has been added. In order to switch this on, set the environment variables
 - `USE_DISK` whether or not to read and write binary files from/to the disk/rocksdb. If the su has already been running for a while the data will need to be migrated using the mig binary before turning this on.
 - `SU_DATA_DIR` the data directory on disk where the su will read from and write binaries to
@@ -231,6 +231,6 @@ Lastly the SU and SU-R require a postgresql database for each node that is alrea
 In summary the SU + SU-R requirements are
 - A docker environment to run 2 different dockerfiles
 - A server pointing to port 9000
-- Ablity to define and modify secrect files availabe in the same path as the dockerfiles, .wallet.json and .schedulers.json
-- Environement variables available in the container.
+- Ability to define and modify secret files available in the same path as the dockerfiles, .wallet.json and .schedulers.json
+- Environment variables available in the container.
 - a postgresql database per node, defined with a database called "su" at the time of deployment.
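The three experimental local-storage variables above travel together, so a short `.env` sketch in the style the SU README describes may help; the directory paths are placeholders, not recommended locations.

```sh
# Experimental fully local storage: all three variables set together.
USE_LOCAL_STORE=true          # run the SU purely on RocksDB
SU_FILE_DB_DIR=/su/file-db    # placeholder path: RocksDB bundle store
SU_INDEX_DB_DIR=/su/index-db  # placeholder path: process/message index
```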