# docs: fix typos #1051

Open: wants to merge 1 commit into `main`.

## DOD.md (6 changes: 3 additions & 3 deletions)
@@ -1,7 +1,7 @@
# Defintion of Delivery

-The Definition of Delivery or definition of done is provide to give developers
-on direction of what is required to deliver a issue. It also helps code
+The Definition of Delivery or definition of done is provided to give developers
+direction on what is required to deliver an issue. It also helps code
reviewers with what to review when reviewing an issue.

The three core concepts that all issues should contain for developers are:
@@ -45,5 +45,5 @@ Does the code accomplish the issue?

Is there automated tests that reliably verify any logic added to the delivery?

-Is there enough documentation to satify my future self (or someone else?) to
+Is there enough documentation to satisfy my future self (or someone else?) to
maintain this issue?

## connect/README.md (20 changes: 10 additions & 10 deletions)
@@ -93,13 +93,13 @@ let results = await results({

Parameters

-| Name    | Description                                                        | Optional? |
-| ------- | ------------------------------------------------------------------ | --------- |
-| process | the process identifier                                             | false     |
-| from    | cursor starting point                                              | true      |
-| to      | cursor ending point                                                | true      |
-| sort    | list results in decending or ascending order, default will be ASC  | true      |
-| limit   | the number of results to return (default: 25)                      | true      |
+| Name    | Description                                                         | Optional? |
+| ------- | ------------------------------------------------------------------- | --------- |
+| process | the process identifier                                              | false     |
+| from    | cursor starting point                                               | true      |
+| to      | cursor ending point                                                 | true      |
+| sort    | list results in descending or ascending order, default will be ASC  | true      |
+| limit   | the number of results to return (default: 25)                       | true      |

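For reference, a minimal sketch of how these parameters combine (illustrative, not part of this diff); the process id is a placeholder, and the sketch assumes the paginated response exposes a `cursor` on each edge, which is what `from` and `to` consume:

```js
import { results } from "@permaweb/aoconnect";

// first page of results, oldest first ("<process-id>" is a placeholder)
const firstPage = await results({
  process: "<process-id>",
  sort: "ASC",
  limit: 25,
});

// resume from the cursor of the last edge to fetch the next page
const nextPage = await results({
  process: "<process-id>",
  from: firstPage.edges.at(-1)?.cursor,
  sort: "ASC",
  limit: 25,
});
```
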
#### `message`

@@ -181,7 +181,7 @@ connect() == { spawn, message, result }

#### `monitor`

-When using cron messages, ao users need a way to start injesting the messages,
+When using cron messages, ao users need a way to start ingesting the messages,
using this monitor method, ao users can initiate the subscription service for
cron messages.

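A rough sketch of the `monitor` call described above (illustrative, not part of this diff); it assumes a JWK wallet file on disk, and both the wallet path and the process id are placeholders:

```js
import { readFileSync } from "node:fs";
import { createDataItemSigner, monitor } from "@permaweb/aoconnect";

// load a JWK wallet to sign the subscription request (path is a placeholder)
const wallet = JSON.parse(readFileSync("./wallet.json", "utf-8"));

// initiate the subscription service for the process's cron messages
const result = await monitor({
  process: "<process-id>",
  signer: createDataItemSigner(wallet),
});
```
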
@@ -229,7 +229,7 @@ const processId = await assign({
});
```

-Create a Assignment for an `ao` process with an L1 transaction
+Create an Assignment for an `ao` process with an L1 transaction

```js
import { assign } from "@permaweb/aoconnect";
@@ -355,7 +355,7 @@ unit test.

Because the contract wrapping is done by the business logic itself, it also
ensures the stubs we use in our unit tests accurately implement the contract
-API. Thus our unit tests are simoultaneously contract tests.
+API. Thus our unit tests are simultaneously contract tests.

`client` contains implementations, of the contracts in `dal.js`, for various
platforms. The unit tests for the implementations in `client` also import

## dev-cli/README.md (6 changes: 3 additions & 3 deletions)
@@ -24,7 +24,7 @@ SmartWeaveContracts written in [Lua](https://www.lua.org/) and
- [For Developers](#for-developers)
- [Contributing](#contributing)
- [Publish a new Version of the CLI](#publish-a-new-version-of-the-cli)
-- [Need a to also Publish a new Docker Image version?](#need-a-to-also-publish-a-new-docker-image-version)
+- [Need to also Publish a new Docker Image version?](#need-to-also-publish-a-new-docker-image-version)

<!-- tocstop -->

@@ -89,7 +89,7 @@ This will create a new directory, if needed, named `{myproject}`

### Run a Lua Repl

-This gives you a Lua interpeter
+This gives you a Lua interpreter

```sh
ao lua
@@ -179,7 +179,7 @@ Workflow Dispatch that will:
> For now, if Turbo needs more funding, contact `@TillaTheHun0`. (Maybe
> eventually we add a Workflow Dispatch script to automatically fund Turbo)

-#### Need a to also Publish a new Docker Image version?
+#### Need to also Publish a new Docker Image version?

If you need to also publish a new Docker Image, you will currently need to do
this manually.

## servers/cu/README.md (8 changes: 4 additions & 4 deletions)
@@ -73,7 +73,7 @@ There are a few environment variables that you can set. Besides
- `DB_MODE`: Whether the database being used by the CU is embedded within the CU
or is remote to the CU. Can be either `embedded` or `remote` (defaults to
`embedded`)
-- `DB_URL`: the name of the embdeeded database (defaults to `ao-cache`)
+- `DB_URL`: the name of the embedded database (defaults to `ao-cache`)
- `PROCESS_WASM_MEMORY_MAX_LIMIT`: The maximum `Memory-Limit`, in bytes,
supported for `ao` processes (defaults to `1GB`)
- `PROCESS_WASM_COMPUTE_MAX_LIMIT`: The maximum `Compute-Limit`, in bytes,
@@ -105,7 +105,7 @@ There are a few environment variables that you can set. Besides
- `PROCESS_MEMORY_CACHE_FILE_DIR`: The directory to store drained process memory
(Defaults to the os temp directory)
- `PROCESS_MEMORY_CACHE_CHECKPOINT_INTERVAL`: The interval at which the CU
-should Checkpoint all processes stored in it's cache. Set to `0` to disabled
+should Checkpoint all processes stored in its cache. Set to `0` to disable
(defaults to `0`)
- `PROCESS_CHECKPOINT_CREATION_THROTTLE`: The amount of time, in milliseconds,
that the CU should wait before creating a process `Checkpoint` IFF it has
@@ -226,7 +226,7 @@ to stub, and business logic easy to unit tests for correctness.

Because the contract wrapping is done by the business logic itself, it also
ensures the stubs we use in our unit tests accurately implement the contract
-API. Thus our unit tests are simoultaneously contract tests.
+API. Thus our unit tests are simultaneously contract tests.

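As an illustration of the contract-wrapping pattern this paragraph describes, here is a hypothetical sketch (not the repo's actual `dal.js` contracts), using a zod function schema so that the real client and any unit-test stub are validated against the same API:

```js
import { z } from "zod";

// hypothetical dal.js contract: what any "load process" adapter must satisfy
const loadProcessSchema = z.function()
  .args(z.object({ processId: z.string() }))
  .returns(z.promise(z.object({ id: z.string(), owner: z.string() })));

// the business logic wraps whatever implementation it is injected with,
// so a stub in a unit test is held to the same contract as the real client
export function loadProcessWith({ loadProcess }) {
  const checked = loadProcessSchema.implement(loadProcess);
  return (processId) => checked({ processId });
}
```
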
#### Driven Adapters

@@ -283,7 +283,7 @@ fulfill incoming requests, and egress to other `ao` Units over `HTTP(S)`.
It will also need some sort of file system available, whether it be persistent
or ephemeral.

-So in summary, this `ao` Compute Unit system requirments are:
+So in summary, this `ao` Compute Unit's system requirements are:

- a Containerization Environment or `node` to run the application
- a Filesystem to store files and an embedded database

## servers/su/README.md (8 changes: 4 additions & 4 deletions)
@@ -56,7 +56,7 @@ Create a .env file with the following variables, or set them in the OS:
- `ARWEAVE_URL_LIST` list of arweave urls that have tx access aka url/txid returns the tx. Used by gateway calls for checking transactions etc...

## Experimental environment variables
-To use the expirimental fully local storage system set the following evnironment variables.
+To use the experimental fully local storage system, set the following environment variables.
- `USE_LOCAL_STORE` if true the SU will operate on purely RocksDB
- `SU_FILE_DB_DIR` a local RocksDB directory of bundles
- `SU_INDEX_DB_DIR` a local index of processes and messages
@@ -166,7 +166,7 @@ docker run --env-file .env.router -v ./.wallet.json:/app/.wallet.json -v ./sched
Over time the su database has evolved. It started as only Postgres then went to Postgres + RocksDB for performance enhancement. It now has a purely RocksDB implementation. For existing su's that already have data, you can follow the below to migration processes to bring it up to date to the latest implementation.

### Migrating data to disk for an existing su instance
-If a su has been running using postgres for sometime there may be performance issues. Writing to and reading files from disk has been added. In order to switch this on set the environment variables
+If a su has been running using postgres for some time, there may be performance issues. Writing to and reading files from disk has been added. In order to switch this on, set the environment variables

- `USE_DISK` whether or not to read and write binary files from/to the disk/rocksdb. If the su has already been running for a while the data will need to be migrated using the mig binary before turning this on.
- `SU_DATA_DIR` the data directory on disk where the su will read from and write binaries to
@@ -231,6 +231,6 @@ Lastly the SU and SU-R require a postgresql database for each node that is alrea
In summary the SU + SU-R requirements are
- A docker environment to run 2 different dockerfiles
- A server pointing to port 9000
-- Ablity to define and modify secrect files availabe in the same path as the dockerfiles, .wallet.json and .schedulers.json
-- Environement variables available in the container.
+- Ability to define and modify secret files available in the same path as the dockerfiles, .wallet.json and .schedulers.json
+- Environment variables available in the container.
- a postgresql database per node, defined with a database called "su" at the time of deployment.