diff --git a/website/blog/2021-07-16-welcome-to-the-new-orbitjs.md b/website/blog/2021-07-16-welcome-to-the-new-orbitjs.md index 8397e809..d71701b0 100644 --- a/website/blog/2021-07-16-welcome-to-the-new-orbitjs.md +++ b/website/blog/2021-07-16-welcome-to-the-new-orbitjs.md @@ -18,7 +18,7 @@ with code, preferably even in the same PRs. I'm especially excited to announce that we are finally publishing API reference docs, generated with [TypeDoc](https://typedoc.org/), alongside the Orbit guides. The first API docs available are for the upcoming v0.17, which can be -accessed directly [here](/docs/next/api) or by choosing `Next` from the +accessed directly [here](/docs/api/index.md) or by choosing `Next` from the documentation version selector in the upper right. While the current API docs are much better than nothing, the prose and examples diff --git a/website/blog/2022-01-31-v0-17-released.md b/website/blog/2022-01-31-v0-17-released.md new file mode 100644 index 00000000..e48920e1 --- /dev/null +++ b/website/blog/2022-01-31-v0-17-released.md @@ -0,0 +1,136 @@ +--- +title: v0.17 is finally final! +author: Dan Gebhardt +author_title: Orbit.js Creator +author_url: https://github.com/dgeb +author_image_url: https://avatars.githubusercontent.com/u/29122?v=4 +tags: [release] +--- + +After two years, 28 beta releases, and over 400 commits, Orbit v0.17 is finally +ready! 🎉 + +Orbit's docs have been updated to reflect all the changes. If you are upgrading +from v0.16, the place to start is the overview of [what's +new](/docs/whats-new). + +Some highlights of this release include: + +* **New API reference docs** — At long last, Orbit v0.17 has [API + docs](/docs/api) for all its packages. These docs are generated by + [TypeDoc](https://typedoc.org/) from Orbit's typings and code annotations. + Although a bit sparse for now, this reference should only improve with time + and help from the community. + +* **Improved, strict typings throughout** — By improving the quality of + Orbit's TypeScript, we have been able to refactor more confidently, provide + better documentation, and make for a better developer experience all around. + +* **Extraction of `@orbit/records` from `@orbit/data`** — As part of the + push to improve typings, it became clear that [`@orbit/data`](/docs/api/data) + contains a number of interfaces and classes that could prove useful for _any_ + type of data, not just records. Thus, record-specific types and classes + were extracted into a new package: [`@orbit/records`](/docs/api/records). + Apologies for the breaking changes with module imports. We wanted to get this + churn out of the way before the semver constraints that will come with v1.0. + +* **Multi-expression queries** — Just as transforms can contain multiple + operations, queries can now contain multiple expressions. This allows sources, + such as `JSONAPISource`, to optionally perform these expressions in parallel. + +* **Per-expression/operation options** — Along with the move to + multi-expression queries, we've introduced per-expression options. This can be + useful if, for instance, you want to specify a different target `url` per + expression. Similarly, transform operations can also each have their own + options. + +* **Full vs. data-only responses** — All requests (queries and updates) + can now be made with the `{ fullResponse: true }` option to receive responses + in the form `{ data, details, transforms, sources }`. `data` will include the + primary data that would be returned without the `fullResponse` option. 
+ `details` includes response details particular to the source, and `sources` + includes a named map of all the responses from downstream sources that + participated in this request. This allows you to access full response + documents, inverse operations, etc. _from the initial request call point_. + +* **Deprecation of `Pullable` and `Pushable` interfaces** — Now that + responses can include full processing details, everything that was unique to + the `push` and `pull` methods on source is redundant. The `Pullable` and + `Pushable` interfaces have been deprecated to focus on the more capable + `Queryable` and `Updatable` interfaces for making requests. + +* **Transform buffers for faster cache processing** — Record-cache-based + sources that interact with browser storage have had performance issues when + dealing with large datasets, especially when paired with read/write heavy + processors that ensure relationship tracking and correctness. A new paradigm + has been developed, the `RecordTransformBuffer`, that acts as a memory buffer + for these operations. For now, using this buffer is opt-in, with the `{ + useBuffer: true }` option. You'll be reminded to explicitly set this option to + either `true` or `false` until you do. Early users are reporting promising + results with IndexedDB, such as [performance boosts of > 20x with large + numbers of + operations](https://github.com/orbitjs/orbit/issues/798#issuecomment-800544909). + +* **New serializers** — Concepts of serialization have, up until now, been + very specific to usage by the `JSONAPISource`, and particularly the + `JSONAPISerializer` class. This class has been deprecated and replaced with a + series of composable serializers all built upon a simple and flexible + [`Serializer`](/docs/api/serializers/interfaces/Serializer) interface. This + interface, as well as some serializers for primitives (booleans, dates, + date-times, etc.) have been published in a new package, + [`@orbit/serializers`](/docs/api/serializers). And of course, new serializers + particular to JSON:API have been added to + [`@orbit/jsonapi`](/docs/api/jsonapi). + +* **New validators** — A common source of problems for Orbit developers + has been using data that is malformed or doesn't align with a schema's + expectations. This can cause confusing errors during processing by a cache or + downstream source. To address this problem, we're introducing "validators", + which are shipped in a new package [`@orbit/validators`](/docs/api/validators) + that includes some validators for primitive types. Validators that are + record-specific have also been included in + [`@orbit/records`](/docs/api/records). By default, each source will build its + own set of validators and use them automatically. You can instead share a + common set of validators via the `validatorFor` settings. And you can opt-out + of using validators entirely by configuring your sources with `{ autoValidate: + false }`. + +* **Record normalizers** — When building queries and transforms, some + scenarios have been more tedious than necessary: identifying records by a key + instead of `id`, for instance, or using a model class from a lib like + ember-orbit to reference a record instead of its json identity. A new + abstraction has been added to make query and transform builders more flexible: + record normalizers. Record normalizers implement the + [`RecordNormalizer`](/docs/api/records/interfaces/RecordNormalizer) interface + and convert record identities and/or data into a normalized form. 
The new base + normalizer now allows `{ type, key, value }` to be used anywhere that `{ type, + id }` identities can be used, which significantly reduces the annoyance of + working with remote keys. Other normalizers, such as the one provided by + ember-orbit, allow records to be referenced by model instances instead of + their json identities. + +* **Synchronous change tracking in memory forks** — Previously, memory + source forks behaved precisely like other memory sources: every trackable + update applied at the source level (and thus async). Now, the default (but + overrideable) behavior is to track changes at the cache level in forks. Thus + synchronous changes can be made to a forked cache and then merged back into + the base source. This better accommodates the most common use case for forks: + editing form data in isolation before merging coalesced changes back to the + base. + +* **Debug mode** — A new `debug` setting has been added to the `Orbit` + global, that toggles between using a more verbose, developer-friendly "debug" + mode of Orbit vs. a leaner, more performant production mode. Since debug mode + is enabled by default, you'll need to set `Orbit.debug = false` in order to + eliminate deprecation warnings and other debug-friendly messaging. + +* **Increased reliance on The Platform™** — Orbit's codebase continues to + evolve with the web, adopting new ES language and web platform features as + they are released. Custom utilities have been gradually deprecated and phased + out of the codebase (e.g. `isArray` -> `Array.isArray`), new language features + such as nullish coalescing and optional chaining have been adopted, and + platform features such as `crypto.randomUUID` have been adopted (with a + fallback implementation if unavailable). + +Thanks for your patience with this release. We expect that v0.18 will not take +nearly as long as v0.17 did. In fact, we plan to use this next release primarily +to remove deprecated interfaces in preparation for a lean and focused v1.0 +release. diff --git a/website/docs/coordination.md b/website/docs/coordination.md index 8155716b..9f6bb061 100644 --- a/website/docs/coordination.md +++ b/website/docs/coordination.md @@ -3,8 +3,9 @@ title: Coordination strategies --- Orbit provides another layer of abstraction on top of direct event observation -and handling: a `Coordinator`. A coordinator manages a set of sources to which -it applies a set of coordination strategies. +and handling: a [`Coordinator`](./api/coordinator/classes/Coordinator.md). A +coordinator manages a set of sources to which it applies a set of coordination +strategies. ## Why use a coordinator? @@ -31,8 +32,8 @@ coordination strategies between sources.
A coordinator can be created with sources and strategies: -```javascript -import Coordinator from "@orbit/coordinator"; +```typescript +import Coordinator from '@orbit/coordinator'; const coordinator = new Coordinator({ sources: [memory, backup], @@ -43,8 +44,8 @@ const coordinator = new Coordinator({ Or sources and strategies can be added / removed any time the coordinator is inactive: -```javascript -import Coordinator from "@orbit/coordinator"; +```typescript +import Coordinator from '@orbit/coordinator'; const coordinator = new Coordinator(); @@ -58,20 +59,20 @@ coordinator.addStrategy(backupMemorySync); A coordinator won't actually do anything until it's been "activated", which is an async process that activates all of the coordinator's strategies: -```javascript +```typescript coordinator.activate().then(() => { - console.log("Coordinator is active"); + console.log('Coordinator is active'); }); ``` Note that you can assign a log-level when activating a coordinator, and it will be applied to all of the coordinator's strategies: -```javascript -import { LogLevel } from "@orbit/coordinator"; +```typescript +import { LogLevel } from '@orbit/coordinator'; coordinator.activate({ logLevel: LogLevel.Info }).then(() => { - console.log("Coordinator will be chatty"); + console.log('Coordinator will be chatty'); }); ``` @@ -82,9 +83,9 @@ Possible log levels include `None`, `Errors`, `Warnings`, and `Info`. If you want to temporarily disable a coordinator or change its settings, you can deactivate it: -```javascript +```typescript coordinator.deactivate().then(() => { - console.log("Coordinator is inactive"); + console.log('Coordinator is inactive'); }); ``` @@ -114,17 +115,17 @@ request strategy should be defined with: Here are some example strategies that query / update a remote server pessimistically whenever a memory source is queried / updated: -```javascript -import { RequestStrategy } from "@orbit/coordinator"; +```typescript +import { RequestStrategy } from '@orbit/coordinator'; // Query the remote server whenever the memory source is queried coordinator.addStrategy( new RequestStrategy({ - source: "memory", - on: "beforeQuery", + source: 'memory', + on: 'beforeQuery', - target: "remote", - action: "pull", + target: 'remote', + action: 'query', blocking: true }) @@ -133,11 +134,11 @@ coordinator.addStrategy( // Update the remote server whenever the memory source is updated coordinator.addStrategy( new RequestStrategy({ - source: "memory", - on: "beforeUpdate", + source: 'memory', + on: 'beforeUpdate', - target: "remote", - action: "push", + target: 'remote', + action: 'update', blocking: true }) @@ -148,24 +149,24 @@ It's possible to apply a filter function to a strategy so that it only applies to certain data. For instance, the following filter limits which queries should be handled by a remote server: -```javascript -import { RequestStrategy } from "@orbit/coordinator"; +```typescript +import { RequestStrategy } from '@orbit/coordinator'; // Only forward requests for planets on to the remote server coordinator.addStrategy( new RequestStrategy({ - source: "memory", - on: "beforeQuery", + source: 'memory', + on: 'beforeQuery', - target: "remote", - action: "pull", + target: 'remote', + action: 'pull', blocking: true, filter(query) { return ( - query.expressions.op === "findRecords" && - query.expressions.type === "planet" + query.expressions.op === 'findRecords' && + query.expressions.type === 'planet' ); } }) @@ -188,14 +189,14 @@ on the `target`. 
The following strategy synchronizes any changes to the `remote` source with a `memory` source: -```javascript -import { SyncStrategy } from "@orbit/coordinator"; +```typescript +import { SyncStrategy } from '@orbit/coordinator'; // Sync all changes received from the remote server to the memory source coordinator.addStrategy( new SyncStrategy({ - source: "remote", - target: "memory", + source: 'remote', + target: 'memory', blocking: true }) ); @@ -211,8 +212,8 @@ An event logging strategy can be applied to log events on all sources to the console. By default, all events will be logged on all sources registered to a coordinator: -```javascript -import { EventLoggingStrategy } from "@orbit/coordinator"; +```typescript +import { EventLoggingStrategy } from '@orbit/coordinator'; coordinator.addStrategy(new EventLoggingStrategy()); ``` @@ -220,10 +221,10 @@ coordinator.addStrategy(new EventLoggingStrategy()); You may wish to only observe events on certain interfaces, which can be specified as follows: -```javascript +```typescript coordinator.addStrategy( new EventLoggingStrategy({ - interfaces: ["updatable", "pushable", "syncable"] + interfaces: ['updatable', 'pushable', 'syncable'] }) ); ``` @@ -234,10 +235,10 @@ Valid interfaces include `updatable`, `queryable`, `pushable`, `pullable`, and Furthermore, you may wish to only observe certain sources, which can be specified by name: -```javascript +```typescript coordinator.addStrategy( new EventLoggingStrategy({ - sources: ["remote", "memory"] + sources: ['remote', 'memory'] }) ); ``` @@ -256,18 +257,18 @@ all. To add a log truncation strategy that applies to all sources: -```javascript -import { LogTruncationStrategy } from "@orbit/coordinator"; +```typescript +import { LogTruncationStrategy } from '@orbit/coordinator'; coordinator.addStrategy(new LogTruncationStrategy()); ``` To limit the strategy to apply to only specific sources: -```javascript +```typescript coordinator.addStrategy( new LogTruncationStrategy({ - sources: ["backup", "memory"] + sources: ['backup', 'memory'] }) ); ``` @@ -305,10 +306,10 @@ be too: ```ts coordinator.addStrategy( new RequestStrategy({ - source: "memory", - target: "remote", - on: "beforeQuery", - action: "query", + source: 'memory', + target: 'remote', + on: 'beforeQuery', + action: 'query', blocking: true, passHints: true }) @@ -326,8 +327,8 @@ You'll also want to create a blocking `SyncStrategy` that syncs any transforms a ```ts coordinator.addStrategy( new SyncStrategy({ - source: "remote", - target: "memory", + source: 'remote', + target: 'memory', blocking: true }) ); diff --git a/website/docs/data-flows.md b/website/docs/data-flows.md index c8f9968b..8cb3614c 100644 --- a/website/docs/data-flows.md +++ b/website/docs/data-flows.md @@ -21,8 +21,8 @@ Orbit divides the movement of data into two different "flows": Every source interface has events and methods that correspond with one of these flows: -- The `Updatable`, `Queryable`, `Pushable`, and `Pullable` interfaces all - participate in the request flow. +- The `Updatable` and `Queryable` interfaces all participate in the request + flow. - The `Syncable` interface participates in the sync flow. @@ -41,8 +41,7 @@ listener for one source that triggers actions on another. Let's take a look at what events can trigger other actions: -- Update events (`beforeUpdate`, `update`, `beforePush`, `push`) can trigger - `push`. +- Update events (`beforeUpdate`, `update`) can trigger `update`. 
- Query events (`beforeQuery`, `query`, `beforePull`, `pull`) can trigger `pull`. @@ -53,26 +52,26 @@ Let's take a look at what events can trigger other actions: We can coordinate sources through simple event listeners, such as: -```javascript -memory.on("beforeUpdate", transform => { - remote.push(transform); +```typescript +memory.on('beforeUpdate', transform => { + remote.update(transform); }); ``` The above listener is "non-blocking" because it doesn't return anything to -the emitter. The call to `remote.push()` is async and may take a while to +the emitter. The call to `remote.update()` is async and may take a while to complete, so it will proceed in parallel with the `memory` source being updated. As an alternative, we can use a "blocking" strategy in our event listener by simply returning a promise: -```javascript -memory.on("beforeUpdate", transform => remote.push(transform)); +```typescript +memory.on('beforeUpdate', transform => remote.update(transform)); ``` -This will prevent the `memory` source from updating before the transform has been pushed -up to the `remote` source. An error in `remote.push` will cause `memory.update` -to error as well. +This will prevent the `memory` source from updating before the transform has +been pushed up to the `remote` source. An error in `remote.update` will cause +`memory.update` to error as well. ### Coordination guidelines @@ -89,5 +88,6 @@ Here are some guidelines for working with data flows: proceeding, use blocking connections for all the request and sync flows that may be involved. -Last but not least, it's recommended that you use a `Coordinator` instead of -manually configuring event listeners. Read on to understand why ... +Last but not least, it's recommended that you use a +[`Coordinator`](./api/coordinator/classes/Coordinator.md) instead of manually +configuring event listeners. Read on to understand why ... diff --git a/website/docs/data-sources.md b/website/docs/data-sources.md index 66289836..a0c17dbb 100644 --- a/website/docs/data-sources.md +++ b/website/docs/data-sources.md @@ -6,26 +6,26 @@ Sources provide access to data. They vary widely in their capabilities: some support interfaces for updating and/or querying records, while others simply broadcast changes. -Orbit includes a number of "standard" sources: +Orbit includes a number of "standard" record-specific sources: -- [@orbit/memory](https://www.npmjs.com/package/@orbit/memory) - an in-memory source -- [@orbit/jsonapi](https://www.npmjs.com/package/@orbit/jsonapi) - a JSON API client -- [@orbit/indexeddb](https://www.npmjs.com/package/@orbit/indexeddb) - for accessing IndexedDB databases -- [@orbit/local-storage](https://www.npmjs.com/package/@orbit/local-storage) - for accessing LocalStorage +- [@orbit/memory](./api/memory/index.md) - an in-memory source +- [@orbit/jsonapi](./api/jsonapi/index.md) - a JSON:API client +- [@orbit/indexeddb](./api/indexeddb/index.md) - for accessing IndexedDB +- [@orbit/local-storage](./api/local-storage/index.md) - for accessing LocalStorage Custom sources can also be written to access to virtually any source of data. ## Base class -Every source derives from an abstract base class, `Source`, which has a core -set of capabilities. +Every source derives from an abstract base class, +[`Source`](./api/data/classes/Source.md), which has a core set of capabilities. Sources must be instantiated with a schema. A schema provides sources with an understanding of the domain-specific data they manage. 
Let's create a simple schema and memory source: -```javascript +```typescript import { RecordSchema } from '@orbit/records'; import { MemorySource } from '@orbit/memory'; @@ -61,7 +61,7 @@ emitted when that source changes. Most sources emit additional events as well Let's look at an example of a simple mutation triggered by a call to `update`: -```javascript +```typescript // Define a record const jupiter = { type: 'planet', @@ -81,17 +81,15 @@ memory.on('transform', (t) => { console.log(`transforms: ${memory.transformLog.length}`); // Update the memory source with a transform that adds a record -memory - .update((t) => t.addRecord(jupiter)) - .then(() => { - // Verify that the transform log has grown - console.log(`transforms: ${memory.transformLog.length}`); - }); +await memory.update((t) => t.addRecord(jupiter)); + +// Verify that the transform log has grown +console.log(`transforms: ${memory.transformLog.length}`); ``` The following should be logged as a result: -```javascript +```typescript 'transforms: 0', 'transform', { @@ -123,14 +121,16 @@ Want to learn more about updating data? [See the guide](./updating-data.md) Orbit includes a number of standard interfaces that may be implemented by sources: -- `Updatable` - Allows sources to be updated via an `update` method that takes - a transform and returns the updated records that result. +- [`Updatable`](./api/data/interfaces/Updatable.md) - Allows sources to be + updated via an `update` method that takes a transform and returns the updated + records that result. -- `Queryable` - Allows sources to be queried via a `query` method that receives - a query expression and returns a recordset as a result. +- [`Queryable`](./api/data/interfaces/Queryable.md) - Allows sources to be + queried via a `query` method that receives a query expression and returns a + recordset as a result. -- `Syncable` - Applies a transform or transforms to a source via a `sync` - method. +- [`Syncable`](./api/data/interfaces/Syncable.md) - Applies a transform or + transforms to a source via a `sync` method. :::caution The `Pullable` and `Pushable` interfaces have been deprecated in @@ -174,20 +174,29 @@ be ignored by the emitter. ### Data flows -The `Updatable` and `Queryable` interfaces participate in the "request flow", in -which requests are made upstream and data flows back down. +The [`Updatable`](./api/data/interfaces/Updatable.md) and +[`Queryable`](./api/data/interfaces/Queryable.md) interfaces participate in the +"request flow", in which requests are made upstream and data flows back down. -The `Syncable` interface participates in the "sync flow", in which data flowing -downstream is synchronized with other sources. +The [`Syncable`](./api/data/interfaces/Syncable.md) interface participates in +the "sync flow", in which data flowing downstream is synchronized with other +sources. -> Want to learn more about data flows? [See the guide](./data-flows.md) +:::info +Want to learn more about data flows? [See the guide](./data-flows.md) +::: ### Developer-facing interfaces -Generally speaking, developers will primarily interact the `Updatable` and -`Queryable` interfaces. The `Syncable` interface is used primarily via +Generally speaking, developers will primarily interact the +[`Updatable`](./api/data/interfaces/Updatable.md) and +[`Queryable`](./api/data/interfaces/Queryable.md) interfaces. The +[`Syncable`](./api/data/interfaces/Syncable.md) interface is used primarily via coordination strategies. 
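To make that split concrete, here is a rough sketch (not part of the original guide) that reuses the `memory` source and `jupiter` record from above, plus a hypothetical second source named `backup`; the request flow is driven directly by application code, while the sync flow is normally wired up for you:

```typescript
// Request flow: application code calls `update` and `query` directly
await memory.update((t) => t.addRecord(jupiter));
const planets = await memory.query((q) => q.findRecords('planet'));

// Sync flow: `sync` is usually invoked by a coordination strategy (or an
// event listener) rather than called directly by application code
memory.on('transform', (transform) => backup.sync(transform));
```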
-> See guides that cover [querying data](./querying-data.md), -> [updating data](./updating-data.md), and -> [configuring coordination strategies](./coordination.md). +:::info +See more specific guides that cover: +* [Updating data](./updating-data.md) +* [Querying data](./querying-data.md) +* [Coordination strategies](./coordination.md) +::: diff --git a/website/docs/getting-started.md b/website/docs/getting-started.md index 4e1f2268..ff13240f 100644 --- a/website/docs/getting-started.md +++ b/website/docs/getting-started.md @@ -13,7 +13,7 @@ Schemas are used to define the models and relationships for an application. Let's start by defining a schema for our solar system's data: -```javascript +```typescript import { RecordSchema } from '@orbit/records'; const schema = new RecordSchema({ @@ -54,7 +54,7 @@ schema. Let's create an in-memory source as our first data source: -```javascript +```typescript import { MemorySource } from '@orbit/memory'; const memory = new MemorySource({ schema }); @@ -64,7 +64,7 @@ const memory = new MemorySource({ schema }); We can now load some data into our memory source and then query its contents: -```javascript +```typescript const earth = { type: 'planet', id: 'earth', @@ -108,7 +108,7 @@ console.log(planets); The following output should be logged: -```javascript +```typescript [ { type: 'planet', @@ -156,7 +156,7 @@ to the memory source's cache passes through a schema consistency check. Let's look at how the memory source is queried: -```javascript +```typescript let planets = await memory.query((q) => q.findRecords('planet').sort('name')); ``` @@ -170,7 +170,7 @@ involved). Here's an example of a more complex query that filters, sorts, and paginates: -```javascript +```typescript let planets = await memory.query((q) => q .findRecords('planet') @@ -188,7 +188,7 @@ In fact, if you want to just "peek" into the contents of the memory source, you can issue the same queries synchronously against the memory source's `Cache`. For example: -```javascript +```typescript // Results will be returned synchronously by querying the cache let planets = memory.cache.query((q) => q.findRecords('planet').sort('name')); ``` @@ -209,7 +209,7 @@ whole planet or moon! 😱 Let's create a browser storage source to keep data around locally: -```javascript +```typescript import { IndexedDBSource } from '@orbit/indexeddb'; const backup = new IndexedDBSource({ @@ -224,7 +224,7 @@ const backup = new IndexedDBSource({ Every time a source is transformed, it emits a `transform` event. It's simple to observe these events directly: -```javascript +```typescript memory.on('transform', (transform) => { console.log(transform); }); @@ -233,7 +233,7 @@ memory.on('transform', (transform) => { It's possible to pipe changes that occur in one source into another via the `sync` method: -```javascript +```typescript memory.on('transform', (transform) => { backup.sync(transform); }); @@ -244,7 +244,7 @@ promise. If we want to guarantee that transforms can't be applied to our memory source without also being backed up, we should return the promise in the event handler: -```javascript +```typescript memory.on('transform', (transform) => { return backup.sync(transform); }); @@ -252,7 +252,7 @@ memory.on('transform', (transform) => { Or more simply: -```javascript +```typescript memory.on('transform', (transform) => backup.sync(transform)); ``` @@ -269,7 +269,7 @@ sources to which it applies a set of coordination strategies. 
A coordinator could be configured to handle the above scenario as follows: -```javascript +```typescript import { Coordinator, SyncStrategy } from '@orbit/coordinator'; const coordinator = new Coordinator({ @@ -312,7 +312,7 @@ yet set up a process to restore that backed up data. If we want our app to restore all of its data from browser storage when it first boots, we could perform the following: -```javascript +```typescript let allRecords = await backup.query((q) => q.findRecords()); await memory.sync((t) => allRecords.map((r) => t.addRecord(r))); await coordinator.activate(); @@ -343,7 +343,7 @@ communicate with that server. We'll start by creating a new `remote` source: -```javascript +```typescript import { JSONAPISource } from '@orbit/jsonapi'; const remote = new JSONAPISource({ @@ -355,14 +355,14 @@ const remote = new JSONAPISource({ Next let's add the source to the coordinator: -```javascript +```typescript coordinator.addSource(remote); ``` And then we can add strategies to ensure that queries and updates made against the memory source are processed by the remote server: -```javascript +```typescript import { RequestStrategy, SyncStrategy } from '@orbit/coordinator'; // Query the remote server whenever the memory source is queried @@ -431,7 +431,7 @@ state that we'd like to reify if our application was closed unexpectedly. In order to persist this state, we can create a "bucket" that can be shared among our sources: -```javascript +```typescript import { LocalStorageBucket } from '@orbit/local-storage-bucket'; import { IndexedDBBucket, supportsIndexedDB } from '@orbit/indexeddb-bucket'; @@ -445,7 +445,7 @@ back to using a LocalStorage-based bucket if necessary. This `bucket` can be passed as a setting to any and all of our sources. For instance: -```javascript +```typescript const backup = new IndexedDBSource({ bucket, schema, diff --git a/website/docs/memory-sources.md b/website/docs/memory-sources.md index f446d19d..c1d52e55 100644 --- a/website/docs/memory-sources.md +++ b/website/docs/memory-sources.md @@ -13,8 +13,8 @@ capabilities of memory sources and their inner workings. ## Cache -Every memory source keeps its data in memory in a `Cache`, which is accessible via the -`memory.cache` member. +Every memory source keeps its data in memory in a `Cache`, which is accessible +via the `cache` getter. ### Immutable data @@ -39,31 +39,21 @@ operation processors. For instance, when a record is removed, it must be removed from all of its associated relationships. When a relationship with an inverse is removed, that inverse relationship must also be removed. -### Patches +### Updating a cache Typically you should not be applying changes directly to a cache. It's far -preferable to apply changes to the associated memory source through its `update` event. +preferable to apply changes to the associated memory source through its `update` +event. -However, caches can be modified via a `patch` method, that takes an `Operation` -or array of `Operation`s. - -The `PatchResult` that's returned has the following signature: - -```typescript -type PatchResultData = Record | RecordIdentity | null; - -interface PatchResult { - inverse: RecordOperation[]; - data: PatchResultData[]; -} -``` +However, caches can be modified via an `update` method, just like sources. All changes to a cache will be emitted as `patch` events. These events include the `Operation` that was applied as well as any data returned. 
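For example, a cache can be updated synchronously and its `patch` events observed directly. The sketch below is illustrative (the `pluto` record is hypothetical), and the listener only inspects the operation carried by each event:

```typescript
// Log every operation applied to the memory source's cache
memory.cache.on('patch', (operation) => {
  console.log('cache patch', operation.op);
});

// Apply a change synchronously, without going through `memory.update`
memory.cache.update((t) =>
  t.addRecord({
    type: 'planet',
    id: 'pluto',
    attributes: { name: 'Pluto' }
  })
);
```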
Its important to recognize that `patch` events will be emitted for _EVERY_ -change, including those made by operation processors. Therefore, if you need -a high fidelity log of changes to a memory source, observe its cache's `patch` events. +change, including those made by operation processors. Therefore, if you need a +high fidelity log of changes to a memory source, observe its cache's `patch` +events. ### Querying cache data @@ -75,9 +65,9 @@ While `memory.query` is asynchronous and thus returns results wrapped in a promise, `memory.cache.query` is synchronous and returns results directly. For example: -```javascript +```typescript // Results will be returned synchronously by querying the cache -let planets = memory.cache.query(q => q.findRecords("planet").sort("name")); +let planets = memory.cache.query((q) => q.findRecords('planet').sort('name')); ``` > By querying the cache instead of the memory source, you're not allowing other @@ -95,37 +85,37 @@ Let's look at an example of memory source forking and merging: ```typescript // start by adding two planets and a moon to the memory source -await memory.update(t => [ +await memory.update((t) => [ t.addRecord(earth), t.addRecord(venus), t.addRecord(theMoon) ]); -let planets = await memory.query(q => q.findRecords("planet").sort("name")); -console.log("original planets", planets); +let planets = await memory.query((q) => q.findRecords('planet').sort('name')); +console.log('original planets', planets); // fork the memory source let forkedMemorySource = memory.fork(); // add a planet and moons to the fork -await forkedMemorySource.update(t => [ +await forkedMemorySource.update((t) => [ t.addRecord(jupiter), t.addRecord(io), t.addRecord(europa) ]); // query the planets in the forked memory source -planets = await forkedMemorySource.query(q => - q.findRecords("planet").sort("name") +planets = await forkedMemorySource.query((q) => + q.findRecords('planet').sort('name') ); -console.log("planets in fork", planets); +console.log('planets in fork', planets); // merge the forked memory source back into the original memory source await memory.merge(forkedMemorySource); // query the planets in the original memory source -planets = await memory.query(q => q.findRecords("planet").sort("name")); -console.log("merged planets", planets); +planets = await memory.query((q) => q.findRecords('planet').sort('name')); +console.log('merged planets', planets); ``` It's important to note a few things about memory source forking and merging: @@ -138,9 +128,3 @@ It's important to note a few things about memory source forking and merging: - Merging a fork will gather the transforms applied since the fork point, coalesce the operations in those transforms into a single new transform, and then update the original memory source. - -
- -Want to experiment with memory source forking and merging? - -See [this example in CodeSandbox](https://codesandbox.io/s/40lo886nn7?previewwindow=console). diff --git a/website/docs/modeling-data.md b/website/docs/modeling-data.md index b862c917..653439c9 100644 --- a/website/docs/modeling-data.md +++ b/website/docs/modeling-data.md @@ -17,7 +17,7 @@ and relationships with other records. Here's an example record that represents a planet: -```javascript +```typescript { type: 'planet', id: 'earth', @@ -94,7 +94,7 @@ Remote IDs should be kept in a `keys` object at the root of a record. For example, the following record has a `remoteId` key that is assigned by a server: -```javascript +```typescript { type: 'planet', id: '34677136-c0b7-4015-b9e5-57f6fdd16bd2', @@ -105,10 +105,11 @@ server: ``` The `remoteId` key of `123456` can be mapped to the locally generated `id` using -a `KeyMap`, which can be shared by any sources that need access to the mapping. -When communicating with the server, `remoteId` might be serialized as `id`—such -a translation should occur within the source that communicates directly with the -remote server (e.g. Orbit's standard `JSONAPISource`). +a [`RecordKeyMap`](./api/records/classes/RecordKeyMap.md), which can be shared +by any sources that need access to the mapping. When communicating with the +server, `remoteId` might be serialized as `id`—such a translation should occur +within the source that communicates directly with the remote server (e.g. +Orbit's standard [`JSONAPISource`](./api/jsonapi/classes/JSONAPISource.md)). ### Attributes @@ -140,26 +141,26 @@ the sources in an application. Schemas are defined with their initial settings as follows: -```javascript -import { RecordSchema } from "@orbit/records"; +```typescript +import { RecordSchema } from '@orbit/records'; const schema = new RecordSchema({ models: { planet: { attributes: { - name: { type: "string" }, - classification: { type: "string" } + name: { type: 'string' }, + classification: { type: 'string' } }, relationships: { - moons: { kind: "hasMany", type: "moon", inverse: "planet" } + moons: { kind: 'hasMany', type: 'moon', inverse: 'planet' } } }, moon: { attributes: { - name: { type: "string" } + name: { type: 'string' } }, relationships: { - planet: { kind: "hasOne", type: "planet", inverse: "moons" } + planet: { kind: 'hasOne', type: 'planet', inverse: 'moons' } } } } @@ -173,7 +174,9 @@ object that contains `attributes`, `relationships`, and/or `keys`. Attributes may be defined by their `type`, which determines what type of data they can contain. An attribute's type may also be used to determine how it -should be serialized and validated. Standard attribute types are: +should be serialized and validated. + +Standard attribute types are: - `array` - `boolean` @@ -185,7 +188,7 @@ should be serialized and validated. Standard attribute types are: ### Model relationships -Two kind of relationships between models are allowed: +Two kinds of relationships between models are allowed: - `hasOne` - for to-one relationships - `hasMany` - for to-many relationships @@ -201,19 +204,19 @@ results in a corresponding change on the inverse model. 
Here's an example of a schema definition that includes relationships with inverses: -```javascript -import { RecordSchema } from "@orbit/records"; +```typescript +import { RecordSchema } from '@orbit/records'; const schema = new RecordSchema({ models: { planet: { relationships: { - moons: { kind: "hasMany", type: ["moon", "satellite"], inverse: "planet" } + moons: { kind: 'hasMany', type: ['moon', 'satellite'], inverse: 'planet' } } }, moon: { relationships: { - planet: { kind: "hasOne", type: "planet", inverse: "moons" } + planet: { kind: 'hasOne', type: 'planet', inverse: 'moons' } } } } @@ -226,10 +229,10 @@ When working with remote servers that do not support client-generated IDs, it's necessary to correlate locally generated IDs with remotely generated IDs, or "keys". Like `id`, keys uniquely identify a record of a particular model type. -Keys currently accept no _standard_ options, so they should be declared with an -empty options hash as follows: +In the simplest case, keys can be declared with an empty options object as +follows: -```javascript +```typescript const schema = new RecordSchema({ models: { moon: { @@ -242,6 +245,9 @@ const schema = new RecordSchema({ }); ``` +Like attributes and relationships, keys can also be declared with options that +are specific to validation or serialization. + :::info Since keys can only be of type `"string"`, it is unnecessary to explicitly diff --git a/website/docs/querying-data.md b/website/docs/querying-data.md index c4d576c1..37e32090 100644 --- a/website/docs/querying-data.md +++ b/website/docs/querying-data.md @@ -122,7 +122,7 @@ to create query expressions. You can use the standard `@orbit/data` query builder as follows: -```javascript +```typescript // Find a single record by identity memory.query((q) => q.findRecord({ type: 'planet', id: 'earth' })); @@ -140,7 +140,7 @@ memory.query((q) => The base `findRecords` query can be enhanced significantly: -```javascript +```typescript // Sort by name memory.query((q) => q.findRecords('planet') .sort('name')); @@ -179,7 +179,7 @@ memory.query((q) => q.findRecords('planet') The same parameters can be applied to `findRelatedRecords`: -```javascript +```typescript // Sort by name memory.query((q) => q.findRelatedRecords({ id: 'solar', type: 'planetarySystem' }, 'planets') .sort('name')); @@ -310,7 +310,9 @@ Filtering on a `hasOne` relationship has different comparison operations availab #### findRelatedRecords vs findRecords.filter({ relation: ..., record: ... }) -If you're using the default settings for JSONAPISource, `findRelatedRecords` and `findRecords.filter(...)` produce very different URLs. +If you're using the default settings for +[JSONAPISource](./api/jsonapi/classes/JSONAPISource.md), `findRelatedRecords` +and `findRecords.filter(...)` produce very different URLs. ```typescript const relatedRecordId = { type: 'planet', id: 'earth' }; @@ -330,12 +332,12 @@ sources and to include metadata about queries. For example, the following query is given a `label` and contains instructions for the source named `remote`: -```javascript +```typescript memory.query((q) => q.findRecords('contact').sort('lastName', 'firstName'), { label: 'Find all contacts', sources: { remote: { - include: ['phone-numbers'] + include: ['phoneNumbers'] } } }); @@ -353,9 +355,9 @@ phone numbers. It is possible to pass different options to each expression in the query. 
-```javascript +```typescript memory.query((q) => [ - q.findRecords('contact').options({ include: ['phone-numbers'] }), + q.findRecords('contact').options({ include: ['phoneNumbers'] }), q.findRecords('meeting').options({ include: ['location'] }) ]); ``` @@ -368,7 +370,7 @@ In fact, if you want to just "peek" into the contents of the memory source, you can issue the same queries synchronously against the memory source's `Cache`. For example: -```javascript +```typescript // Results will be returned synchronously by querying the cache const planets = memory.cache.query((q) => q.findRecords('planet').sort('name')); ``` @@ -388,7 +390,7 @@ instance and then subscribe to changes. By default the `patch` events with a debounce. The subscription callback will be called on every operation which is relevant to the query. -```javascript +```typescript // Create a new LiveQuery instance const planetsLiveQuery = memory.cache.liveQuery((q) => q.findRecords('planet')); // Subscribe to LiveQuery changes diff --git a/website/docs/task-processing.md b/website/docs/task-processing.md index 2c013eb0..0ec3a3a7 100644 --- a/website/docs/task-processing.md +++ b/website/docs/task-processing.md @@ -2,8 +2,9 @@ title: Task processing --- -Tasks and queues are primitives contained in `@orbit/core` that are useful for -processing actions asynchronously and serially. +Tasks and queues are primitives contained in +[`@orbit/core`](./api/core/index.md) that are useful for processing actions +asynchronously and serially. Although you'll typically work with tasks indirectly, understanding these concepts can help you better troubleshoot and harden your Orbit applications. @@ -13,32 +14,41 @@ concepts can help you better troubleshoot and harden your Orbit applications. Every action performed by sources, from updates to queries, is considered a "task" to be performed asynchronously. -The `Task` interface is simply: +The [`Task`](./api/core/interfaces/Task.md) interface is simply: ```typescript -interface Task { - type: string; +interface Task<Type = string, Data = unknown, Options = unknown> { + type: Type; id?: string; - data?: any; + data?: Data; + options?: Options; } ``` A task's `type`, such as `"query"` or `"update"`, signals how that task should be performed. An `id` is assigned to uniquely identify the task. And `data` should contain the type-specific data needed to perform the task, such as an -object that conforms with the `Query` or `Transform` interfaces. +object that conforms with the [`Query`](./api/data/interfaces/Query.md) or +[`Transform`](./api/data/interfaces/Transform.md) interfaces. ## Performers -Tasks are performed asynchronously by a `Performer`: +Tasks are performed asynchronously by a +[`Performer`](./api/core/interfaces/Performer.md): ```typescript -export interface Performer { - perform(task: Task): Promise<any>; +interface Performer< + Type = string, + Data = unknown, + Options = unknown, + Result = unknown +> { + perform(task: Task<Type, Data, Options>): Promise<Result>; } ``` -In `@orbit/data`, every `Source` implements the `Performer` interface. +In [`@orbit/data`](./api/data/index.md), every +[`Source`](./api/data/classes/Source.md) implements the `Performer` interface. ## Task queues @@ -47,9 +57,9 @@ serially and asynchronously. Task queues are associated with a single `performer`, such as a `Source`, that will perform each task.
A `performer` must be assigned when instantiating a -`TaskQueue`: +[`TaskQueue`](./api/core/classes/TaskQueue.md): -```javascript +```typescript const queue = new TaskQueue(source); // `source` implements `Performer` ``` @@ -58,16 +68,16 @@ and will continue until either all tasks have been performed or a problem has been encountered. For finer control over processing, it's possible to instantiate a queue that will only process tasks explicitly: -```javascript +```typescript const queue = new TaskQueue(source, { autoProcess: false }); ``` Tasks are normally added to the end of a queue via the `push` method: -```javascript +```typescript queue.push({ type: 'query', - data: { expression: { op: 'findRecords', type: 'planet' } } + data: { expressions: { op: 'findRecords', type: 'planet' } } }); ``` @@ -111,7 +121,8 @@ prove useful when handling exceptions, debugging, and testing. ### Task queues for sources -Every `Source` in `@orbit/data` maintains two task queues: +Every [`Source`](./api/data/classes/Source.md#requestqueue) in `@orbit/data` +maintains two task queues: - A `requestQueue` for processing requests, such as updates and queries. diff --git a/website/docs/updating-data.md b/website/docs/updating-data.md index 84fbfa13..76edf7af 100644 --- a/website/docs/updating-data.md +++ b/website/docs/updating-data.md @@ -100,7 +100,7 @@ to create one or more operations. For instance, here's how you might update a memory source with a single record: -```javascript +```typescript const earth = { type: 'planet', id: 'earth', @@ -115,7 +115,7 @@ memory.update((t) => t.addRecord(earth)); To perform more than one operation in a single transform, just return an array of operations: -```javascript +```typescript memory.update((t) => [t.addRecord(earth), t.addRecord(jupiter)]); ``` @@ -123,7 +123,7 @@ memory.update((t) => [t.addRecord(earth), t.addRecord(jupiter)]); You can use the standard `@orbit/data` transform builder as follows: -```javascript +```typescript // Adding a new record memory.update((t) => t.addRecord({ @@ -206,7 +206,7 @@ particular sources and to include metadata about transforms. For example, the following transform is given a `label` and contains instructions for the source named `remote`: -```javascript +```typescript memory.update( (t) => t.updateRecord({ @@ -239,7 +239,7 @@ timeout when performing this particular update. It is possible to pass different options to each operation in the transform. -```javascript +```typescript memory.update((t) => [ t .addRecord({ diff --git a/website/docs/whats-new.md b/website/docs/whats-new.md index 1b151732..26ef65d4 100644 --- a/website/docs/whats-new.md +++ b/website/docs/whats-new.md @@ -2,6 +2,516 @@ title: What's new in v0.17 --- -This is a distillation of what's new in Orbit v0.17, intended as a reference for developers who need to upgrade their apps and libraries from v0.16. +This is a distillation of what's new in Orbit v0.17, intended as a reference for +developers who need to upgrade their apps and libraries from v0.16. -[TODO] \ No newline at end of file +If you're brand new to Orbit yourself, you may wish to skip this section in +order to explore Orbit's latest features in a broader context. + +## New Site + API Reference + +v0.17 is Orbit's first release that comes with [API docs](./api/index.md) for +all its packages. These docs are generated by [TypeDoc](https://typedoc.org/) +from Orbit's typings and code annotations. Although a bit sparse for now, this +reference should only improve with time and help from the community. 
+Contributions will be most appreciated! + +## Improved, strict typings throughout + +The TypeScript in all of Orbit's packages has been improved to the extent that +it is now all compiled with the +[strict](https://www.typescriptlang.org/tsconfig#strict) flag. This has allowed +us to refactor more confidently, improve our documentation, and provide a +better developer experience all around. + +## Extraction of `@orbit/records` from `@orbit/data` + +As part of the push to improve typings, it became clear that +[`@orbit/data`](./api/data/index.md) contains a number of interfaces and classes +that could prove useful for _any_ type of data, not just records. Thus, +record-specific types and classes were extracted into a new package: +[`@orbit/records`](./api/records/index.md). + +Please review the [exports](./api/records/modules.md) from `@orbit/records` for +a complete listing of classes, interfaces, and other types that have been moved +to this new package. + +Be aware that several exports have been renamed to be explicit about being +record-specific. For instance, `Schema` is now `RecordSchema`, so you'll want to +make this refactor: + +```diff +- import { Schema } from '@orbit/data'; ++ import { RecordSchema } from '@orbit/records'; +``` + +Apologies for this breaking change and the refactoring it requires. We're trying +to settle the scope of each package prior to v1.0. + +:::caution Breaking change + +Please review all your direct imports from `@orbit/data` and replace them as +needed with imports from `@orbit/records`. + +::: + +## Singular vs. multi-expression queries + +In v0.16, each `Query` could only have a single `expression`: + +```typescript +// v0.16 +export interface Query { + id: string; + expression: QueryExpression; + options?: any; +} +``` + +Now, [`Query`](./api/data/interfaces/Query.md) is typed as follows, with +`expressions` that can be singular or an array of query expressions: + +```typescript +// v0.17 +export interface Query { + id: string; + expressions: QE | QE[]; + options?: RequestOptions; +} +``` + +This allows sources, such as +[`JSONAPISource`](./api/jsonapi/classes/JSONAPISource.md), to optionally perform +these expressions in parallel, which it does now by default. + +Now that queries can contain multiple expressions just like transforms can +contain multiple operations, there needs to be a clear and consistent way to +build them. And likewise, the expectation needs to be clear about the form +in which results should be returned. + +Here's a single expression to a query builder, which can be expected to return +a single result: + +```typescript +const earth = source.query((q) => + q.findRecord({ type: 'planet', id: 'earth' }) +); +``` + +That same expression could be passed in an array, which will cause results to be +returned in an array: + +```typescript +const [earth] = source.query((q) => [ + q.findRecord({ type: 'planet', id: 'earth' }) +]); +``` + +And of course, that array could be expanded to include more than one expression: + +```typescript +const [earth, jupiter, saturn] = source.query((q) => [ + q.findRecord({ type: 'planet', id: 'earth' }), + q.findRecord({ type: 'planet', id: 'jupiter' }), + q.findRecord({ type: 'planet', id: 'saturn' }) +]); +``` + +As mentioned above, this query may be handled with 3 parallel requests, but will +only resolve when all have completed successfully. 
+ +:::caution Breaking change + +Although most developers typically do not interact with queries directly, if +you do it's important to note the change from `expression` -> `expressions`. + +::: + +## Singular vs. multi-operation transforms + +All the patterns mentioned above for queries also apply to transforms. + +A single operation provided to a transform builder will return a single result: + +```typescript +const earth = source.update((t) => + t.addRecord({ type: 'planet', id: 'earth' }) +); +``` + +The same expression passed in an array will cause results to be returned in an +array: + +```typescript +const [earth] = source.update((t) => [ + t.addRecord({ type: 'planet', id: 'earth' }) +]); +``` + +And as before, multi-operation transforms will produce an array of results: + +```typescript +const [earth, jupiter, saturn] = source.update((t) => [ + t.addRecord({ type: 'planet', id: 'earth' }), + t.addRecord({ type: 'planet', id: 'jupiter' }), + t.addRecord({ type: 'planet', id: 'saturn' }) +]); +``` + +The [`Transform`](./api/data/interfaces/Transform.md) interface has changed +subtly such that `operations` can now be singular or an array, to parallel +`Query#expressions`: + +```typescript +// v0.17 +export interface Transform { + id: string; + operations: O | O[]; + options?: RequestOptions; +} +``` + +:::caution Breaking changes + +The change that allows `Transform`'s `operations` to be singular is breaking. +You may wish to use a utility function such as +[`toArray`](./api/utils/modules.md#toarray) to interact with `operations` +uniformly as an array. + +Also note that, in v0.16, calling `update` with a single operation in an array +would return a singular result. It will now return that same result as the +single member of an array. + +::: + +## Full vs. data-only responses + +All requests (queries and updates) can now be made with a `{ fullResponse: true +}` option to receive responses as a +[`FullResponse`](./api/data/interfaces/FullResponse.md). Full responses include +the following members: + +- `data` - the primary data that would be returned without the `fullResponse` + option + +- `details` - response details particular to the source. For a + [`MemorySource`](./api/memory/classes/MemorySource.md), this will include + applied and inverse operations. For a + [`JSONAPISource`](./api/jsonapi/classes/JSONAPISource.md), this will include + [`Response`](https://developer.mozilla.org/en-US/docs/Web/API/Response) + objects and documents. + +- `transforms` - these are the transforms applied as a result of this request. + They are always emitted with a `transform` event, which hooks into Orbit's + sync flow. + +- `sources` - a map of source-specific response details from downstream sources + that were engaged in fulfilling this request. + +It's now up to you just how much of this information you want at the call site. +The following requests will be handled the same internally: + +```typescript +// Just the data +const planets = await source.query((q) => q.findRecords('planet')); + +// All the details +const { data, details, transforms, sources } = await source.query( + (q) => q.findRecords('planet'), + { fullResponse: true } +); +``` + +## Improved response typings + +Speaking of responses, it's now possible to type them using [TypeScript +generics](https://www.typescriptlang.org/docs/handbook/2/generics.html) instead +of relying on type coercion (i.e. `response as Type`). 
Standard data requests can type the response data: + +```typescript +// query(queryOrExpressions, options, id?): Promise<RequestData> +const planets = await source.query((q) => q.findRecords('planet')); +``` + +Full data requests can type the response data, details, and operation: + +```typescript +// query(queryOrExpressions, options, id?): Promise<FullResponse<RequestData, RequestDetails, RequestOperation>>; +const { data, details, transforms, sources } = await source.query< + Planet[], + JSONAPIResponse[], + RecordOperation +>((q) => q.findRecords('planet'), { fullResponse: true }); +``` + +## Deprecation of `Pullable` and `Pushable` interfaces + +Now that responses can include full processing details, everything that was +unique to the `pull` and `push` methods on a source is redundant. The `Pullable` +and `Pushable` interfaces have been deprecated to focus on the more capable +`Queryable` and `Updatable` interfaces for making requests. + +One common use case for `pull` / `push` was restoring from backup: + +```typescript +const transform = await backup.pull((q) => q.findRecords()); +await memory.push(transform); +``` + +This can be achieved as follows with `query` / `sync` (or `update`): + +```typescript +const allRecords = await backup.query((q) => q.findRecords()); +await memory.sync((t) => allRecords.map((r) => t.addRecord(r))); +``` + +And if you do want access to the transforms that result from a request, specify +that you want a full response: + +```typescript +const { transforms } = await source.update((t) => [ + t.addRecord({ type: 'planet', attributes: { name: 'Earth' } }), + t.addRecord({ type: 'planet', attributes: { name: 'Jupiter' } }) + ], + { fullResponse: true } +); +``` + +## Transform buffers for faster cache processing + +Record-cache-based sources that interact with browser storage have had +performance issues when dealing with large datasets, especially when paired with +read/write heavy processors that ensure relationship tracking and correctness. A +new paradigm has been developed, the `RecordTransformBuffer`, that acts as a +memory buffer for these operations. + +For now, using this buffer is opt-in, with the `{ useBuffer: true }` option: + +```typescript +await indexeddbSource.update((t) => [ + t.addRecord({ type: 'planet', attributes: { name: 'Earth' } }), + t.addRecord({ type: 'planet', attributes: { name: 'Jupiter' } }) + ], + { useBuffer: true } +); +``` + +Performance improvements are quite promising, and stability seems solid. + +:::caution + +The only edge cases we've found to be concerned about are related to cascading +deletes, which are triggered when record relationships are defined with +`dependent: delete`. In those cases, the cascade may not be as complete in the +buffer as in the actual cache, so we recommend avoiding transform buffers for +now. + +::: + +## New serializers + +Concepts of serialization have, up until now, been very specific to usage by the +`JSONAPISource`, and particularly the `JSONAPISerializer` class. This class has +been deprecated and replaced with a series of composable serializers all built +upon a simple and flexible +[`Serializer`](./api/serializers/interfaces/Serializer.md) interface. This +interface, as well as some serializers for primitives (booleans, dates, +date-times, etc.) have been published in a new package, +[`@orbit/serializers`](./api/serializers/index.md).
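As a rough sketch of how that interface can be implemented (the `UpperCaseSerializer` below is purely illustrative and assumes the `Serializer<From, To>` generic shape):

```typescript
import { Serializer } from '@orbit/serializers';

// Illustrative serializer that upcases values on the way out and
// downcases them on the way back in
class UpperCaseSerializer implements Serializer<string, string> {
  serialize(arg: string): string {
    return arg.toUpperCase();
  }

  deserialize(arg: string): string {
    return arg.toLowerCase();
  }
}

const serializer = new UpperCaseSerializer();
serializer.serialize('jupiter'); // => 'JUPITER'
```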
+ 
New serializers particular to JSON:API have also been added to
[`@orbit/jsonapi`](./api/jsonapi/index.md), including:

- `JSONAPIDocumentSerializer`
- `JSONAPIResourceSerializer`
- `JSONAPIResourceIdentitySerializer`
- `JSONAPIResourceFieldSerializer`
- `JSONAPIOperationSerializer`
- `JSONAPIOperationsDocumentSerializer`

These new serializers remove some of the default behaviors present in v0.16:
resource fields and types in documents are no longer dasherized and pluralized,
but are left "as is" in camelized form. This lines up with the current
recommendations of the JSON:API spec and requires much less transformation work
by default.

Each of these classes can be overridden to provide custom serialization
behavior. You could then provide those custom classes when creating your source:

```typescript
const source = new JSONAPISource({
  schema,
  serializerClassFor: buildSerializerClassFor({
    [JSONAPISerializers.Resource]: MyCustomResourceSerializer,
    [JSONAPISerializers.ResourceType]: MyCustomResourceTypeSerializer
  })
});
```

Alternatively, you can use the standard serializers but provide custom settings
for those serializers. For example, here are settings that match the previous
default serialization options:

```typescript
const source = new JSONAPISource({
  schema,
  serializerSettingsFor: buildSerializerSettingsFor({
    sharedSettings: {
      // Optional: Custom `pluralize` / `singularize` inflectors that know about
      // your app's unique data.
      inflectors: {
        pluralize: buildInflector(
          { person: 'people' }, // custom mappings
          (input) => `${input}s` // naive pluralizer, specified as a fallback
        ),
        singularize: buildInflector(
          { people: 'person' }, // custom mappings
          (arg) => arg.substring(0, arg.length - 1) // naive singularizer, specified as a fallback
        )
      }
    },
    // Serialization settings according to the type of serializer
    settingsByType: {
      [JSONAPISerializers.ResourceField]: {
        serializationOptions: { inflectors: ['dasherize'] }
      },
      [JSONAPISerializers.ResourceFieldParam]: {
        serializationOptions: { inflectors: ['dasherize'] }
      },
      [JSONAPISerializers.ResourceFieldPath]: {
        serializationOptions: { inflectors: ['dasherize'] }
      },
      [JSONAPISerializers.ResourceType]: {
        serializationOptions: { inflectors: ['pluralize', 'dasherize'] }
      },
      [JSONAPISerializers.ResourceTypePath]: {
        serializationOptions: { inflectors: ['pluralize', 'dasherize'] }
      }
    }
  })
});
```

## New validators

A common source of problems for Orbit developers has been using data that is
malformed or doesn't align with a schema's expectations. This can cause
confusing errors during processing by a cache or downstream source.

To address this problem, we're introducing "validators", which are shipped in a
new package, [`@orbit/validators`](./api/validators/index.md), which includes
some validators for primitive types. Validators that are record-specific have
also been included in [`@orbit/records`](./api/records/index.md).

By default, each source will build its own set of validators and use them
automatically. You can instead share a common set of validators via the
`validatorFor` setting. And you can opt out of using validators entirely by
configuring your sources with `{ autoValidate: false }`.
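As a concrete sketch of that last point, opting a single source out of automatic
validation might look like the following. This assumes a `RecordSchema` from
`@orbit/records` and a `MemorySource` from `@orbit/memory`; the model definition
is purely illustrative:

```typescript
import { RecordSchema } from '@orbit/records';
import { MemorySource } from '@orbit/memory';

// A minimal schema, just for illustration
const schema = new RecordSchema({
  models: {
    planet: {
      attributes: {
        name: { type: 'string' }
      }
    }
  }
});

// Opt this source out of automatic validation entirely
const source = new MemorySource({
  schema,
  autoValidate: false
});
```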
+ 
## Record normalizers

When building queries and transforms, some scenarios have been more tedious than
necessary: identifying records by a key instead of `id`, for instance, or using
a model class from a library like ember-orbit to reference a record instead of
its JSON identity.

A new abstraction has been added to make query and transform builders more
flexible: record normalizers. Record normalizers implement the
[`RecordNormalizer`](./api/records/interfaces/RecordNormalizer.md) interface and
convert record identities and/or data into a normalized form.

The new base normalizer now allows `{ type, key, value }` to be used anywhere
that `{ type, id }` identities can be used, which significantly reduces the
annoyance of working with remote keys.

## Synchronous change tracking in memory forks

Previously, memory source forks behaved precisely like any other memory source:
every trackable update was applied at the source level (and was thus async).
Now, the default (but overridable) behavior is to track changes at the cache
level in forks. Thus, synchronous changes can be made to a forked cache and then
merged back into the base source.

This improves the DX for the most common use case for forks: editing form data
in isolation before merging coalesced changes back to the base. For example:

```typescript
// (sync) fork a base memory source
let fork = source.fork();

// (sync) add jupiter to the forked source's cache
fork.cache.update((t) =>
  t.addRecord({
    type: 'planet',
    id: 'jupiter',
    attributes: { name: 'Jupiter' }
  })
);

// (async) merge changes from the fork back to its base
await source.merge(fork);

// (async) jupiter should now be in the base source
let jupiter = await source.query((q) =>
  q.findRecord({ type: 'planet', id: 'jupiter' })
);
```

If you want to continue to track changes only at the source level and have
`merge` work only with those changes, pass the following configuration setting
when you fork a source:

```typescript
let fork = source.fork({
  cacheSettings: { trackUpdateOperations: false }
});
```

This will prevent update tracking at the cache level and will signal to `merge`
that only transforms applied at the source level should be merged.

## New memory cache capabilities

In addition to the above improvements to memory sources, v0.17 also adds the
following methods to [`MemoryCache`](./api/memory/classes/MemoryCache.md):

* `fork` - creates a new cache based on this one.
* `merge` - merges changes from a forked cache back into this cache.
* `rebase` - resets this cache's state to that of its `base` and then replays
  any update operations.

Memory cache forking / merging / rebasing is a lighter-weight way of
"branching" changes that can ultimately be merged back into a source.
Cache-level forking can be paired with source-level forking for a lot of
flexibility and power.

## Debug mode

A new `debug` setting has been added to the
[`Orbit`](./api/core/interfaces/OrbitGlobal.md) global, which toggles between a
more verbose, developer-friendly "debug" mode of Orbit and a leaner, more
performant production mode.

**Debug mode is enabled by default.** Some standard features of debug mode
include deprecation warnings and the creation of validation processors on all
your caches.
+ 
To disable debug mode:

```typescript
import { Orbit } from '@orbit/core';

// disable debug mode
Orbit.debug = false;
```

## Increased reliance on The Platform™

Orbit's codebase continues to evolve with the web, adopting new ES language and
web platform features as they are released. Custom utilities have been gradually
deprecated and phased out of the codebase (e.g. `isArray` -> `Array.isArray`),
new language features such as nullish coalescing and optional chaining have been
adopted, and platform features such as
[`crypto.randomUUID`](https://developer.mozilla.org/en-US/docs/Web/API/Crypto/randomUUID)
are now used (with a fallback implementation where unavailable).
diff --git a/website/docusaurus.config.js b/website/docusaurus.config.js
index 4b7a41bc..1cc1705e 100644
--- a/website/docusaurus.config.js
+++ b/website/docusaurus.config.js
@@ -419,6 +419,12 @@ module.exports = {
       '@docusaurus/preset-classic',
       {
         docs: {
+          lastVersion: 'current',
+          versions: {
+            current: {
+              label: '0.17'
+            }
+          },
           sidebarPath: require.resolve('./sidebars.js'),
           editUrl: 'https://github.com/orbitjs/orbit/edit/main/website/'
         },