Commit 62dabc0: docs (#14)

1 parent 5eed70b, commit 62dabc0

39 files changed: +1118 -205 lines

docs/package.json (3 additions, 0 deletions)

@@ -13,5 +13,8 @@
     "rspress": "^1.35.1",
     "ts-node": "^10.9.2",
     "typescript": "^5.6.3"
+  },
+  "dependencies": {
+    "rspress-plugin-font-open-sans": "^1.0.0"
   }
 }

docs/rspress.config.ts (25 additions, 2 deletions)

@@ -1,9 +1,32 @@
 import { defineConfig } from 'rspress/config';
+import { pluginFontOpenSans } from 'rspress-plugin-font-open-sans';
+import * as path from 'node:path';

 export default defineConfig({
   root: 'src',
   base: '/byorg-ai/',
-  title: 'byorg-ai',
+  title: 'byorg.ai',
+  icon: '/img/favicon.ico',
   description: 'TypeScript framework for writing chatbot applications.',
-  plugins: [],
+  logo: {
+    light: '/img/logo_mono_light.svg',
+    dark: '/img/logo_mono_dark.svg',
+  },
+  globalStyles: path.join(__dirname, 'src/styles/index.css'),
+  themeConfig: {
+    enableContentAnimation: true,
+    enableScrollToTop: true,
+    outlineTitle: 'Contents',
+    footer: {
+      message: `Copyright © ${new Date().getFullYear()} Callstack Open Source`,
+    },
+    socialLinks: [
+      {
+        icon: 'github',
+        mode: 'link',
+        content: 'https://github.com/callstack/byorg-ai',
+      },
+    ],
+  },
+  plugins: [pluginFontOpenSans()],
 });

docs/src/_meta.json (0 additions, 5 deletions)

@@ -3,10 +3,5 @@
     "text": "Docs",
     "link": "/docs/about",
     "activeMatch": "^/docs/"
-  },
-  {
-    "text": "API",
-    "link": "/api/about",
-    "activeMatch": "^/api/"
   }
 ]

Deleted files:

- docs/src/api/_meta.json (23 lines)
- docs/src/api/about.md (3 lines)
- docs/src/api/core/_meta.json (7 lines)
- docs/src/api/core/index.md (3 lines)
- docs/src/api/slack/_meta.json (7 lines)
- docs/src/api/slack/index.md (3 lines)

docs/src/docs/_meta.json (2 additions, 8 deletions)

@@ -17,14 +17,8 @@
   },
   {
     "type": "dir",
-    "name": "slack",
-    "label": "Slack",
-    "collapsed": true
-  },
-  {
-    "type": "dir",
-    "name": "discord",
-    "label": "Discord",
+    "name": "integrations",
+    "label": "Integrations",
     "collapsed": true
   }
 ]

docs/src/docs/about.md (11 additions, 2 deletions)

@@ -1,3 +1,12 @@
-# About byorg-ai
+# About byorg.ai

-This is main section about byorg-ai Framework
+## Introduction
+
+byorg.ai is a framework designed for rapid development and deployment of AI assistants within companies and organizations.
+
+## Supported Integrations
+
+- Slack
+- Discord
+
+byorg.ai supports a wide range of large language models (LLMs) via the Vercel [AI SDK](https://sdk.vercel.ai/docs/introduction). You can host byorg.ai applications on various cloud platforms or local environments. We provide examples for some popular hosting options.

docs/src/docs/core/_meta.json (40 additions, 0 deletions)

@@ -3,5 +3,45 @@
     "type": "file",
     "name": "usage",
     "label": "Usage"
+  },
+  {
+    "type": "file",
+    "name": "chat-model",
+    "label": "Chat Model"
+  },
+  {
+    "type": "file",
+    "name": "system-prompt",
+    "label": "System Prompt"
+  },
+  {
+    "type": "file",
+    "name": "context",
+    "label": "Context"
+  },
+  {
+    "type": "file",
+    "name": "plugins",
+    "label": "Plugins"
+  },
+  {
+    "type": "file",
+    "name": "tools",
+    "label": "Tools"
+  },
+  {
+    "type": "file",
+    "name": "references",
+    "label": "References"
+  },
+  {
+    "type": "file",
+    "name": "performance",
+    "label": "Performance"
+  },
+  {
+    "type": "file",
+    "name": "error-handling",
+    "label": "Error Handling"
+  }
   }
 ]

docs/src/docs/core/chat-model.md (new file, 28 lines)

# Chat Model

## Providers and Adapter

You can use any AI provider supported by Vercel's [AI SDK](https://sdk.vercel.ai/providers/ai-sdk-providers). This includes both LLM-as-a-service providers such as OpenAI and Anthropic, as well as locally hosted LLMs. We are also open to extending support to other types of chat models, such as LangChain's [runnables](https://js.langchain.com/docs/how_to/streaming).

### Provider Example

```js
import { createOpenAI } from '@ai-sdk/openai';

const openAiProvider = createOpenAI({
  apiKey: 'your-api-key',
  compatibility: 'strict',
});
```

After instantiating the provider client, create a language model and wrap it with our `VercelChatModelAdapter` class:

```js
import { VercelChatModelAdapter } from '@callstack/byorg-core';

// Model id is an example; use any model your provider supports.
const openAiModel = openAiProvider('gpt-4o');

const openAiChatModel = new VercelChatModelAdapter({
  languageModel: openAiModel,
});
```

Now that the `chatModel` is ready, let's discuss the `systemPrompt` function.
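The adapter step above can be sketched in a self-contained way. Everything below is illustrative: `ChatModel` and `VercelLikeModel` are simplified stand-ins for the real byorg-core and AI SDK types, not their actual interfaces.

```typescript
// Simplified stand-in for the interface byorg-core expects from a chat model.
type ChatModel = {
  generate: (prompt: string) => Promise<string>;
};

// Simplified stand-in for a provider-specific language model client.
type VercelLikeModel = {
  doGenerate: (options: { prompt: string }) => Promise<{ text: string }>;
};

// The adapter translates the provider-specific API into the common interface.
class SketchChatModelAdapter implements ChatModel {
  constructor(private languageModel: VercelLikeModel) {}

  async generate(prompt: string): Promise<string> {
    const result = await this.languageModel.doGenerate({ prompt });
    return result.text;
  }
}

// Usage with a fake model that just echoes the prompt:
const fakeModel: VercelLikeModel = {
  doGenerate: async ({ prompt }) => ({ text: `echo: ${prompt}` }),
};
const chatModel: ChatModel = new SketchChatModelAdapter(fakeModel);
```

The real `VercelChatModelAdapter` does considerably more (streaming, tool calls), but the translation it performs follows this shape.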

docs/src/docs/core/context.md (new file, 74 lines)

# Context

The `context` object holds information about the currently processed message. It allows you to modify the behavior of your assistant at runtime or alter the message processing flow.

`Context` can be modified by [middlewares](./plugins.md) during the message processing flow to implement highly flexible logic or rules (e.g., authentication, RAG, etc.).

### Properties in Context

```js
export type RequestContext = {
  /** All messages from the given conversation */
  messages: Message[];

  /** Convenience reference to the last `messages` item, which is the latest `UserMessage`. */
  lastMessage: UserMessage;

  /** Declarations of tools for the AI assistant */
  tools: ApplicationTool[];

  /** Storage with references to documents mentioned in the conversation */
  references: ReferenceStorage;

  /** IDs of users who are part of the conversation */
  resolvedEntities: EntityInfo;

  /** Function for generating a system prompt */
  systemPrompt: () => Promise<string> | string;

  /**
   * Receives partial response updates when response streaming is used.
   * Note: setting this option will switch underlying assistant calls to the streaming format.
   */
  onPartialResponse?: (text: string) => void;

  /** Measures and marks for performance tracking */
  performance: PerformanceTimeline;

  /** Container for additional custom properties */
  extras: MessageRequestExtras;
};
```

To add typings for your custom properties on the context, create a file with a type declaration that augments the `MessageRequestExtras` interface:

```js
declare module '@callstack/byorg-core' {
  interface MessageRequestExtras {
    // Here you can add your own properties
    example?: string;
    messagesCount?: number;
    isAdmin?: boolean;
  }
}

export {};
```

:::warning
All custom properties must be optional, as the current context creation does not support default values for custom objects.
:::

After setting extras, you can access them from the context object:

```js
export const systemPrompt = (context: RequestContext): Promise<string> | string => {
  if (context.extras.isAdmin) {
    return `You are currently talking to an admin.`;
  }

  return `You are talking to a user with regular permissions.`;
};
```

Next, we'll explore the concept of `plugins` to understand how to modify the `context`.
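The extras flow can be illustrated end to end with a self-contained sketch. `Ctx` below is a simplified stand-in for the real `RequestContext`, reduced to the single field used here.

```typescript
// Simplified stand-in for RequestContext, with only the extras container.
type Extras = { isAdmin?: boolean };
type Ctx = { extras: Extras };

// Same branching as the systemPrompt example above.
const systemPrompt = (context: Ctx): string =>
  context.extras.isAdmin
    ? 'You are currently talking to an admin.'
    : 'You are talking to a user with regular permissions.';

// A middleware would normally set extras at runtime; here we set them directly.
const adminContext: Ctx = { extras: { isAdmin: true } };
const regularContext: Ctx = { extras: {} }; // optional props default to undefined
```

Because every extra is optional, reading an unset property yields `undefined`, which is why the warning above requires optional types.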

docs/src/docs/core/error-handling.md (new file, 23 lines)

# Error handling

The error handler in byorg.ai is responsible for processing error objects and returning messages that are sent back to the user. You can customize error handling by providing your own error handler function. This allows you to define specific reactions to errors and deliver appropriate feedback to users.

```js
function handleError(error: unknown): SystemResponse {
  logger.error('Unhandled error:', error);

  return {
    role: 'system',
    content: 'There was a problem with the Assistant. Please try again later or contact the administrator.',
    error,
  };
}

const app = createApp({
  chatModel,
  systemPrompt,
  errorHandler: handleError,
});
```

By implementing a custom error handler, you can tailor the user experience by providing meaningful responses to errors encountered within the byorg framework.
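Conceptually, the error handler is a fallback branch around message processing. The sketch below is a simplified stand-in for byorg's internal pipeline (`processMessage` here is hypothetical, not the library's API):

```typescript
// Shape of the response produced by an error handler (mirrors the docs above).
type SystemResponse = { role: 'system'; content: string; error?: unknown };

function handleError(error: unknown): SystemResponse {
  return {
    role: 'system',
    content: 'There was a problem with the Assistant. Please try again later.',
    error,
  };
}

// Hypothetical pipeline: run the handler, fall back to the error handler.
async function processMessage(
  handler: () => Promise<SystemResponse>,
  errorHandler: (error: unknown) => SystemResponse,
): Promise<SystemResponse> {
  try {
    return await handler();
  } catch (error) {
    return errorHandler(error);
  }
}
```

Whatever the handler throws ends up as the `error` property of the system response, so callers always receive a well-formed message.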

docs/src/docs/core/performance.md (new file, 61 lines)

# Performance

To measure your application's performance, use the `performance` object available in the context.

```js
const slowPlugin: ApplicationPlugin = {
  name: 'slow-plugin',
  middleware: async (context, next): Promise<MessageResponse> => {
    context.performance.markStart('SlowPluginPerformance');
    await slowFunction();
    context.performance.markEnd('SlowPluginPerformance');

    // Continue middleware chain
    return next();
  },
};
```

After collecting your performance data, you can access it through the same `performance` object. Performance tracking requires all processing to complete, so it uses an effect rather than a middleware, as effects run after the response is finalized.

```js
const analyticsPlugin: ApplicationPlugin = {
  name: 'analytics',
  effects: [analyticsEffect],
};

async function analyticsEffect(context: RequestContext, response: MessageResponse): Promise<void> {
  console.log(context.performance.getMeasureTotal('SlowPluginPerformance'));
}
```

## Measures vs Marks

This concept is inspired by the [Web Performance API](https://developer.mozilla.org/en-US/docs/Web/API/Performance). Marks are essentially named points that the performance tool uses to measure execution time. For instance, if you have a tool for your AI and want to evaluate its performance, you might find it triggered multiple times by the AI. For that reason, a single mark can be part of multiple measures. A measure is constructed from two marks: `start` and `end`.

:::info
You can also access all marks and measures using `getMarks` and `getMeasures`.
:::

## Default measures

Byorg automatically gathers performance data. Middleware measures are collected in two separate phases: before handling the response and after it.

```js
export const PerformanceMarks = {
  processMessages: 'processMessages',
  middlewareBeforeHandler: 'middleware:beforeHandler',
  middlewareAfterHandler: 'middleware:afterHandler',
  chatModel: 'chatModel',
  toolExecution: 'toolExecution',
  errorHandler: 'errorHandler',
} as const;
```
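The mark/measure mechanics above can be sketched in a few lines. `SketchTimeline` below is a minimal illustration, not the actual `PerformanceTimeline` from byorg-core.

```typescript
// Minimal mark/measure timeline: markStart/markEnd pairs accumulate totals,
// so one mark name can contribute to multiple measures over a request.
class SketchTimeline {
  private starts = new Map<string, number>();
  private totals = new Map<string, number>();

  markStart(name: string): void {
    this.starts.set(name, Date.now());
  }

  markEnd(name: string): void {
    const start = this.starts.get(name);
    if (start === undefined) return; // no matching start mark
    this.totals.set(name, (this.totals.get(name) ?? 0) + (Date.now() - start));
    this.starts.delete(name);
  }

  getMeasureTotal(name: string): number {
    return this.totals.get(name) ?? 0;
  }
}

// Usage mirroring the slow-plugin example above:
const timeline = new SketchTimeline();
timeline.markStart('SlowPluginPerformance');
timeline.markEnd('SlowPluginPerformance');
```

Accumulating totals per mark name is what makes `getMeasureTotal` useful for tools that the AI triggers repeatedly within one request.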
