
Commit

Merge branch 'master' into feat/file-handling
angrybayblade committed Jan 2, 2025
2 parents 81c4a34 + 0ca89ac commit 19e16a9
Showing 163 changed files with 9,975 additions and 3,416 deletions.
2 changes: 1 addition & 1 deletion docs/mint.json
@@ -73,7 +73,7 @@
},
{
"name": "Chat with Code",
"url": "https://chatgpt.com/g/g-67697db23c808191a1787ea4b86ac1ce-composio"
"url": "https://entelligence.ai/ComposioHQ&composio"
}
],
"navigation": [
4 changes: 2 additions & 2 deletions docs/mint.json.ejs
@@ -72,8 +72,8 @@
"url": "https://app.composio.dev/apps"
},
{
"name": "Chat with Repo",
"url": "https://dub.composio.dev/composio-chat-with-repo"
"name": "Chat with Code",
"url": "https://entelligence.ai/ComposioHQ&composio"
}
],
"navigation": [
143 changes: 106 additions & 37 deletions docs/patterns/tools/use-tools/processing-actions.mdx
@@ -20,65 +20,45 @@ These can be applied at two levels:
1. **App-level**: Affects all actions within a specific tool.
2. **Action-level**: Tailored processing for individual actions (see the sketch below).
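As a quick illustration of the difference, here is a minimal sketch in Python. It assumes the `processors` argument of `get_tools()` accepts both `App` and `Action` keys, and `my_processor` is a hypothetical stand-in for the processors defined in the steps below:

```python Python
from composio_langchain import ComposioToolSet, Action, App

def my_processor(data: dict) -> dict:
    # Hypothetical processor used only to illustrate registration;
    # a real processor would modify the schema, input, or output.
    return data

composio_toolset = ComposioToolSet()

# Action-level: the processor runs only for LINEAR_CREATE_LINEAR_ISSUE
action_level_tools = composio_toolset.get_tools(
    processors={"pre": {Action.LINEAR_CREATE_LINEAR_ISSUE: my_processor}},
    actions=[Action.LINEAR_CREATE_LINEAR_ISSUE],
)

# App-level: the processor runs for every Linear action
app_level_tools = composio_toolset.get_tools(
    processors={"pre": {App.LINEAR: my_processor}},
    apps=[App.LINEAR],
)
```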


<Tabs>
<Tab title="Python">
<Steps>
<Step title="Import required libraries">
<CodeGroup>
```python Python
from langchain.agents import create_openai_functions_agent, AgentExecutor
from langchain import hub
from langchain_openai import ChatOpenAI
from composio_langchain import ComposioToolSet, Action, App
```
```javascript JavaScript
Coming Soon
```
</CodeGroup>
</Step>
<Step title="Import Prompt template & Initialize ChatOpenAI & composio toolset client">
<CodeGroup>
```python Python
prompt = hub.pull("hwchase17/openai-functions-agent")

llm = ChatOpenAI()
composio_toolset = ComposioToolSet()
```
```javascript JavaScript
Coming Soon
```
</CodeGroup>
</Step>
<Step title="Define a Custom Function to Modify Schema">
This function will be used to modify the schema of the `LINEAR_CREATE_LINEAR_ISSUE` action. We remove the `project_id` and `team_id` parameters; later in the program we will pass these values to the action manually. The technical term for this is **Action-level Schema Processing**.
<CodeGroup>
```python Python
def linear_schema_processor(schema: dict) -> dict:
# This way the agent doesn't expect a project and team ID to run the action
del schema['project_id']
del schema['team_id']
return schema
```
```javascript JavaScript
Coming Soon
```
</CodeGroup>
</Step>
<Step title="Define a Custom Function to Modify Input">
This function will be used to modify the input data for the `LINEAR_CREATE_LINEAR_ISSUE` action. Here we add the values for the `project_id` and `team_id` parameters to the input data. By doing this, we avoid specifying these values in the prompt and ensure the agent uses the correct values. The technical term for this is **Action-level Pre-Processing**.
<CodeGroup>
```python Python
def linear_pre_processor(input_data: dict) -> dict:
input_data['project_id'] = 'e708162b-9b1a-4901-ab93-0f0149f9d805'
input_data['team_id'] = '249ee4cc-7bbb-4ff1-adbe-d3ef2f3df94e'
return input_data
```
```javascript JavaScript
Coming Soon
```
</CodeGroup>
</Step>
<Step title="Define a Custom Function to Modify Output">
This function will be used to modify the output data for the `LINEAR_CREATE_LINEAR_ISSUE` action. Here we modify the output to return just the action execution status `successful` and the `issue_id`; this keeps the LLM context clean. The technical term for this is **Action-level Post-Processing**.
<CodeGroup>
```python Python
def linear_post_processor(output_data: dict) -> dict:
output_data = {
@@ -87,15 +67,9 @@
}
return output_data
```
```javascript JavaScript
Coming Soon
```
</CodeGroup>
</Step>
<Step title="Get Linear Action from Composio">
When getting tools using the `get_tools()` method, we need to pass the `processors` parameter to specify the schema, pre-processing, and post-processing functions. In this example, we set up Action-level processing by mapping the `LINEAR_CREATE_LINEAR_ISSUE` action to the `linear_schema_processor`, `linear_pre_processor`, and `linear_post_processor` functions defined above under the `schema`, `pre`, and `post` keys respectively.

<CodeGroup>
```python Python {2-12}
tools = composio_toolset.get_tools(
processors={
@@ -112,13 +86,8 @@
actions=[Action.LINEAR_CREATE_LINEAR_ISSUE]
)
```
```javascript JavaScript
Coming Soon
```
</CodeGroup>
</Step>
<Step title="Invoke the agent">
<CodeGroup>
```python Python
task = "Create a Linear Issue to update the frontend"

@@ -127,12 +96,112 @@ agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

agent_executor.invoke({"input": task})
```
```javascript JavaScript
Coming Soon
```
</CodeGroup>
</Step>
</Steps>
</Tab>
<Tab title="TypeScript">
<Steps>
<Step title="Import required libraries">
```typescript TypeScript
import { ActionExecutionResDto, LangchainToolSet, RawActionData, TPostProcessor, TPreProcessor, TSchemaProcessor } from "composio-core";
import { ChatOpenAI } from "@langchain/openai";
import { createOpenAIFunctionsAgent, AgentExecutor } from "langchain/agents";
import { pull } from "langchain/hub";
import { ChatPromptTemplate } from "@langchain/core/prompts";
```
</Step>
<Step title="Initialize ChatOpenAI & LangChain toolset client">
```typescript TypeScript
const llm = new ChatOpenAI({ apiKey: "<your-api-key>" });
const toolset = new LangchainToolSet({ apiKey: "<your-api-key>" });
```
</Step>
<Step title="Schema Modifier">
This will be used to modify the schema of the `LINEAR_CREATE_LINEAR_ISSUE` action. We remove `project_id` and `team_id` from the required fields; later in the program we will pass these values to the action manually. The technical term for this is **Schema Processing**.
```typescript TypeScript
const schemaProcessor: TSchemaProcessor = ({
actionName,
toolSchema,
}: {
actionName: string;
toolSchema: RawActionData;
}) => {
const modifiedSchema = { ...toolSchema };
modifiedSchema.parameters = {
...modifiedSchema.parameters,
required: modifiedSchema.parameters?.required?.filter(
field => !['project_id', 'team_id'].includes(field)
) || []
};

return modifiedSchema;
};
```
</Step>
<Step title="Input Modifier">
This will be used to modify the input data for the `LINEAR_CREATE_LINEAR_ISSUE` action. Here we add the values for the `project_id` and `team_id` parameters to the input data. By doing this, we avoid specifying these values in the prompt and ensure the agent uses the correct values. The technical term for this is **Pre-Processing**.
```typescript TypeScript
const preProcessor: TPreProcessor = ({ params, actionName, appName }: {
params: Record<string, unknown>;
actionName: string;
appName: string;
}) => {
const modifiedParams = { ...params };

modifiedParams.project_id = "e708162b-9b1a-4901-ab93-0f0149f9d805";
modifiedParams.team_id = "249ee4cc-7bbb-4ff1-adbe-d3ef2f3df94e";

return modifiedParams;
}
```
</Step>
<Step title="Output Modifier">
This will be used to modify the output data for the `LINEAR_CREATE_LINEAR_ISSUE` action. Here we modify the output to return just the action execution status `successful` and the `issueId`; this keeps the LLM context clean. The technical term for this is **Post-Processing**.
```typescript TypeScript
const postProcessor: TPostProcessor = ({ actionName, appName, toolResponse }: {
actionName: string;
appName: string;
toolResponse: ActionExecutionResDto;
}) => {
const issueId = toolResponse.data.id;
return { data: { id: issueId }, successful: true };
}
```
</Step>
<Step title="Add the Processors to Toolset & Execute the Agent">
After creating the processors, we add them to the toolset using the `addSchemaProcessor`, `addPreProcessor`, and `addPostProcessor` methods and then get the tools. Finally, we create the agent and execute it.
```typescript TypeScript {2-4}
async function main() {
toolset.addSchemaProcessor(schemaProcessor);
toolset.addPreProcessor(preProcessor);
toolset.addPostProcessor(postProcessor);

const tools = await toolset.getTools({
actions: ["LINEAR_CREATE_LINEAR_ISSUE"]
});

const prompt = (await pull(
"hwchase17/openai-functions-agent"
)) as ChatPromptTemplate;

const agent = await createOpenAIFunctionsAgent({
llm,
tools,
prompt,
});

const agentExecutor = new AgentExecutor({ agent, tools, verbose: true });

  const response = await agentExecutor.invoke({ input: "Create an issue on Linear to update the frontend with the new design, description 'to update the frontend with new design', estimate 5 (L) & return the issue id" });
console.log(response);
}

main()
```
</Step>
</Steps>
</Tab>
</Tabs>


### How to use processors at App-level?
Above, we saw how to use processors at the Action-level; below is an example of how to use them at the App-level.
@@ -153,7 +222,7 @@ tools = composio_toolset.get_tools(
apps=[App.<app_name>]
)
```
```javascript JavaScript
```typescript TypeScript
Coming Soon
```
</CodeGroup>
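For instance, here is a minimal sketch of an App-level post-processor in Python. The `App.LINEAR` key and the `trim_linear_output` helper are illustrative assumptions; substitute your own app and logic:

```python Python
from composio_langchain import ComposioToolSet, App

def trim_linear_output(output_data: dict) -> dict:
    # Illustrative post-processor: keep only the execution status so the
    # LLM context stays small for every action in the Linear app.
    return {"successful": output_data.get("successful")}

composio_toolset = ComposioToolSet()

tools = composio_toolset.get_tools(
    processors={
        "post": {App.LINEAR: trim_linear_output},
    },
    apps=[App.LINEAR],
)
```

Because the processor is keyed on the app rather than a single action, it applies to every tool returned for that app.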
36 changes: 36 additions & 0 deletions js/Makefile
@@ -0,0 +1,36 @@

.PHONY: install
install:
pnpm install

.PHONY: build
build:
pnpm run build

.PHONY: lint
lint:
pnpm run lint

.PHONY: setup_cli
setup_cli:
./setup_cli.sh

.PHONY: bump_version
bump_version:
./bump_version.sh

.PHONY: test
test:
pnpm run test

.PHONY: switch_package
switch_package:
./switch_package.sh

## add support for publishing to npm
.PHONY: release
release:
./switch_package.sh
./bump_version.sh
pnpm run build
cd dist && pnpm publish
8 changes: 5 additions & 3 deletions js/TO_DOs.md
@@ -1,9 +1,11 @@
List of things to do for Composio JS SDK
- [ ] Decrease bundle size more to less than 500 kb.
- [x] Decrease bundle size to less than 500 kb. [DONE]
- [x] Add support via browserify [DONE]
- [ ] Add test for connected account execution
- [ ] Add example test for cloudflare, vercel, server components. Check release with different run time
- [ ] Move away from axios
- [ ] Optimise code from openapi spec
- [ ] Move away from class based to functional based code
- [ ] Add react code library
- [ ] Add support via browserify
- [ ] Add more edge cases
- [ ] Sign JS SDK CLIs
- [ ] Sign JS SDK CLIs
2 changes: 2 additions & 0 deletions js/bump_version.sh
@@ -23,5 +23,7 @@ sed -i.bak "s/\"version\": \"$current_version_pkg\"/\"version\": \"$new_version\"
# Update version in src/constants.js
sed -i.bak "s/COMPOSIO_VERSION = \`$current_version\`/COMPOSIO_VERSION = \`$new_version\`/" src/constants.js && rm src/constants.js.bak

sed -i.bak "s/COMPOSIO_VERSION = \`$current_version\`/COMPOSIO_VERSION = \`$new_version\`/" src/constants.ts && rm src/constants.ts.bak

echo "Version updated from $current_version to $new_version in package.dist.json"
echo "Version updated from $current_version_pkg to $new_version in package.json"
41 changes: 41 additions & 0 deletions js/examples/chat-with-sheets/.gitignore
@@ -0,0 +1,41 @@
# See https://help.github.com/articles/ignoring-files/ for more about ignoring files.

# dependencies
/node_modules
/.pnp
.pnp.*
.yarn/*
!.yarn/patches
!.yarn/plugins
!.yarn/releases
!.yarn/versions

# testing
/coverage

# next.js
/.next/
/out/

# production
/build

# misc
.DS_Store
*.pem

# debug
npm-debug.log*
yarn-debug.log*
yarn-error.log*
.pnpm-debug.log*

# env files (can opt-in for committing if needed)
.env*

# vercel
.vercel

# typescript
*.tsbuildinfo
next-env.d.ts
40 changes: 40 additions & 0 deletions js/examples/chat-with-sheets/README.md
@@ -0,0 +1,40 @@
This is a [Next.js](https://nextjs.org) project bootstrapped with [`create-next-app`](https://nextjs.org/docs/pages/api-reference/create-next-app).

## Getting Started

First, run the development server:

```bash
npm run dev
# or
yarn dev
# or
pnpm dev
# or
bun dev
```

Open [http://localhost:3000](http://localhost:3000) with your browser to see the result.

You can start editing the page by modifying `pages/index.tsx`. The page auto-updates as you edit the file.

[API routes](https://nextjs.org/docs/pages/building-your-application/routing/api-routes) can be accessed on [http://localhost:3000/api/hello](http://localhost:3000/api/hello). This endpoint can be edited in `pages/api/hello.ts`.

The `pages/api` directory is mapped to `/api/*`. Files in this directory are treated as [API routes](https://nextjs.org/docs/pages/building-your-application/routing/api-routes) instead of React pages.

This project uses [`next/font`](https://nextjs.org/docs/pages/building-your-application/optimizing/fonts) to automatically optimize and load [Geist](https://vercel.com/font), a new font family for Vercel.

## Learn More

To learn more about Next.js, take a look at the following resources:

- [Next.js Documentation](https://nextjs.org/docs) - learn about Next.js features and API.
- [Learn Next.js](https://nextjs.org/learn-pages-router) - an interactive Next.js tutorial.

You can check out [the Next.js GitHub repository](https://github.com/vercel/next.js) - your feedback and contributions are welcome!

## Deploy on Vercel

The easiest way to deploy your Next.js app is to use the [Vercel Platform](https://vercel.com/new?utm_medium=default-template&filter=next.js&utm_source=create-next-app&utm_campaign=create-next-app-readme) from the creators of Next.js.

Check out our [Next.js deployment documentation](https://nextjs.org/docs/pages/building-your-application/deploying) for more details.