15 changes: 11 additions & 4 deletions README.md
@@ -92,10 +92,17 @@ A GitHub Action to interact with OpenAI-compatible LLM services, supporting cust

## Outputs

-| Output     | Description                                                                                    |
-| ---------- | ---------------------------------------------------------------------------------------------- |
-| `response` | The raw response from the LLM (always available)                                                |
-| `<field>`  | When using tool_schema, each field from the function arguments JSON becomes a separate output   |
+| Output                                  | Description                                                                                    |
+| --------------------------------------- | ---------------------------------------------------------------------------------------------- |
+| `response`                              | The raw response from the LLM (always available)                                                |
+| `prompt_tokens`                         | Number of tokens in the prompt                                                                  |
+| `completion_tokens`                     | Number of tokens in the completion                                                              |
+| `total_tokens`                          | Total number of tokens used                                                                     |
+| `prompt_cached_tokens`                  | Number of cached tokens in the prompt (cost savings, if available)                              |
+| `completion_reasoning_tokens`           | Number of reasoning tokens for o1/o3 models (if available)                                      |
+| `completion_accepted_prediction_tokens` | Number of accepted prediction tokens (if available)                                             |
+| `completion_rejected_prediction_tokens` | Number of rejected prediction tokens (if available)                                             |
+| `<field>`                               | When using tool_schema, each field from the function arguments JSON becomes a separate output   |

**Output Behavior:**

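For example, a follow-up workflow step can read the new outputs directly. A minimal sketch (the step `id`, action reference, and inputs are placeholders for illustration, not part of this PR):

```yaml
- name: Query the LLM
  id: llm
  uses: your-org/llm-action@v1 # placeholder reference
  with:
    prompt: 'Summarize the latest release notes'

- name: Report token usage
  run: |
    echo "Prompt tokens:     ${{ steps.llm.outputs.prompt_tokens }}"
    echo "Completion tokens: ${{ steps.llm.outputs.completion_tokens }}"
    echo "Total tokens:      ${{ steps.llm.outputs.total_tokens }}"
```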
15 changes: 11 additions & 4 deletions README.zh-CN.md
@@ -92,10 +92,17 @@

## Outputs

-| Output     | Description                                                                                    |
-| ---------- | ---------------------------------------------------------------------------------------------- |
-| `response` | The raw response from the LLM (always available)                                                |
-| `<field>`  | When using tool_schema, each field from the function arguments JSON becomes a separate output   |
+| Output                                  | Description                                                                                    |
+| --------------------------------------- | ---------------------------------------------------------------------------------------------- |
+| `response`                              | The raw response from the LLM (always available)                                                |
+| `prompt_tokens`                         | Number of tokens in the prompt                                                                  |
+| `completion_tokens`                     | Number of tokens in the completion                                                              |
+| `total_tokens`                          | Total number of tokens used                                                                     |
+| `prompt_cached_tokens`                  | Number of cached tokens in the prompt (cost savings, if available)                              |
+| `completion_reasoning_tokens`           | Number of reasoning tokens for o1/o3 models (if available)                                      |
+| `completion_accepted_prediction_tokens` | Number of accepted prediction tokens (if available)                                             |
+| `completion_rejected_prediction_tokens` | Number of rejected prediction tokens (if available)                                             |
+| `<field>`                               | When using tool_schema, each field from the function arguments JSON becomes a separate output   |

**Output Behavior:**

15 changes: 11 additions & 4 deletions README.zh-TW.md
@@ -92,10 +92,17 @@

## Outputs

-| Output     | Description                                                                                    |
-| ---------- | ---------------------------------------------------------------------------------------------- |
-| `response` | The raw response from the LLM (always available)                                                |
-| `<field>`  | When using tool_schema, each field from the function arguments JSON becomes a separate output   |
+| Output                                  | Description                                                                                    |
+| --------------------------------------- | ---------------------------------------------------------------------------------------------- |
+| `response`                              | The raw response from the LLM (always available)                                                |
+| `prompt_tokens`                         | Number of tokens in the prompt                                                                  |
+| `completion_tokens`                     | Number of tokens in the completion                                                              |
+| `total_tokens`                          | Total number of tokens used                                                                     |
+| `prompt_cached_tokens`                  | Number of cached tokens in the prompt (cost savings, if available)                              |
+| `completion_reasoning_tokens`           | Number of reasoning tokens for o1/o3 models (if available)                                      |
+| `completion_accepted_prediction_tokens` | Number of accepted prediction tokens (if available)                                             |
+| `completion_rejected_prediction_tokens` | Number of rejected prediction tokens (if available)                                             |
+| `<field>`                               | When using tool_schema, each field from the function arguments JSON becomes a separate output   |

**Output Behavior:**

14 changes: 14 additions & 0 deletions action.yml
@@ -53,6 +53,20 @@ inputs:
outputs:
  response:
    description: 'The response from the LLM'
+  prompt_tokens:
+    description: 'Number of tokens in the prompt'
+  completion_tokens:
+    description: 'Number of tokens in the completion'
+  total_tokens:
+    description: 'Total number of tokens used'
+  prompt_cached_tokens:
+    description: 'Number of cached tokens in the prompt (cost saving)'
+  completion_reasoning_tokens:
+    description: 'Number of reasoning tokens (for o1/o3 models)'
+  completion_accepted_prediction_tokens:
+    description: 'Number of accepted prediction tokens'
+  completion_rejected_prediction_tokens:
+    description: 'Number of rejected prediction tokens'

runs:
  using: 'docker'
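Note that the four detail outputs above are only set at runtime when the upstream API reports them (see the nil checks in main.go below). In GitHub Actions expressions, an unset output evaluates to an empty string, so a guard like this works (step id `llm` is a placeholder):

```yaml
- name: Report cache savings
  if: steps.llm.outputs.prompt_cached_tokens != ''
  run: echo "Cached prompt tokens: ${{ steps.llm.outputs.prompt_cached_tokens }}"
```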
33 changes: 33 additions & 0 deletions main.go
@@ -4,6 +4,7 @@ import (
"context"
"fmt"
"os"
"strconv"

"github.com/appleboy/com/gh"
openai "github.com/sashabaranov/go-openai"
@@ -140,6 +141,21 @@ func run() error {
	fmt.Println(response)
	fmt.Println("--- End Response ---")

+	// Print token usage
+	fmt.Println("--- Token Usage ---")
+	fmt.Printf("Prompt Tokens: %d\n", resp.Usage.PromptTokens)
+	fmt.Printf("Completion Tokens: %d\n", resp.Usage.CompletionTokens)
+	fmt.Printf("Total Tokens: %d\n", resp.Usage.TotalTokens)
+	if resp.Usage.PromptTokensDetails != nil {
+		fmt.Printf("Cached Tokens: %d\n", resp.Usage.PromptTokensDetails.CachedTokens)
+	}
+	if d := resp.Usage.CompletionTokensDetails; d != nil {
+		fmt.Printf("Reasoning Tokens: %d\n", d.ReasoningTokens)
+		fmt.Printf("Accepted Prediction Tokens: %d\n", d.AcceptedPredictionTokens)
+		fmt.Printf("Rejected Prediction Tokens: %d\n", d.RejectedPredictionTokens)
+	}
+	fmt.Println("--- End Token Usage ---")
+
	// Set GitHub Actions output
	var toolArgs map[string]string
	if toolMeta != nil {
@@ -161,6 +177,23 @@
		)
	}

+	// Add token usage to output
+	output["prompt_tokens"] = strconv.Itoa(resp.Usage.PromptTokens)
+	output["completion_tokens"] = strconv.Itoa(resp.Usage.CompletionTokens)
+	output["total_tokens"] = strconv.Itoa(resp.Usage.TotalTokens)
+
+	// Add prompt token details if available
+	if resp.Usage.PromptTokensDetails != nil {
+		output["prompt_cached_tokens"] = strconv.Itoa(resp.Usage.PromptTokensDetails.CachedTokens)
+	}
+
+	// Add completion token details if available
+	if d := resp.Usage.CompletionTokensDetails; d != nil {
+		output["completion_reasoning_tokens"] = strconv.Itoa(d.ReasoningTokens)
+		output["completion_accepted_prediction_tokens"] = strconv.Itoa(d.AcceptedPredictionTokens)
+		output["completion_rejected_prediction_tokens"] = strconv.Itoa(d.RejectedPredictionTokens)
+	}
+
	if err := gh.SetOutput(output); err != nil {
		return fmt.Errorf("failed to set output: %v", err)
	}
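Since the usage values are written with strconv.Itoa, every token output is a plain decimal string, so downstream steps can do arithmetic on them. A rough cost-estimate sketch (the per-million-token prices are invented placeholders, and the step id `llm` is assumed from the examples above):

```yaml
- name: Estimate request cost
  run: |
    prompt=${{ steps.llm.outputs.prompt_tokens }}
    completion=${{ steps.llm.outputs.completion_tokens }}
    # Placeholder prices in USD per million tokens; substitute your model's real rates.
    echo "Approximate cost: $(echo "scale=6; ($prompt * 0.15 + $completion * 0.60) / 1000000" | bc) USD"
```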