
datadog setup #1

Merged
0Shark merged 5 commits into main from feature/datadog-setup on Nov 29, 2025

Conversation


0Shark (Owner) commented Nov 28, 2025

This pull request introduces Datadog LLM Observability and tracing support throughout the codebase, improves environment variable handling, and enhances error monitoring for LLM interactions. Key changes include adding Datadog initialization utilities, instrumenting both client and server logic for observability, updating dependencies and environment configuration, and suppressing unnecessary warnings for better compatibility.

Datadog LLM Observability and Tracing Integration:

  • Added initialize_datadog utility to client/joinly_client/datadog.py for configuring Datadog LLM Observability and auto-instrumentation of libraries. This enables agentless monitoring and sets up relevant environment variables.
  • Updated client/joinly_client/main.py and joinly/main.py to initialize Datadog early, ensuring environment variables are loaded before tracing starts, and to suppress pydantic-ai warnings for Gemini compatibility.
  • Enhanced LLM request handling in client/joinly_client/agent.py to record LLM input/output, errors, and usage metrics using Datadog LLMObs spans.

Server and Session Observability:

  • Instrumented meeting and speech actions in joinly/server.py and joinly/session.py with Datadog tracing spans, including contextual tags for better monitoring and error reporting.
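The span-plus-contextual-tags pattern might look roughly like this; the span and tag names are illustrative, not taken from the PR, and only `tracer.trace()`, `span.set_tag()`, and `span.set_traceback()` are ddtrace APIs.

```python
from collections.abc import Iterator
from contextlib import contextmanager


@contextmanager
def traced_action(name: str, **tags: str) -> Iterator[None]:
    """Wrap a meeting/speech action in a ddtrace span with contextual tags."""
    from ddtrace import tracer  # lazy import: tracing stays optional

    with tracer.trace(name) as span:
        for key, value in tags.items():
            span.set_tag(key, value)
        try:
            yield
        except Exception:
            span.set_traceback()  # record the failure on the span
            raise
```

A caller would then wrap an action like `with traced_action("joinly.join_meeting", meeting_url=url): ...` (the span name here is a made-up example).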

Configuration and Dependency Updates:

  • Added Datadog-related environment variables to .env.example for easier setup and configuration.
  • Updated client/pyproject.toml to require ddtrace>=2.0.0 for observability support.
  • Modified .devcontainer/Dockerfile to add the Python virtual environment to the PATH for smoother development experience.
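A hypothetical .env fragment for a setup like this; the DD_* variable names are standard ddtrace settings, but the values shown are placeholders and the exact set added by this PR is not reproduced here.

```shell
# Placeholder values; replace with your own Datadog credentials.
DD_API_KEY=your-datadog-api-key
DD_SITE=datadoghq.com
DD_LLMOBS_ENABLED=1
DD_LLMOBS_ML_APP=joinly
DD_LLMOBS_AGENTLESS_ENABLED=1
```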

General Improvements:

  • Suppressed strict warnings from ruff in .pre-commit-config.yaml to improve developer workflow with optional libraries.
  • Updated documentation to reference the new Google LLM setup guide.

Copilot AI left a comment

Pull request overview

This pull request introduces comprehensive Datadog LLM Observability and APM tracing support to monitor LLM interactions and application performance. The implementation uses Datadog's agentless mode for LLM Observability while maintaining optional support for full APM tracing with a local agent.

Key changes include:

  • Added Datadog initialization utilities with automatic library patching for OpenAI, Anthropic, httpx, and aiohttp
  • Instrumented LLM calls in the client agent to capture input/output, errors, and token usage metrics
  • Added tracing spans for meeting operations (join, speak) in both server and session layers with contextual tags
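A minimal sketch of what such an initialization utility could look like, assuming ddtrace's public `patch()` and `LLMObs.enable()` entry points. Only the function name `initialize_datadog` comes from the PR; the body, the skip-when-unconfigured check, and the default ml_app value are guesses.

```python
import os


def initialize_datadog() -> None:
    """Enable agentless LLM Observability and auto-patch supported libraries."""
    if not os.environ.get("DD_API_KEY"):
        return  # observability is optional; do nothing when unconfigured

    from ddtrace import patch
    from ddtrace.llmobs import LLMObs

    # Auto-instrument the libraries named in the review summary.
    patch(openai=True, anthropic=True, httpx=True, aiohttp=True)
    LLMObs.enable(
        ml_app=os.environ.get("DD_LLMOBS_ML_APP", "joinly"),
        agentless_enabled=True,
    )
```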

Reviewed changes

Copilot reviewed 14 out of 15 changed files in this pull request and generated 10 comments.

Summary per file:

  • uv.lock: Added ddtrace 4.0.0 and its dependencies (bytecode, envier, opentelemetry-api, wrapt)
  • pyproject.toml: Added ddtrace>=2.0.0 dependency to the main package
  • client/pyproject.toml: Added ddtrace>=2.0.0 dependency to the client package
  • joinly/utils/datadog.py: New file implementing Datadog initialization with LLM Observability and APM tracing support, including helper functions for span management
  • client/joinly_client/datadog.py: New file with client-specific Datadog initialization logic
  • joinly/main.py: Added early Datadog initialization, .env auto-loading, and pydantic-ai warning suppression
  • client/joinly_client/main.py: Added early Datadog initialization, .env auto-loading, and pydantic-ai warning suppression
  • joinly/utils/logging.py: Integrated a Datadog log handler when DD_LOGS_ENABLED is set
  • joinly/session.py: Added tracing spans to join_meeting and speak_text operations with contextual tags
  • joinly/server.py: Added tracing spans to the MCP tool handlers (join_meeting, speak_text) with error tracking
  • client/joinly_client/agent.py: Implemented LLMObs span tracking for LLM calls with input/output data, usage metrics, and error handling
  • .env.example: Added Datadog configuration variables and a PYTHONWARNINGS setting
  • .pre-commit-config.yaml: Disabled the ruff linting hook to accommodate optional library handling
  • .devcontainer/Dockerfile: Added the virtual environment to PATH for automatic activation
  • README.md: Added a reference to the Google LLM setup guide

Comment on lines +53 to +55:

    logging.getLogger(__name__).debug(
        "Failed to configure Datadog logging: %s", e
    )
Copilot AI Nov 28, 2025
The Datadog log handler initialization occurs before logging.basicConfig is called, but the logging.getLogger(__name__).debug() call at line 53 relies on logging being configured. If DD_LOGS_ENABLED=true and Datadog fails to initialize, the debug message at line 53-55 might not be properly formatted or output. Consider moving the Datadog handler setup after the initial logging.basicConfig call, or use print() for the error message since logging may not be configured yet.

Suggested change:

    - logging.getLogger(__name__).debug(
    -     "Failed to configure Datadog logging: %s", e
    - )
    + print(f"Failed to configure Datadog logging: {e}")

Comment on lines +272 to +285
    except Exception as e:
        if llmobs_span:
            self._annotate_llmobs(llmobs_span, llmobs_input, error=e)
        raise

    logger.debug(
        "LLM response received with %d parts, %d input tokens and %d output tokens",
        len(response.parts),
        response.usage.request_tokens or 0,
        response.usage.response_tokens or 0,
    )

    if llmobs_span:
        self._annotate_llmobs(llmobs_span, llmobs_input, response=response)
Copilot AI Nov 28, 2025

The LLMObs span cleanup logic has a potential issue. If _annotate_llmobs is called twice with the same span (once in the error handler at line 274 and once in the success path at line 285), the span will be exited twice, which could cause errors.

Consider tracking whether the span has been exited to avoid double-exit:

llmobs_span_exited = False
try:
    response = await model_request(...)
except Exception as e:
    if llmobs_span:
        self._annotate_llmobs(llmobs_span, llmobs_input, error=e)
        llmobs_span_exited = True
    raise

if llmobs_span and not llmobs_span_exited:
    self._annotate_llmobs(llmobs_span, llmobs_input, response=response)

0Shark (Owner, Author) commented:

@copilot open a new pull request to apply changes based on this feedback

Copilot AI commented Nov 29, 2025

@0Shark I've opened a new pull request, #2, to work on those changes. Once the pull request is ready, I'll request review from you.

Copilot AI and others added 3 commits on November 29, 2025, including "Fix LLMObs span double-exit issue with exit state tracking".
Co-authored-by: 0Shark <47896217+0Shark@users.noreply.github.com>
0Shark merged commit 2f7fa3d into main on Nov 29, 2025 (2 of 3 checks passed).