diff --git a/features/edge-tools.mdx b/features/edge-tools.mdx
index 512a719..47538b6 100644
--- a/features/edge-tools.mdx
+++ b/features/edge-tools.mdx
@@ -69,6 +69,8 @@ These shared tools are available today. All Edge Tools use the `edgee_` prefix.
Example: if the model calls `edgee_current_time` with `{"timezone": "Europe/Paris"}`, the gateway returns the current time in Paris. If the model calls `edgee_generate_uuid`, the gateway returns a new UUID.
+We regularly add new tools. If there's a tool you need that we don't support, [request a tool](https://www.edgee.ai/~/me/edgee-tools?display-request-dialog=true).
+
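The tool-call exchange above can be sketched as a tiny dispatcher. This is an illustrative sketch only: the `edgee_` tool names come from this page, but `dispatchEdgeTool` and the handler wiring are hypothetical, not the gateway's actual implementation.

```typescript
// Hypothetical sketch of gateway-side dispatch for two Edge Tools.
// Only the tool names are from the docs; the wiring is illustrative.
import { randomUUID } from "node:crypto";

type EdgeToolHandler = (args: Record<string, unknown>) => string;

const edgeTools: Record<string, EdgeToolHandler> = {
  // Returns the current time formatted for the requested IANA timezone
  edgee_current_time: (args) =>
    new Date().toLocaleString("en-US", {
      timeZone: String(args.timezone ?? "UTC"),
    }),
  // Returns a fresh RFC 4122 UUID
  edgee_generate_uuid: () => randomUUID(),
};

export function dispatchEdgeTool(
  name: string,
  args: Record<string, unknown> = {},
): string {
  const handler = edgeTools[name];
  if (!handler) throw new Error(`Unknown Edge Tool: ${name}`);
  return handler(args);
}
```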
## Relation to function calling
The [SDK tools documentation](/sdk/typescript/tools) describes how to pass **tools** (function definitions) in each request so the model can request calls — that’s **client-side** function calling. **Edge Tools** are shared tools that run **at the gateway**, so you don’t have to run and wire every tool in your own backend.
diff --git a/introduction/faq.mdx b/introduction/faq.mdx
index ca78dc9..2b34b15 100644
--- a/introduction/faq.mdx
+++ b/introduction/faq.mdx
@@ -68,74 +68,74 @@ icon: message-circle-question-mark
- **And more**
To see the full list of supported models, [see our dedicated models page](https://www.edgee.ai/models).
-
- We regularly add new providers and models. If there's a model you need that we don't support, [let us know](https://www.edgee.ai/contact).
+
+ We regularly add new providers and models. If there's a model you need that we don't support, [let us know](https://www.edgee.ai/~/me/models?display-request-dialog=true).
Edgee adds less than 10ms of latency at the p99 level. Our edge network processes requests at the point of presence closest to your application, minimizing round-trip time.
-
+
For most AI applications, where LLM inference takes 500ms-5s, this overhead is negligible — typically less than 1-2% of total request time.
Edgee's routing engine analyzes each request and selects the optimal model based on your configuration:
-
+
- **Cost strategy**: Routes to the cheapest model capable of handling the request
- **Performance strategy**: Always uses the fastest, most capable model
- **Balanced strategy**: Finds the optimal trade-off within your latency and cost budgets
-
+
You can also define fallback chains — if your primary model is unavailable (rate limited, outage, etc.), Edgee automatically retries with your backup models.
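The three strategies could be sketched as a selection function over candidate models. Everything here (`Candidate`, `pickModel`, the cost-times-latency score) is a hypothetical illustration, not Edgee's real routing logic.

```typescript
// Illustrative sketch of the three routing strategies described above.
// The scoring is made up; Edgee's actual selection is internal.
interface Candidate {
  name: string;
  costPer1kTokens: number; // USD
  p99LatencyMs: number;
}

type Strategy = "cost" | "performance" | "balanced";

export function pickModel(
  candidates: Candidate[],
  strategy: Strategy,
  budget = { maxCost: Infinity, maxLatencyMs: Infinity },
): Candidate {
  // Prefer models within the latency and cost budgets, if any qualify
  const within = candidates.filter(
    (c) => c.costPer1kTokens <= budget.maxCost && c.p99LatencyMs <= budget.maxLatencyMs,
  );
  const pool = within.length > 0 ? within : candidates;
  switch (strategy) {
    case "cost": // cheapest model capable of handling the request
      return pool.reduce((a, b) => (b.costPer1kTokens < a.costPer1kTokens ? b : a));
    case "performance": // fastest model regardless of price
      return pool.reduce((a, b) => (b.p99LatencyMs < a.p99LatencyMs ? b : a));
    case "balanced": // best cost/latency trade-off within budgets
      return pool.reduce((a, b) =>
        b.costPer1kTokens * b.p99LatencyMs < a.costPer1kTokens * a.p99LatencyMs ? b : a,
      );
  }
}
```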
Edgee automatically handles provider failures:
-
+
1. **Detection**: We detect issues within seconds through health checks and error monitoring
2. **Retry**: For transient errors, we retry with exponential backoff
3. **Failover**: For persistent issues, we route to your configured backup models
-
+
Your application sees a seamless response — no errors, no interruption.
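The retry and failover steps above can be sketched as nested loops. The `callModel` signature, retry counts, and backoff constants are assumptions for illustration, not Edgee's internals.

```typescript
// Sketch of the detect -> retry -> failover flow described above.
async function callWithFailover(
  models: string[],
  callModel: (model: string) => Promise<string>,
  maxRetries = 3,
  baseDelayMs = 100,
): Promise<string> {
  for (const model of models) {
    for (let attempt = 0; attempt < maxRetries; attempt++) {
      try {
        return await callModel(model); // success: caller never sees the failures
      } catch {
        // transient error: exponential backoff (100ms, 200ms, 400ms, ...)
        await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
      }
    }
    // persistent failure: fall through to the next configured backup model
  }
  throw new Error("All models in the fallback chain failed");
}
```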
Every response from Edgee includes a `cost` field showing exactly how much that request cost in USD. You can also:
-
+
- View aggregated costs by model, project, or time period in the dashboard
- Set budget alerts at 80%, 90%, and 100% of your limit
- Receive webhook notifications when thresholds are crossed
- Export usage data for your own analysis
-
+
No more surprise bills at the end of the month.
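Client-side, the per-response `cost` field makes budget checks straightforward. A minimal sketch, assuming a simplified response shape (only the `cost` field in USD is from the docs; the record type and threshold helper are hypothetical):

```typescript
// Hypothetical budget-threshold check built on the per-response `cost` field.
interface UsageRecord {
  model: string;
  cost: number; // USD, as returned in each Edgee response
}

export function crossedThresholds(
  records: UsageRecord[],
  monthlyLimitUsd: number,
  thresholds = [0.8, 0.9, 1.0],
): number[] {
  const spent = records.reduce((sum, r) => sum + r.cost, 0);
  // Return every threshold the current spend has reached or passed
  return thresholds.filter((t) => spent >= monthlyLimitUsd * t);
}
```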
Yes! Edgee supports two modes:
-
+
1. **Edgee-managed keys**: We handle provider accounts and billing. Simple, but you pay our prices (with volume discounts available).
-
+
2. **Bring Your Own Key (BYOK)**: Use your existing provider API keys. You get your negotiated rates; we just route and observe.
-
+
You can mix both approaches — use your own OpenAI key while we handle Anthropic, for example.
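A mixed setup might be expressed like this. The shape below is purely hypothetical, intended only to make the BYOK/managed split concrete; it is not Edgee's actual configuration format.

```typescript
// Purely hypothetical shape for mixing BYOK and Edgee-managed keys.
const providerKeys = {
  openai: { mode: "byok", apiKey: process.env.OPENAI_API_KEY }, // your negotiated rates
  anthropic: { mode: "managed" }, // billed through Edgee
} as const;

export { providerKeys };
```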
Yes. Edgee is designed for compliance-sensitive workloads:
-
+
- **SOC 2 Type II** certified
- **GDPR** compliant with DPA available
- **Regional routing** to keep data in specific jurisdictions
-
+
Zero Data Retention mode ensures no personal data is ever stored on Edgee servers.
We're here to help:
-
+
- **Email**: support@edgee.ai
- **Discord**: [Join our community](https://www.edgee.ai/discord)
- **GitHub**: [Open an issue](https://github.com/edgee-ai)
-
+
Enterprise customers have access to dedicated support channels with guaranteed response times.