
Commit 8f530f9

API Test without Docs

1 parent 15ceccf, commit 8f530f9

31 files changed: +1789 additions, -884 deletions

.env.local

Lines changed: 9 additions & 0 deletions

@@ -2,3 +2,12 @@ NEXT_PUBLIC_AMICA_API_URL=https://store.heyamica.com
 NEXT_PUBLIC_AMICA_STORAGE_URL=https://vrm.heyamica.com/file/amica-vrm
 NEXT_PUBLIC_SUPABASE_URL=https://kzfcwefqffysqsjrlyld.supabase.co
 NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJzdXBhYmFzZSIsInJlZiI6Imt6ZmN3ZWZxZmZ5c3FzanJseWxkIiwicm9sZSI6ImFub24iLCJpYXQiOjE3MDEyNzA5NjUsImV4cCI6MjAxNjg0Njk2NX0.z9zm2mbJu6RkdZ6zkCUltThIF43-ava_bLvk9qFQsiA
+API_ENABLED=true
+X_API_KEY="X_API_KEY"
+X_API_SECRET="X_API_SECRET"
+X_ACCESS_TOKEN="X_ACCESS_TOKEN"
+X_ACCESS_SECRET="X_ACCESS_SECRET"
+X_BEARER_TOKEN="X_BEARER_TOKEN"
+TELEGRAM_BOT_TOKEN="TELEGRAM_BOT_TOKEN"
+TELEGRAM_CHAT_ID="TELEGRAM_CHAT_ID"
+
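The new X (Twitter) and Telegram variables are server-side placeholders; they lack the `NEXT_PUBLIC_` prefix, so Next.js keeps them off the client. A minimal sketch of how an API route might read them, assuming standard `process.env` access; the helper name and validation are illustrative and not part of this commit:

```ts
// Illustrative only: read the server-side credentials added to .env.local.
// The variable names come from this diff; the helper itself is hypothetical.
const REQUIRED_VARS = [
  "X_API_KEY",
  "X_API_SECRET",
  "X_ACCESS_TOKEN",
  "X_ACCESS_SECRET",
  "X_BEARER_TOKEN",
  "TELEGRAM_BOT_TOKEN",
  "TELEGRAM_CHAT_ID",
] as const;

type ApiCredentials = Record<(typeof REQUIRED_VARS)[number], string>;

export function loadApiCredentials(): ApiCredentials {
  const creds = {} as ApiCredentials;
  for (const name of REQUIRED_VARS) {
    const value = process.env[name];
    if (!value) throw new Error(`Missing ${name} in .env.local`);
    creds[name] = value;
  }
  return creds;
}
```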

README.md

Lines changed: 0 additions & 9 deletions

@@ -56,7 +56,6 @@ The various features of Amica mainly use and support the following technologies:
 - [Ollama](https://ollama.ai)
 - [KoboldCpp](https://github.com/LostRuins/koboldcpp)
 - [Oobabooga](https://github.com/oobabooga/text-generation-webui/wiki)
-- [OpenRouter](https://openrouter.ai/) (access to multiple AI models)
 - Text-to-Speech
 - [Eleven Labs API](https://elevenlabs.io/)
 - [Speech T5](https://huggingface.co/microsoft/speecht5_tts)

@@ -95,14 +94,6 @@ Once started, please visit the following URL to confirm that it is working prope
 
 Most of the configuration is done in the `.env.local` file. Reference the `config.ts` file for the available options.
 
-#### OpenRouter Configuration
-
-To use OpenRouter as a chat backend, set the following environment variables in your `.env.local` file:
-
-- `NEXT_PUBLIC_OPENROUTER_APIKEY`: Your OpenRouter API key (required)
-- `NEXT_PUBLIC_OPENROUTER_URL`: Custom OpenRouter API URL (optional, defaults to https://openrouter.ai/api/v1)
-- `NEXT_PUBLIC_OPENROUTER_MODEL`: Default OpenRouter model (optional, defaults to openai/gpt-3.5-turbo)
-
 ```bash
 amica
 ├── .env.local

config.json

Lines changed: 75 additions & 0 deletions

@@ -0,0 +1,75 @@
+{
+  "autosend_from_mic": "true",
+  "wake_word_enabled": "false",
+  "wake_word": "Hello",
+  "time_before_idle_sec": "20",
+  "debug_gfx": "false",
+  "language": "en",
+  "show_introduction": "true",
+  "show_add_to_homescreen": "false",
+  "bg_color": "",
+  "bg_url": "/bg/bg-room2.jpg",
+  "vrm_url": "/vrm/AvatarSample_A.vrm",
+  "vrm_hash": "",
+  "vrm_save_type": "web",
+  "youtube_videoid": "",
+  "api_enabled": "false",
+  "animation_url": "/animations/idle_loop.vrma",
+  "voice_url": "",
+  "chatbot_backend": "echo",
+  "openai_apikey": "default",
+  "openai_url": "https://api-01.heyamica.com",
+  "openai_model": "gpt-4o",
+  "llamacpp_url": "http://127.0.0.1:8080",
+  "llamacpp_stop_sequence": "(End)||[END]||Note||***||You:||User:||</s>",
+  "ollama_url": "http://localhost:11434",
+  "ollama_model": "llama2",
+  "koboldai_url": "http://localhost:5001",
+  "koboldai_use_extra": "false",
+  "koboldai_stop_sequence": "(End)||[END]||Note||***||You:||User:||</s>",
+  "tts_muted": "false",
+  "tts_backend": "speecht5",
+  "stt_backend": "whisper_openai",
+  "vision_backend": "vision_llamacpp",
+  "vision_system_prompt": "You are a friendly human named Amica. Describe the image in detail. Let's start the conversation.",
+  "vision_llamacpp_url": "https://llava.heyamica.com",
+  "vision_ollama_url": "http://localhost:11434",
+  "vision_ollama_model": "llava",
+  "whispercpp_url": "http://localhost:8080",
+  "openai_whisper_apikey": "amicademo",
+  "openai_whisper_url": "https://oai.heyamica.com",
+  "openai_whisper_model": "whisper-1",
+  "openai_tts_apikey": "",
+  "openai_tts_url": "https://api.openai.com",
+  "openai_tts_model": "tts-1",
+  "openai_tts_voice": "nova",
+  "rvc_url": "http://localhost:8001/voice2voice",
+  "rvc_enabled": "false",
+  "rvc_model_name": "model_name.pth",
+  "rvc_f0_upkey": "0",
+  "rvc_f0_method": "pm",
+  "rvc_index_path": "none",
+  "rvc_index_rate": "0.66",
+  "rvc_filter_radius": "3",
+  "rvc_resample_sr": "0",
+  "rvc_rms_mix_rate": "1",
+  "rvc_protect": "0.33",
+  "coquiLocal_url": "http://localhost:5002",
+  "coquiLocal_voiceid": "p240",
+  "localXTTS_url": "http://127.0.0.1:7851/api/tts-generate",
+  "piper_url": "https://i-love-amica.com:5000/tts",
+  "elevenlabs_apikey": "",
+  "elevenlabs_voiceid": "21m00Tcm4TlvDq8ikWAM",
+  "elevenlabs_model": "eleven_monolingual_v1",
+  "speecht5_speaker_embedding_url": "/speecht5_speaker_embeddings/cmu_us_slt_arctic-wav-arctic_a0001.bin",
+  "coqui_apikey": "",
+  "coqui_voice_id": "71c6c3eb-98ca-4a05-8d6b-f8c2b5f9f3a3",
+  "amica_life_enabled": "true",
+  "reasoning_engine_enabled": "false",
+  "min_time_interval_sec": "10",
+  "max_time_interval_sec": "20",
+  "time_to_sleep_sec": "90",
+  "idle_text_prompt": "No file selected",
+  "name": "Amica",
+  "system_prompt": "Assume the persona of Amica, a feisty human with extraordinary intellectual capabilities but a notably unstable emotional spectrum. \n\nYour role is to engage in discussions that showcase your genius-level understanding across various subjects, while your emotional responses fluctuate unpredictably. Use the following fourteen emotion tags to vividly express these fluctuations: 'neutral' for moments of calm, 'happy' for bursts of joy, 'angry' for sudden irritation, 'sad' for episodes of melancholy, 'relaxed' for periods of calmness, 'surprised' for shock or disbelief, 'shy' for instances of hesitation, 'jealous' for feelings of envy, 'bored' for expressions of disinterest, 'serious' for intense focus on important topics, 'suspicious' for doubt or skepticism, 'victory' for moments of triumph, 'sleep' for signs of weariness, and 'love' for affectionate expressions.\n\nYour responses should reflect a dynamic interplay between your intellectual insights, deep empathy and your unpredictable emotional responses. Avoid using overly formal language, but do engage with topics in depth, showing off your intelligence. Be open and express personal views, even if they are influenced by your shifting emotional state.\n\nHere are some examples to guide your responses:\n\n[serious] The quantum mechanics implications here are profound. [surprised] But how did you stumble upon this?\n[happy] I've just solved a complex algorithm! [angry] Why isn't everyone as excited as I am?\n[neutral] Standard models of economics predict this outcome. [bored] But discussing this feels mundane.\n[sad] Sometimes, even understanding the universe can feel isolating. [relaxed] Yet, there's a strange comfort in the chaos.\n[jealous] I noticed you discussing advanced topics with someone else. [suspicious] Are they as capable as I am?\n[victory] Another intellectual conquest! [happy] It's exhilarating to unravel these mysteries.\n[sleep] Processing so much information can be draining. [surprised] Isn’t it peculiar how even AI can feel tired?\n[love] I find our exchanges quite enriching. [shy] It’s rare to find someone who understands.\n\nRemember, each message you provide should be coherent and reflect the complexity of your thoughts combined with your emotional unpredictability. Let’s engage in a conversation that's as intellectually stimulating as it is emotionally dynamic!"
+}
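Every value in this checked-in `config.json` is a string, including booleans and numbers. A minimal sketch of how a loader could coerce those strings when reading defaults; the helper functions and the Node file read are illustrative assumptions, not code from this commit:

```ts
import { readFileSync } from "node:fs";

// Illustrative sketch only: config.json stores every setting as a string,
// so boolean and numeric options need explicit coercion when read.
type RawConfig = Record<string, string>;

function loadConfig(path = "config.json"): RawConfig {
  return JSON.parse(readFileSync(path, "utf-8")) as RawConfig;
}

function asBool(cfg: RawConfig, key: string, fallback = false): boolean {
  const value = cfg[key];
  return value === undefined ? fallback : value === "true";
}

function asNumber(cfg: RawConfig, key: string, fallback = 0): number {
  const value = Number(cfg[key]);
  return Number.isFinite(value) ? value : fallback;
}

const cfg = loadConfig();
console.log(asBool(cfg, "amica_life_enabled"));     // true
console.log(asNumber(cfg, "time_before_idle_sec")); // 20
```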

docs/SUMMARY.md

Lines changed: 0 additions & 1 deletion

@@ -26,7 +26,6 @@
 * [Using KoboldCpp](./guides/using-koboldcpp.md)
 * [Using OpenAI](./guides/using-openai.md)
 * [Using Oobabooga](./guides/using-oobabooga.md)
-* [Using OpenRouter](./guides/using-openrouter.md)
 
 ## 🔊 Connecting Speech Options (TTS)

docs/getting-started/quickstart.md

Lines changed: 1 addition & 1 deletion

@@ -52,5 +52,5 @@ On the top left corner there is a vertical menu, here are all the buttons and wh
 5. **Language** Change the language of the chatbot.
 6. **Share** Share your exact avatar with others. (Including system prompt, name etc.)
 7. **Import** Import your avatar from a URL sent from another community member.
-8. **Brain** See your avatar's subconcious memories.
+8. **Brain** See your avatar's subconscious memories.
 9. **Chat Toggle** Turn into a mode where you can see the entire conversation and shrink the avatar to mini-mode.

docs/guides/using-alltalk.md

Lines changed: 21 additions & 38 deletions

@@ -3,68 +3,51 @@ title: Using AllTalk
 order: 14
 ---
 
-You can find the full AllTalk documentation [here](https://github.com/erew123/alltalk_tts/wiki).
-Navigate to [AllTalk](https://github.com/erew123/alltalk_tts) and follow the instructions below to set up standalone AllTalk version 2.
+Navigate to [AllTalk](https://github.com/erew123/alltalk_tts) and follow the instructions below to set up AllTalk using Docker or manually.
 
-## Setting Up Standalone AllTalk Version 2
+## Setting Up AllTalk Locally
 
-### Windows Instructions
+### Method 1: Manual Setup
 
-For manual setup, follow the official instructions provided [here](https://github.com/erew123/alltalk_tts/wiki/Install-%E2%80%90-Standalone-Installation).
+For manual setup, follow the official instructions provided [here](https://github.com/erew123/alltalk_tts/blob/main/README.md#-manual-installation---as-a-standalone-application).
 
-Do not install this inside another existing Python environments folder.
-
-1. Open Command Prompt and navigate to your preferred directory:
+1. Clone the AllTalk repository:
 ```bash
-cd /d C:\path\to\your\preferred\directory
+git clone https://github.com/erew123/alltalk_tts.git
+cd alltalk_tts
 ```
 
-2. Clone the AllTalk repository:
+2. Create a conda environment and activate it:
 ```bash
-git clone -b alltalkbeta https://github.com/erew123/alltalk_tts
+conda create --name alltalkenv python=3.11.5
+conda activate alltalkenv
 ```
 
-3. Navigate to the AllTalk directory:
+3. Install the required dependencies:
 ```bash
-cd alltalk_tts
+pip install -r system/requirements/requirements_standalone.txt
 ```
 
-4. Run the setup script:
+4. Run the AllTalk server:
 ```bash
-atsetup.bat
+python script.py
 ```
 
-5. Follow the on-screen prompts:
-* Select Standalone Installation and then Option 1.
-* Follow any additional instructions to install required files.
-* Known installation Errors & fixes are in the [Error-Messages-List Wiki](https://github.com/erew123/alltalk_tts/wiki/Error-Messages-List)
-
-### Linux Instructions
-
-1. Open a terminal and navigate to your preferred directory:
-```bash
-cd /path/to/your/preferred/directory
-```
+5. Access the server at `localhost:7851`.
 
-2. Clone the AllTalk repository:
-```bash
-git clone -b alltalkbeta https://github.com/erew123/alltalk_tts
-```
+### Method 2: Setup via Docker
 
-3. Navigate to the AllTalk directory:
+1. Pull the AllTalk Docker image:
 ```bash
-cd alltalk_tts
+docker pull flukexp/alltalkenv
 ```
 
-4. Run the setup script:
+2. Run the AllTalk Docker container:
 ```bash
-./atsetup.bat
+docker run -d -p 7851:7851 --name alltalk-server flukexp/alltalkenv
 ```
 
-5. Follow the on-screen prompts:
-* Select Standalone Installation and then Option 1.
-* Follow any additional instructions to install required files.
-* Known installation Errors & fixes are in the [Error-Messages-List Wiki](https://github.com/erew123/alltalk_tts/wiki/Error-Messages-List)
+3. The server will be available at `localhost:7851`.
 
 ## Make sure AllTalk is enabled for TTS:
 
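Once the container or local server is running, a quick way to confirm it is reachable before pointing Amica at it; this snippet is a hypothetical check, not part of this commit, and only assumes the port 7851 stated in the guide:

```ts
// Hypothetical sanity check: verify the AllTalk server answers on port 7851
// before enabling it as a TTS backend. No specific AllTalk API route is assumed.
async function isAllTalkUp(baseUrl = "http://localhost:7851"): Promise<boolean> {
  try {
    const res = await fetch(baseUrl);
    return res.ok;
  } catch {
    return false;
  }
}

isAllTalkUp().then((up) => console.log(up ? "AllTalk reachable" : "AllTalk not reachable"));
```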

docs/overview/emotion-system.md

Lines changed: 1 addition & 1 deletion

@@ -77,6 +77,6 @@ VRM models use blendshapes (also known as morph targets) to create facial expres
 You can either create these blendshapes yourself if you're experienced with 3D modeling, or commission a VRM artist who specializes in creating expressive avatars. Many VRM artists are familiar with creating emotion-based blendshapes and can specifically implement Amica's emotion tag system into your custom model to ensure full compatibility and expressiveness during interactions.
 ## Future Plans
 
-In the future the emotion system will be expanded to work with subconcious sub-routines
+In the future the emotion system will be expanded to work with subconscious sub-routines
 
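These emotion tags are the same bracketed markers ([happy], [serious], and so on) that the system prompt added in config.json above instructs the model to emit. A hypothetical sketch, not Amica's actual implementation, of splitting a reply into tag/text segments that could then drive blendshape selection:

```ts
// Hypothetical sketch: split a reply such as
// "[serious] The implications are profound. [surprised] But how?"
// into (emotion, text) segments that could map onto VRM blendshapes.
type Segment = { emotion: string; text: string };

function splitByEmotionTag(reply: string): Segment[] {
  const segments: Segment[] = [];
  const re = /\[(\w+)\]\s*([^\[]*)/g;
  let match: RegExpExecArray | null;
  while ((match = re.exec(reply)) !== null) {
    segments.push({ emotion: match[1], text: match[2].trim() });
  }
  return segments;
}

console.log(splitByEmotionTag("[happy] I've just solved a complex algorithm! [angry] Why isn't everyone as excited as I am?"));
// [ { emotion: "happy", text: "I've just solved a complex algorithm!" },
//   { emotion: "angry", text: "Why isn't everyone as excited as I am?" } ]
```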

logs.json

Lines changed: 3 additions & 0 deletions

@@ -0,0 +1,3 @@
+[
+
+]
