You can join our Discord (discord.gg/gptgod) for further updates.
Base URL: https://api.gptgod.online
API Key: sk-OsMMq65tXdfOIlTUYtocSL7NCsmA7CerN77OkEv29dODg1EA
Supported models:

- gpt-4-all
- gpt-3.5-turbo
- gpt-3.5-turbo-16k
- net-gpt-3.5-turbo
- net-gpt-3.5-turbo-16k
- claude-1-100k
- google-palm
- llama-2-70b
- llama-2-13b
- llama-2-7b
- code-llama-34b
- code-llama-13b
- code-llama-7b
- qwen-72b
- stable-diffusion
- mixtral-8x7b
- mistral-medium
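For a quick smoke test, the service can be called like any OpenAI-compatible endpoint (see the `/v1/chat/completions` route documented below). This is a minimal sketch, assuming Node 18+ (global `fetch`) and a standard OpenAI-style response shape; the exact fields returned may vary by model and site.

```ts
// Minimal sketch: call the OpenAI-compatible chat endpoint with the base URL and key above.
// Assumes Node 18+ (global fetch) and a standard OpenAI-style response shape.
const BASE_URL = "https://api.gptgod.online";
const API_KEY = "sk-OsMMq65tXdfOIlTUYtocSL7NCsmA7CerN77OkEv29dODg1EA";

async function chat(content: string, model = "gpt-3.5-turbo"): Promise<string> {
  const res = await fetch(`${BASE_URL}/v1/chat/completions`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content }],
    }),
  });
  if (!res.ok) throw new Error(`HTTP ${res.status}: ${await res.text()}`);
  const data: any = await res.json();
  // OpenAI-style responses put the reply at choices[0].message.content.
  return data.choices?.[0]?.message?.content ?? "";
}

chat("who are you").then(console.log).catch(console.error);
```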
I suggest you fork this project first, since some of the target websites may go offline at any time. I am still striving to keep it updated.

The sites and models implemented so far are listed below. If you do not want your website to appear here, please raise an issue and I will remove it immediately. Unfortunately, most of the sites listed are no longer available.
Updated 2023-09-10
Site | Models |
---|---|
you | gpt-3.5-turbo |
phind | net-gpt-3.5-turbo |
forefront | gpt-3.5-turbo, claude |
mcbbs | gpt-3.5-turbo, gpt-3.5-turbo-16k |
chatdemo | gpt-3.5-turbo, gpt-3.5-turbo-16k |
vita | gpt-3.5-turbo |
skailar | gpt-4 |
fakeopen | gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4 |
easychat | gpt-4 |
better | gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4 |
pweb | gpt-3.5-turbo, gpt-3.5-turbo-16k |
bai | gpt-3.5-turbo |
gra | gpt-3.5-turbo, gpt-3.5-turbo-16k |
magic | gpt-3.5-turbo, gpt-4, claude-instance, claude, claude-100k |
chim | gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4 |
ram | gpt-3.5-turbo-16k |
chur | gpt-3.5-turbo, gpt-3.5-turbo-16k |
xun | gpt-3.5-turbo, gpt-3.5-turbo-16k |
vvm | gpt-3.5-turbo, gpt-3.5-turbo-16k, gpt-4 |
poef | |
claude | claude-2-100k |
cursor | gpt-3.5-turbo, gpt-4 |
chatbase | gpt-3.5-turbo |
ails | gpt-3.5-turbo |
sincode | gpt-3.5-turbo, gpt-4 |
openai | too many to list |
jasper | gpt-3.5-turbo, gpt-4 |
pap | |
acytoo | gpt-3.5-turbo |
search | |
www | url |
ddg | search |
First of all, you should create a `.env` file. All of the run methods below require this step.
```
http_proxy=http://host:port
rapid_api_key=xxxxxxxxxx
EMAIL_TYPE=temp-email44
DEBUG=0
POOL_SIZE=0
PHIND_POOL_SIZE=0
```
- `http_proxy`: configure your proxy here if you cannot access the target website directly. If you don't need a proxy, delete this line.
- `rapid_api_key`: only needed if you use the forefront API (this site has been removed). The key is used to receive the registration email; you can get an API key from RapidAPI.
- `EMAIL_TYPE`: the temp-email provider to use. Options: `temp-email`, `temp-email44`, `tempmail-lol`
  - `temp-email`: soft limit of 100 requests/day; going over costs money and requires binding a credit card. Very stable!
  - `temp-email44`: hard limit of 100 requests/day! Stable!
  - `tempmail-lol`: nothing needed; limit of 25 requests per 5 minutes. Not stable.
- `DEBUG`: only valid when using forefront. You can set `DEBUG=1` when running locally to show the reverse-engineering process.
- `POOL_SIZE`: forefront concurrency size. Keep it set to 1 until you run it successfully!!! You can hold {POOL_SIZE} conversations concurrently; a larger pool allows more simultaneous conversations but uses more RAM.
- `PHIND_POOL_SIZE`: phind concurrency size. You can hold {PHIND_POOL_SIZE} conversations concurrently; a larger pool allows more simultaneous conversations but uses more RAM.
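As an aside, here is a minimal sketch of how a server might pick these variables up from `process.env` (illustrative only; this is not the project's actual config loader, and it assumes the `.env` file has already been loaded, e.g. with dotenv):

```ts
// Illustrative sketch only -- not the project's actual config code.
// Assumes the .env file has already been loaded into process.env (e.g. via dotenv).
interface EnvConfig {
  httpProxy?: string;     // http_proxy: optional proxy for sites you cannot reach directly
  rapidApiKey?: string;   // rapid_api_key: only needed for the (removed) forefront site
  emailType: string;      // EMAIL_TYPE: temp-email | temp-email44 | tempmail-lol
  debug: boolean;         // DEBUG=1 shows the reverse-engineering process (forefront only)
  poolSize: number;       // POOL_SIZE: forefront concurrency; a bigger pool uses more RAM
  phindPoolSize: number;  // PHIND_POOL_SIZE: phind concurrency
}

const config: EnvConfig = {
  httpProxy: process.env.http_proxy,
  rapidApiKey: process.env.rapid_api_key,
  emailType: process.env.EMAIL_TYPE ?? "temp-email44",
  debug: process.env.DEBUG === "1",
  poolSize: Number(process.env.POOL_SIZE ?? 0),
  phindPoolSize: Number(process.env.PHIND_POOL_SIZE ?? 0),
};

console.log(config);
```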
```
# install module
yarn

# start server
yarn start
```
Run with docker:

```
docker run -p 3000:3000 --env-file .env xiangsx/gpt4free-ts:latest
```

Deploy with docker-compose (first, create the `.env` file as described in the step above):

```
docker-compose up --build -d
```
- `http://127.0.0.1:3000/supports` [GET]: find supported sites and models
- `http://127.0.0.1:3000/:site/v1/chat/completions` [POST]: same as the OpenAI API
- `http://127.0.0.1:3000/v1/chat/completions?site=xxx` [POST]: same as the OpenAI API
- `http://127.0.0.1:3000/ask?prompt=***&model=***&site=***` [POST/GET]: returns when the chat is complete
- `http://127.0.0.1:3000/ask/stream?prompt=***&model=***&site=***` [POST/GET]: returns an event stream
- `prompt`: your question. It can be a `string` or a `jsonstr`.
  - example `jsonstr`: `[{"role":"user","content":"hello\n"},{"role":"assistant","content":"Hi there! How can I assist you today?"},{"role":"user","content":"who are you"}]`
  - example `string`: `who are you`
- `model`: default `gpt3.5-turbo`. Models include: `gpt4`, `gpt3.5-turbo`, `net-gpt3.5-turbo`, `gpt-3.5-turbo-16k`
- `site`: default `you`. Target sites include: `fakeopen`, `better`, `forefront`, `you`, `chatdemo`, `phind`, `vita`
Query the supported sites and models with the API `127.0.0.1:3000/supports`:
```
[
  {
    "site": "you",
    "models": [
      "gpt-3.5-turbo"
    ]
  },
  ...
]
```
Response when the chat ends (`/ask`):
```ts
interface ChatResponse {
  content: string;
  error?: string;
}
```
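As a rough illustration of the two plain endpoints (a sketch, assuming a local instance on 127.0.0.1:3000 and Node 18+ with global `fetch`), the snippet below lists the supported sites and then sends a message-history `jsonstr` prompt to `/ask`; note that the JSON has to be URL-encoded:

```ts
// Sketch: list supported sites, then call /ask with a JSON message history.
// Assumes a local instance on 127.0.0.1:3000 and Node 18+ (global fetch).
const HOST = "http://127.0.0.1:3000";

async function main(): Promise<void> {
  // 1. Which sites and models does this instance support?
  const supports = await (await fetch(`${HOST}/supports`)).json();
  console.log(supports);

  // 2. Send a message history as a jsonstr prompt (URLSearchParams handles the encoding).
  const history = [
    { role: "user", content: "hello\n" },
    { role: "assistant", content: "Hi there! How can I assist you today?" },
    { role: "user", content: "who are you" },
  ];
  const params = new URLSearchParams({
    site: "you",
    model: "gpt-3.5-turbo",
    prompt: JSON.stringify(history),
  });
  const res: { content?: string; error?: string } = await (
    await fetch(`${HOST}/ask?${params}`)
  ).json();
  console.log(res.error ?? res.content);
}

main().catch(console.error);
```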
Streaming response (`/ask/stream`, recommended):

```
event: message
data: {"content":"I"}

event: done
data: {"content":"done"}

event: error
data: {"error":"something wrong"}
```
- Request to site `you` with history

req:

res:

```
{
  "content": "Hi there! How can I assist you today?"
}
```
- Request to site `you` with a streaming response

req:

```
127.0.0.1:3000/ask/stream?site=you&prompt=who are you
```

res:

```
event: message
data: {"content":"I"}

event: message
data: {"content":"'m"}

event: message
data: {"content":" a"}

event: message
data: {"content":" search"}

event: message
data: {"content":" assistant"}

........

event: done
data: {"content":"done"}
```
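To consume the event stream programmatically, here is a minimal sketch (assuming Node 18+ and a local instance; it simply splits the body into `event:`/`data:` lines, which is enough for this endpoint but is not a full SSE parser):

```ts
// Sketch: read /ask/stream and print tokens as they arrive.
// Assumes Node 18+ (global fetch with web streams) and a local instance.
async function askStream(site: string, prompt: string): Promise<void> {
  const params = new URLSearchParams({ site, prompt });
  const res = await fetch(`http://127.0.0.1:3000/ask/stream?${params}`);
  if (!res.ok || !res.body) throw new Error(`HTTP ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  let event = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let idx: number;
    while ((idx = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, idx).trim();
      buffer = buffer.slice(idx + 1);
      if (line.startsWith("event:")) {
        event = line.slice("event:".length).trim();
      } else if (line.startsWith("data:")) {
        const data = JSON.parse(line.slice("data:".length).trim());
        if (event === "error") throw new Error(data.error);
        if (event === "done") return;              // stream finished
        process.stdout.write(data.content ?? "");  // "message": partial content
      }
    }
  }
}

askStream("you", "who are you").then(() => console.log()).catch(console.error);
```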
You may join our Discord (discord.gg/gpt4free) for further updates.
This is a replication of the gpt4free project, written in TypeScript.
This repository is not associated with or endorsed by providers of the APIs contained in this GitHub repository. This project is intended for educational purposes only. This is just a little personal project. Sites may contact me to improve their security or request the removal of their site from this repository.
Please note the following:

- Disclaimer: The APIs, services, and trademarks mentioned in this repository belong to their respective owners. This project does not claim any right over them, nor is it affiliated with or endorsed by any of the providers mentioned.
- Responsibility: The author of this repository is not responsible for any consequences, damages, or losses arising from the use or misuse of this repository or the content provided by the third-party APIs. Users are solely responsible for their actions and any repercussions that may follow. We strongly recommend that users follow the Terms of Service of each website.
- Educational Purposes Only: This repository and its content are provided strictly for educational purposes. By using the information and code provided, users acknowledge that they are using the APIs and models at their own risk and agree to comply with any applicable laws and regulations.
- Copyright: All content in this repository, including but not limited to code, images, and documentation, is the intellectual property of the repository author, unless otherwise stated. Unauthorized copying, distribution, or use of any content in this repository is strictly prohibited without the express written consent of the repository author.
- Indemnification: Users agree to indemnify, defend, and hold harmless the author of this repository from and against any and all claims, liabilities, damages, losses, or expenses, including legal fees and costs, arising out of or in any way connected with their use or misuse of this repository, its content, or related third-party APIs.
- Updates and Changes: The author reserves the right to modify, update, or remove any content, information, or features in this repository at any time without prior notice. Users are responsible for regularly reviewing the content and any changes made to this repository.
By using this repository or any code related to it, you agree to these terms. The author is not responsible for any copies, forks, or re-uploads made by other users. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license this repository uses.