
Decentralising the AI industry, just some language model APIs...


Written by @xtekky & maintained by @hlohaus

By using this repository or any code related to it, you agree to the legal notice. The author is not responsible for the usage of this repository, nor does the author endorse it, nor is the author responsible for any copies, forks, re-uploads made by other users, or anything else related to GPT4Free. This is the author's only account and repository. To prevent impersonation or irresponsible actions, please comply with the GNU GPL license that this repository uses.

Warning

"gpt4free" serves as a PoC (proof of concept), demonstrating the development of an API package with multi-provider requests, with features like timeouts, load balance and flow control.

Note

Latest version: available on PyPI and Docker Hub

pip install -U g4f
docker pull hlohaus789/g4f

🆕 What's New

🔻 Site Takedown

Is your site listed in this repository and you want it taken down? Send an email to [email protected] with proof that it is yours and it will be removed as quickly as possible. To prevent it from being re-added, please secure your API ;)

🚀 Feedback and Todo

You can always leave some feedback here: https://forms.gle/FeWV9RLEedfdkmFN6

As per the survey, here is a list of improvements to come:

  • Update the repository to include the new openai library syntax (e.g. the OpenAI() class) | completed, use g4f.client.Client
  • Golang implementation
  • 🚧 Improve documentation (in /docs: guides, how-tos, and video tutorials)
  • Improve the provider status list & updates
  • Tutorials on how to reverse sites to write your own wrapper (PoC only, of course)
  • Improve the Bing wrapper (wait and retry, or reuse the conversation)
  • Write a standard provider performance test to improve stability
  • Potential support and development of local models
  • 🚧 Improve compatibility and error handling

📚 Table of Contents

🛠️ Getting Started

Docker container

Quick start:
  1. Download and install Docker
  2. Pull the latest image and run the container:
docker pull hlohaus789/g4f
docker run -p 8080:8080 -p 1337:1337 -p 7900:7900 --shm-size="2g" -v ${PWD}/hardir:/app/hardir hlohaus789/g4f:latest
  3. Open the included client at http://localhost:8080/chat/ or set the API base in your client to http://localhost:1337/v1 (see the sketch after these steps)
  4. (Optional) If you need to log in to a provider, you can view the desktop from the container here: http://localhost:7900/?autoconnect=1&resize=scale&password=secret.
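
If you want to call the container from code rather than the web client, here is a minimal sketch. It assumes the endpoint on port 1337 is OpenAI-compatible (see /docs/interference) and that the official openai Python package is installed; the placeholder API key is an assumption, since the local endpoint should not require a real one.

from openai import OpenAI

# Assumption: the container's port 1337 exposes an OpenAI-compatible API;
# the key is a placeholder and not checked by the local endpoint.
client = OpenAI(base_url="http://localhost:1337/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
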
Use your smartphone:

Run the Web UI on Your Smartphone:

Use Python

Prerequisites:
  1. Download and install Python (version 3.10+ is recommended).
  2. Install Google Chrome for providers that require a webdriver.
Install using PyPI package:
pip install -U g4f[all]

How do I install only parts or disable parts? Use partial requirements: /docs/requirements

Install from source:

How do I load the project using git and install the project requirements? Read this tutorial and follow it step by step: /docs/git

Install using Docker:

How do I build and run the compose image from source? Use docker-compose: /docs/docker

💡 Usage

Text Generation

from g4f.client import Client

client = Client()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello"}],
    # Add any other parameters here if needed
)
print(response.choices[0].message.content)

Output:

Hello! How can I assist you today?

Image Generation

from g4f.client import Client

client = Client()
response = client.images.generate(
    model="gemini",
    prompt="a white siamese cat",
    # Add any other parameters here if needed
)
image_url = response.data[0].url

Image with cat

Full Documentation for Python API

Webview GUI

Open the GUI in a window of your OS. It runs on a local/static/SSL server and uses a JavaScript API. It supports login to OpenAI Chat, image upload, and streamed text generation.

It supports all platforms, but has only been tested on Linux.

  1. Install all requirements with:
pip install g4f[webview]
  2. Follow the OS-specific steps here: pywebview installation

  3. Run the app with:

from g4f.gui.webview import run_webview
run_webview(debug=True)

or execute the following command:

python -m g4f.gui.webview -debug

Webserver

To start the web interface, run the following code in Python:

from g4f.gui import run_gui
run_gui()

or execute the following command:

python -m g4f.cli gui -port 8080 -debug

Interference API

You can use the Interference API to serve other OpenAI integrations with G4F.

See: /docs/interference
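
As a rough sketch, assuming the Interference API is running locally on port 1337 (for example inside the Docker container above) and mirrors the OpenAI chat completions route, a plain HTTP call could look like this; the exact route and payload shape are assumptions based on the OpenAI API format:

import requests

# Assumption: the Interference API exposes an OpenAI-style
# /v1/chat/completions route on port 1337 (see /docs/interference).
resp = requests.post(
    "http://localhost:1337/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Hello"}],
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])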

Configuration

Cookies

You need cookies for BingCreateImages and the Gemini provider. From Bing you need the "_U" cookie, and from Gemini you need the "__Secure-1PSID" cookie. Sometimes the "__Secure-1PSID" cookie is not needed, but other auth cookies are. You can pass the cookies in the create function, or use the set_cookies setter before you run G4F:

from g4f.cookies import set_cookies

set_cookies(".bing.com", {
  "_U": "cookie value"
})
set_cookies(".google.com", {
  "__Secure-1PSID": "cookie value"
})
...

.HAR File for OpenaiChat Provider

Generating a .HAR File

To utilize the OpenaiChat provider, a .har file is required from https://chat.openai.com/. Follow the steps below to create a valid .har file:

  1. Navigate to https://chat.openai.com/ using your preferred web browser and log in with your credentials.
  2. Access the Developer Tools in your browser. This can typically be done by right-clicking the page and selecting "Inspect," or by pressing F12 or Ctrl+Shift+I (Cmd+Option+I on a Mac).
  3. With the Developer Tools open, switch to the "Network" tab.
  4. Reload the website to capture the loading process within the Network tab.
  5. Initiate an action in the chat which can be captured in the .har file.
  6. Right-click any of the network activities listed and select "Save all as HAR with content" to export the .har file.
Storing the .HAR File
  • Place the exported .har file in the ./hardir directory if you are using Docker. Alternatively, you can store it in any preferred location within your current working directory.

Note: Ensure that your .har file is stored securely, as it may contain sensitive information.

Using Proxy

If you want to hide or change your IP address for the providers, you can set a proxy globally via an environment variable:

  • On macOS and Linux:
export G4F_PROXY="http://host:port"
  • On Windows:
set G4F_PROXY=http://host:port
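
If setting the variable in the shell is inconvenient, it can also be set from Python before g4f is imported. This is only a sketch built on the environment variable described above; it assumes the variable is honoured for requests made after it is set.

import os

# Sketch: set the proxy before importing g4f so that G4F_PROXY (see above)
# is already present when requests are made. "http://host:port" is a
# placeholder, exactly as in the shell examples.
os.environ["G4F_PROXY"] = "http://host:port"

import g4f  # imported after the variable is set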

🚀 Providers and Models

GPT-4

Website Provider GPT-3.5 GPT-4 Stream Status Auth
bing.com g4f.Provider.Bing ❌ ✔️ ✔️ Unknown ❌
chatgpt.ai g4f.Provider.ChatgptAi ❌ ✔️ ✔️ Active ❌
liaobots.site g4f.Provider.Liaobots ✔️ ✔️ ✔️ Active ❌
chat.openai.com g4f.Provider.OpenaiChat ✔️ ❌ ✔️ Unknown ✔️
raycast.com g4f.Provider.Raycast ✔️ ✔️ ✔️ Unknown ✔️
beta.theb.ai g4f.Provider.Theb ✔️ ✔️ ✔️ Unknown ❌
you.com g4f.Provider.You ✔️ ✔️ ✔️ Unknown ❌

GPT-3.5

Website Provider GPT-3.5 GPT-4 Stream Status Auth
chat3.aiyunos.top g4f.Provider.AItianhuSpace ✔️ ❌ ✔️ Unknown ❌
chatforai.store g4f.Provider.ChatForAi ✔️ ❌ ✔️ Active ❌
chatgpt4online.org g4f.Provider.Chatgpt4Online ✔️ ❌ ✔️ Active ❌
chatgpt-free.cc g4f.Provider.ChatgptNext ✔️ ❌ ✔️ Active ❌
chatgptx.de g4f.Provider.ChatgptX ✔️ ❌ ✔️ Active ❌
flowgpt.com g4f.Provider.FlowGpt ✔️ ❌ ✔️ Active ❌
freegptsnav.aifree.site g4f.Provider.FreeGpt ✔️ ❌ ✔️ Unknown ❌
gpttalk.ru g4f.Provider.GptTalkRu ✔️ ❌ ✔️ Active ❌
koala.sh g4f.Provider.Koala ✔️ ❌ ✔️ Active ❌
app.myshell.ai g4f.Provider.MyShell ✔️ ❌ ✔️ Unknown ❌
perplexity.ai g4f.Provider.PerplexityAi ✔️ ❌ ✔️ Unknown ❌
poe.com g4f.Provider.Poe ✔️ ❌ ✔️ Unknown ✔️
talkai.info g4f.Provider.TalkAi ✔️ ❌ ✔️ Unknown ❌
chat.vercel.ai g4f.Provider.Vercel ✔️ ❌ ✔️ Active ❌
aitianhu.com g4f.Provider.AItianhu ✔️ ❌ ✔️ Inactive ❌
chatgpt.bestim.org g4f.Provider.Bestim ✔️ ❌ ✔️ Inactive ❌
chatbase.co g4f.Provider.ChatBase ✔️ ❌ ✔️ Inactive ❌
chatgptdemo.info g4f.Provider.ChatgptDemo ✔️ ❌ ✔️ Inactive ❌
chat.chatgptdemo.ai g4f.Provider.ChatgptDemoAi ✔️ ❌ ✔️ Inactive ❌
chatgptfree.ai g4f.Provider.ChatgptFree ✔️ ❌ ❌ Inactive ❌
chatgptlogin.ai g4f.Provider.ChatgptLogin ✔️ ❌ ✔️ Inactive ❌
chat.3211000.xyz g4f.Provider.Chatxyz ✔️ ❌ ✔️ Inactive ❌
gpt6.ai g4f.Provider.Gpt6 ✔️ ❌ ✔️ Inactive ❌
gptchatly.com g4f.Provider.GptChatly ✔️ ❌ ❌ Inactive ❌
ai18.gptforlove.com g4f.Provider.GptForLove ✔️ ❌ ✔️ Inactive ❌
gptgo.ai g4f.Provider.GptGo ✔️ ❌ ✔️ Inactive ❌
gptgod.site g4f.Provider.GptGod ✔️ ❌ ✔️ Inactive ❌
onlinegpt.org g4f.Provider.OnlineGpt ✔️ ❌ ✔️ Inactive ❌

Other

Website Provider GPT-3.5 GPT-4 Stream Status Auth
openchat.team g4f.Provider.Aura ❌ ❌ ✔️ Active ❌
bard.google.com g4f.Provider.Bard ❌ ❌ ❌ Unknown ✔️
deepinfra.com g4f.Provider.DeepInfra ❌ ❌ ✔️ Active ❌
free.chatgpt.org.uk g4f.Provider.FreeChatgpt ❌ ❌ ✔️ Active ❌
gemini.google.com g4f.Provider.Gemini ❌ ❌ ✔️ Active ✔️
ai.google.dev g4f.Provider.GeminiPro ❌ ❌ ✔️ Active ✔️
gemini-chatbot-sigma.vercel.app g4f.Provider.GeminiProChat ❌ ❌ ✔️ Unknown ❌
huggingface.co g4f.Provider.HuggingChat ❌ ❌ ✔️ Active ❌
huggingface.co g4f.Provider.HuggingFace ❌ ❌ ✔️ Active ❌
llama2.ai g4f.Provider.Llama2 ❌ ❌ ✔️ Active ❌
labs.perplexity.ai g4f.Provider.PerplexityLabs ❌ ❌ ✔️ Active ❌
pi.ai g4f.Provider.Pi ❌ ❌ ✔️ Active ❌
theb.ai g4f.Provider.ThebApi ❌ ❌ ❌ Unknown ✔️
open-assistant.io g4f.Provider.OpenAssistant ❌ ❌ ✔️ Inactive ✔️

Models

Model Base Provider Provider Website
gpt-3.5-turbo OpenAI 5+ Providers openai.com
gpt-4 OpenAI 2+ Providers openai.com
gpt-4-turbo OpenAI g4f.Provider.Bing openai.com
Llama-2-7b-chat-hf Meta 2+ Providers llama.meta.com
Llama-2-13b-chat-hf Meta 2+ Providers llama.meta.com
Llama-2-70b-chat-hf Meta 3+ Providers llama.meta.com
CodeLlama-34b-Instruct-hf Meta 2+ Providers llama.meta.com
CodeLlama-70b-Instruct-hf Meta 2+ Providers llama.meta.com
Mixtral-8x7B-Instruct-v0.1 Huggingface 4+ Providers huggingface.co
Mistral-7B-Instruct-v0.1 Huggingface 4+ Providers huggingface.co
dolphin-2.6-mixtral-8x7b Huggingface g4f.Provider.DeepInfra huggingface.co
lzlv_70b_fp16_hf Huggingface g4f.Provider.DeepInfra huggingface.co
airoboros-70b Huggingface g4f.Provider.DeepInfra huggingface.co
airoboros-l2-70b-gpt4-1.4.1 Huggingface g4f.Provider.DeepInfra huggingface.co
openchat_3.5 Huggingface 2+ Providers huggingface.co
gemini Google g4f.Provider.Gemini gemini.google.com
gemini-pro Google 2+ Providers gemini.google.com
claude-v2 Anthropic 1+ Providers anthropic.com
claude-3-opus Anthropic g4f.Provider.You anthropic.com
claude-3-sonnet Anthropic g4f.Provider.You anthropic.com
pi Inflection g4f.Provider.Pi inflection.ai
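
To pin a request to a specific provider from the tables above, the provider can be passed to the Python client. This is a sketch based on the g4f client shown earlier; the provider keyword follows the client documentation, and exact parameter names may differ between versions.

from g4f.client import Client
from g4f.Provider import Bing

# Sketch: route the request through a single provider from the tables above.
client = Client(provider=Bing)
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)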

🔗 Powered by gpt4free

🎁 Projects:
  • gpt4free
  • gpt4free-ts
  • Free AI API's & Potential Providers List
  • ChatGPT-Clone
  • Ai agent
  • ChatGpt Discord Bot
  • chatGPT-discord-bot
  • Nyx-Bot (Discord)
  • LangChain gpt4free
  • ChatGpt Telegram Bot
  • ChatGpt Line Bot
  • Action Translate Readme
  • Langchain Document GPT
  • python-tgpt

🤝 Contribute

We welcome contributions from the community. Whether you're adding new providers or features, or simply fixing typos and making small improvements, your input is valued. Creating a pull request is all it takes – our co-pilot will handle the code review process. Once all changes have been addressed, we'll merge the pull request into the main branch and release the updates at a later time.

Guide: How do I create a new Provider?
Guide: How can AI help me with writing code?

🙌 Contributors

A list of all contributors is available here
The Vercel.py file contains code from vercel-llm-api by @ading2210, which is licensed under the GNU GPL v3
Top 1 Contributor: @hlohaus

©️ Copyright

This program is licensed under the GNU GPL v3

xtekky/gpt4free: Copyright (C) 2023 xtekky

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program.  If not, see <https://www.gnu.org/licenses/>.

⭐ Star History

Star History Chart

📄 License

This project is licensed under the GNU GPL v3.0.

(🔼 Back to top)
