Support HF_TOKEN environment variable in huggingface_provider.py #59
This PR updates `huggingface_provider.py` to pull the Hugging Face token from the `HF_TOKEN` environment variable instead of `HUGGINGFACE_TOKEN`. `HF_TOKEN` is indeed the env variable supported by all libraries in the HF ecosystem (not just the Python client, but the Rust, JS, etc. clients as well). Better to align now before it's too late to change!

Disclaimer: I work at HF as a maintainer of the `huggingface_hub` Python client 🤗

EDIT: if adding `huggingface_hub` as an optional dependency is not a problem, I would even advise using `huggingface_hub.get_token()` instead. It returns `None` if the user is not logged in; otherwise it pulls the token from an environment variable, from a locally saved token (written by `huggingface-cli login`), or from the secrets vault in a Google Colab session.

EDIT 2: using `huggingface_hub.InferenceClient` would also do that automatically, and would seamlessly work with the Inference API (our serverless offering), Inference Endpoints (our dedicated servers), or a local TGI deployment. Please let me know if you are interested in a PR that uses that instead of a raw httpx call :)