This node establishes a connection to a specific embedding model hosted on the Hugging Face Hub.
To use this node, you must first authenticate with the Hugging Face Hub using the HF Hub Authenticator node.
Provide the name of the desired embeddings repository on the Hugging Face Hub as an input.
Note: If you pass the API key via the Credentials Configuration node without selecting the "Save password in configuration (weakly encrypted)" option, the credentials flow variable is not saved with the workflow. The Credentials Configuration node must then be reconfigured each time the workflow is reopened, since the credentials will otherwise not be available to downstream nodes.
The model name to be used, in the format <organization_name>/<model_name>. For example,
mistralai/Mistral-7B-Instruct-v0.3 for text generation, or sentence-transformers/all-MiniLM-L6-v2
for an embedding model.
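As a minimal illustration of the expected format, a repository id splits into an organization and a model name at the first slash. The helper below is a hypothetical sketch for validating such ids; it is not part of the node itself:

```python
def parse_repo_id(repo_id: str) -> tuple[str, str]:
    """Split a Hugging Face repo id of the form <organization_name>/<model_name>.

    Raises ValueError if the id does not contain both parts.
    """
    org, sep, model = repo_id.partition("/")
    if not sep or not org or not model:
        raise ValueError(
            f"expected '<organization_name>/<model_name>', got {repo_id!r}"
        )
    return org, model


# Example: an embedding model id from the text above.
org, model = parse_repo_id("sentence-transformers/all-MiniLM-L6-v2")
```

Here `parse_repo_id("sentence-transformers/all-MiniLM-L6-v2")` yields the pair ("sentence-transformers", "all-MiniLM-L6-v2").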
You can find available models at the Hugging Face Models repository.
Specify whether the Inference Provider is selected automatically or manually.
Available options:
The Inference Provider that runs the model. The HF Hub website shows, for each model, which providers are available.
To use this node in KNIME, install the extension KNIME Python Extension Development (Labs) from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.