This node establishes a connection to a specific chat model hosted on the Hugging Face Hub. Unlike the HF Hub LLM Connector, this node lets you provide prompt templates, which are crucial for obtaining the best output from many models that have been fine-tuned for chat-based use cases.
To use this node, you need to successfully authenticate with the Hugging Face Hub using the HF Hub Authenticator node.
Provide the name of the desired chat model repository available on the Hugging Face Hub as an input.
Please ensure that you have the necessary permissions to access the model. Failures with gated models may occur due to outdated tokens.
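For illustration only, the snippet below is a minimal Python sketch of the same steps using the huggingface_hub library (it is not the node's internal implementation): authenticate with a Hub access token, then open an inference client for a hosted chat model. The model repository id is the example used later in this description; everything else is an assumption.

```python
# Minimal sketch, assuming the huggingface_hub Python package is installed.
from huggingface_hub import InferenceClient, login

# Prompts for a Hugging Face access token; gated models fail at request time
# if the token is outdated or lacks the required permissions.
login()

client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.3")
```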
The model name to be used, in the format <organization_name>/<model_name>, for example mistralai/Mistral-7B-Instruct-v0.3 for text generation, or sentence-transformers/all-MiniLM-L6-v2 for an embedding model.
You can find available models at the Hugging Face Models repository.
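If you prefer to check a repository id programmatically rather than browsing the website, a small sketch along these lines could be used (the search string is only an example):

```python
# Sketch: search the Hub for matching model repositories.
from huggingface_hub import HfApi

api = HfApi()
for model in api.list_models(search="Mistral-7B-Instruct", limit=5):
    print(model.id)  # e.g. "mistralai/Mistral-7B-Instruct-v0.3"
```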
Model specific system prompt template. Defaults to "%1". Refer to the Hugging Face Hub model card for information on the correct prompt template.
Model specific prompt template. Defaults to "%1". Refer to the Hugging Face Hub model card for information on the correct prompt template.
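As an illustration of how the "%1" placeholder works, the sketch below fills a Mistral-style instruct template with a user message. The template strings are assumptions for this example; always copy the exact template from the model card.

```python
# Hypothetical templates following the Mistral instruct format.
system_prompt_template = "%1"             # many models need no extra wrapper
prompt_template = "<s>[INST] %1 [/INST]"  # assumed, taken from the model card

user_message = "Summarize the meeting notes in three bullet points."
prompt = prompt_template.replace("%1", user_message)
print(prompt)  # "<s>[INST] Summarize the meeting notes ... [/INST]"
```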
The number of top-k tokens to consider when generating text.
The typical probability threshold for generating text.
The repetition penalty to use when generating text.
The maximum number of tokens to generate in the completion.
The token count of your prompt plus max new tokens cannot exceed the model's context length.
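The sketch below shows how these generation settings map onto a plain text-generation call with huggingface_hub; the parameter values are arbitrary examples, not recommended defaults.

```python
# Sketch: generation parameters passed to a hosted chat model.
from huggingface_hub import InferenceClient

client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.3")
response = client.text_generation(
    "<s>[INST] Write a haiku about data pipelines. [/INST]",
    top_k=50,                # consider only the 50 most likely tokens per step
    typical_p=0.95,          # typical probability threshold
    repetition_penalty=1.1,  # values > 1.0 discourage repeated tokens
    max_new_tokens=256,      # prompt tokens + new tokens must fit the context
)
print(response)
```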
The maximum number of concurrent requests that can be made to LLMs, whether through API calls or to an inference server. Exceeding this limit may result in temporary restrictions on your access.
It is important to plan your usage according to the model provider's rate limits, and keep in mind that both software and hardware constraints can impact performance.
For OpenAI, please refer to the Limits page for the rate limits available to you.
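One way to respect such a limit in plain Python is to cap the number of worker threads issuing requests, as in the sketch below (MAX_CONCURRENT_REQUESTS and the prompts are hypothetical):

```python
# Sketch: capping concurrent requests with a fixed-size thread pool.
from concurrent.futures import ThreadPoolExecutor
from huggingface_hub import InferenceClient

MAX_CONCURRENT_REQUESTS = 4  # hypothetical value mirroring the node setting
client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.3")
prompts = [
    "<s>[INST] Question one [/INST]",
    "<s>[INST] Question two [/INST]",
]

with ThreadPoolExecutor(max_workers=MAX_CONCURRENT_REQUESTS) as pool:
    answers = list(
        pool.map(lambda p: client.text_generation(p, max_new_tokens=128), prompts)
    )
```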
Sampling temperature to use, between 0.0 and 100.0. Higher values will make the output more random, while lower values will make it more focused and deterministic.
An alternative to sampling with temperature, where the model considers the results of the tokens (words) with top_p probability mass. Hence, 0.1 means only the tokens comprising the top 10% probability mass are considered.
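The two settings are usually adjusted one at a time rather than together. A short sketch with example values:

```python
# Sketch: low temperature vs. low top_p (example values only).
from huggingface_hub import InferenceClient

client = InferenceClient(model="mistralai/Mistral-7B-Instruct-v0.3")
prompt = "<s>[INST] Name three uses for a spreadsheet. [/INST]"

# Low temperature: more focused, near-deterministic output.
focused = client.text_generation(prompt, temperature=0.2, max_new_tokens=128)

# Low top_p: only tokens within the top 10% probability mass are sampled.
narrow = client.text_generation(prompt, top_p=0.1, max_new_tokens=128)
```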
To use this node in KNIME, install the KNIME Python Extension Development (Labs) extension from the update site below, following our NodePit Product and Node Installation Guide:
A zipped version of the software site can be downloaded here.