This workflow shows how to leverage (i.e., authenticate to, connect to, and prompt) an LLM (e.g., Llama 3 Instruct) served locally via Ollama in KNIME.
This approach is suitable for chat, instruct and code models.
____
Why do we use the OpenAI nodes to connect and prompt LLMs via Ollama?
Since February 2024, Ollama has offered built-in compatibility with the OpenAI Chat Completions API, making it possible to use existing OpenAI tooling and applications with locally hosted models. For this reason, we can conveniently use the OpenAI nodes of the KNIME AI Extension (in particular the OpenAI Chat Model Connector node) to connect to the LLM of choice: the node sends its POST requests to the URL of the local Ollama server instead of to OpenAI. This frees us from having to assemble and correctly format the request body ourselves, and from parsing the resulting JSON response.
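To see what the OpenAI nodes handle under the hood, here is a minimal sketch of the request body and response shape involved. It assumes Ollama's default local endpoint (http://localhost:11434/v1/chat/completions, per the Ollama OpenAI-compatibility blog post) and uses the model tag llama3:instruct as an example; the sample response is illustrative, not real model output. No request is actually sent.

```python
import json

# Default OpenAI-compatible endpoint of a local Ollama server
# (assumption: standard install, default port 11434).
url = "http://localhost:11434/v1/chat/completions"

# The request body the OpenAI Chat Model Connector node assembles for us.
# "llama3:instruct" is an example model tag from the Ollama library.
payload = {
    "model": "llama3:instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize KNIME in one sentence."},
    ],
    "temperature": 0.2,
}
body = json.dumps(payload)  # properly formatted JSON request body

# The response follows the OpenAI Chat Completions schema; the answer
# text sits at choices[0].message.content. Illustrative sample only:
sample_response = {
    "choices": [
        {"message": {"role": "assistant", "content": "KNIME is ..."}}
    ]
}
answer = sample_response["choices"][0]["message"]["content"]
print(answer)
```

Inside KNIME, none of this plumbing is needed: the connector node is pointed at the local base URL and the downstream prompting nodes take care of serializing the messages and extracting the answer.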
URL: Download Ollama https://ollama.com/download
URL: Llama 3 model card on Ollama https://ollama.com/library/llama3:instruct
URL: OpenAI compatibility with Ollama https://ollama.com/blog/openai-compatibility