Retrieval Augmented Generation (RAG) is a technique for exposing large language models to up-to-date information. The retrieved content provides additional context that the LLM can use to generate better-informed output.
This workflow shows how to create a vector store from Wikipedia articles, query it with the Vector Store Retriever to retrieve similar documents, and perform RAG using the retrieved documents as context.
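The retrieve-then-generate pattern the workflow builds with KNIME nodes can be sketched in plain Python. This is only an illustrative sketch: the bag-of-words cosine similarity stands in for a real embedding model, and the assembled prompt stands in for an actual call to an LLM (e.g. via Azure OpenAI); the document texts and function names below are invented for the example.

```python
import re
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Stand-in "embedding": a bag-of-words term-frequency vector.
    # A real vector store would use dense embeddings from a model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, store: list[str], k: int = 2) -> list[str]:
    # Analogue of the Vector Store Retriever: rank documents in the
    # store by similarity to the query and return the top k.
    q = embed(query)
    ranked = sorted(store, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    # RAG step: prepend the retrieved documents as context so the
    # LLM can ground its answer in them.
    context = "\n".join(f"- {d}" for d in docs)
    return f"Answer using this context:\n{context}\n\nQuestion: {query}"

# Toy "vector store" contents (invented for this sketch).
docs = [
    "KNIME is a visual workflow tool for data science.",
    "Wikipedia is a free online encyclopedia.",
    "Vector stores index document embeddings for similarity search.",
]
query = "How do vector stores index embeddings?"
prompt = build_prompt(query, retrieve(query, docs))
print(prompt)
```

In the workflow, the same three steps (embed and store, retrieve by similarity, prompt with retrieved context) are performed by dedicated KNIME nodes rather than custom code.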
To run the workflow with Azure nodes, you need a Microsoft Azure account, an OpenAI API key, and access to Microsoft's Azure OpenAI Service. More information is available at https://azure.microsoft.com/en-us/products/ai-services/openai-service.
For demonstration purposes, the "Retrieve Data" metanode fetches data from a few Wikipedia articles. You can substitute your own data here.
To chat with the AI, execute the RAG Chat App component and open its view by hovering over the component and clicking the lens icon.
To use this workflow, download it and open it in KNIME.