Mitigate hallucinations in LLMs with RAG

This workflow shows how to mitigate factual hallucinations in LLM responses about KNIME deep learning nodes by implementing a retrieval-augmented generation (RAG) framework. The question we ask is: "What KNIME node should I use for transfer learning?"

We first import and embed a knowledge base containing the node descriptions of the KNIME Deep Learning - Keras Integration. Next, we create a Vector Store from that knowledge base and export it.
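The KNIME workflow performs this indexing step with dedicated nodes; as a rough illustration of the same idea in Python, the sketch below embeds a handful of documents and pairs each with its vector. Note the hashing-based `toy_embed` function and the short node descriptions are placeholders I introduce for illustration, not the real embedding model or the actual Keras Integration documentation.

```python
import hashlib
import math

def toy_embed(text, dim=256):
    """Toy stand-in for a real embedding model (the workflow would use an
    actual embedding service): hash each word into a bucket, then normalise."""
    vec = [0.0] * dim
    for word in text.lower().split():
        word = word.strip(".,?:;!")
        bucket = int(hashlib.md5(word.encode("utf-8")).hexdigest(), 16) % dim
        vec[bucket] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]  # unit-length vector

# Hypothetical stand-ins for the Keras integration node descriptions
docs = [
    "Keras Network Learner: trains a Keras deep learning network on input data.",
    "Keras Freeze Layers: freezes selected layers, e.g. for transfer learning.",
    "Keras Network Reader: reads a pretrained Keras network from a file.",
    "Keras Network Executor: executes a trained Keras network on new data.",
    "Keras Network Writer: writes a Keras network to a file.",
]

# The "vector store": each description paired with its embedding
vector_store = [(doc, toy_embed(doc)) for doc in docs]
```

In the workflow, this store is then exported to disk so the retrieval branch can reuse it without re-embedding the knowledge base.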

We then implement the RAG process: we query the Vector Store and retrieve the five documents most similar to the query. Next, we use the retrieved documents to augment the prompt with additional context. Finally, we prompt ChatGPT to generate a response.
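The retrieve-then-augment step can be sketched in plain Python as follows. Everything here is a hypothetical stand-in: the hashing embedder mimics a real embedding model, the mini knowledge base mimics the exported Vector Store, and the final ChatGPT call is only indicated in a comment.

```python
import hashlib
import math

def toy_embed(text, dim=256):
    # Same toy hashing embedder used at indexing time (stand-in for a real model)
    vec = [0.0] * dim
    for word in text.lower().split():
        word = word.strip(".,?:;!")
        vec[int(hashlib.md5(word.encode("utf-8")).hexdigest(), 16) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def retrieve(query, store, k=5):
    """Return the k documents most similar to the query. Since all vectors
    are unit-length, the dot product equals cosine similarity."""
    q = toy_embed(query)
    ranked = sorted(store,
                    key=lambda item: sum(a * b for a, b in zip(q, item[1])),
                    reverse=True)
    return [doc for doc, _ in ranked[:k]]

# Hypothetical mini knowledge base standing in for the exported Vector Store
docs = [
    "Keras Freeze Layers: freezes selected layers, e.g. for transfer learning.",
    "Keras Network Learner: trains a Keras deep learning network on input data.",
    "Keras Network Reader: reads a pretrained Keras network from a file.",
    "Keras Network Executor: executes a trained Keras network on new data.",
    "Keras Network Writer: writes a Keras network to a file.",
    "Row Filter: filters rows of a table by a condition.",
]
store = [(d, toy_embed(d)) for d in docs]

query = "What KNIME node should I use for transfer learning?"
context = retrieve(query, store, k=5)

# Augment the prompt with the retrieved context before calling the LLM
prompt = (
    "Answer the question using only the context below.\n\n"
    "Context:\n" + "\n".join("- " + c for c in context) +
    "\n\nQuestion: " + query
)
# A chat-model call (e.g. via the openai package) would go here, for example:
# response = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}])
```

Grounding the prompt in retrieved node descriptions is what constrains the model's answer to documented KNIME nodes instead of plausible-sounding inventions.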

URL: OpenAI API Keys https://platform.openai.com/account/api-keys
URL: What are hallucinations in AI? (Blog) https://www.knime.com/blog/ai-hallucinations
URL: Mitigate hallucinations in LLMs using RAG with KNIME (Blog) https://www.knime.com/blog/mitigate-hallucinations-in-LLMs-with-RAG
