
Ollama - Chat with LLama3

Chat with local Llama3 model via Ollama in KNIME


----
Run this in a Terminal window to start Ollama. You can also try other models (https://ollama.com):
ollama run llama3:instruct
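Once Ollama is running, the workflow talks to it over plain HTTP. As a rough illustration of the same call (the endpoint and JSON body are taken from the workflow annotations; the function name `ask_llama` is made up for this sketch), using only the Python standard library:

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # endpoint from the workflow annotation

def ask_llama(prompt: str, model: str = "llama3:instruct") -> str:
    """Send one prompt to the local Ollama server and return the complete answer."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # one complete JSON reply instead of a token stream
    }
    req = request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req, timeout=300) as resp:
        return json.loads(resp.read())["response"]
```

With Ollama running in the background, `ask_llama("How many colors are there in a rainbow?")` returns the model's answer as plain text.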

Medium - Chat with local Llama3 Model via Ollama in KNIME Analytics Platform - Also extract Logs into structured JSON Files
https://medium.com/p/aca61e4a690a

P.S.: Yes, I am aware of the large empty white space, but I have no idea how to remove it in KNIME 4 and have already contacted KNIME support.



Let Llama3 extract data from a LOG file into a structured JSON file, looping thru several lines. Yes, depending on the power of your machine this might take a while, but it is running on your own computer ...

Chat with local Llama3 model via Ollama in KNIME:
- simple prompts, with some added instructions
- a chat window in a KNIME Component
- instruct the model to create a structured JSON file for systematic data extraction (from LOG files)
- automatically escape the prompts and instructions

The most basic prompt - just ask a question. Make sure Ollama (https://ollama.com) is running as a local process and the Llama3 model has been downloaded once. The workflow posts to:
http://localhost:11434/api/generate

A prompt with an additional instruction about the role of the model.

Sample log files to use - you might also ask an LLM to help you set up the instructions for your JSON file:
https://www.ibm.com/docs/en/zos/3.1.0?topic=problems-example-log-file

The instruction used for the extraction:
"Please convert the following log file data into JSON format. Ensure the JSON output has a consistent structure with "date", "time", "severity", "component", and "message" fields. Here's the log data:"

Interactive chat with local Llama3 (via Ollama running in the background): simply loop thru several questions, send the prompts to the model, and collect the answers.

Maybe download the whole LLM workflow group in order to get all the folders:
https://hub.knime.com/mlauber71/spaces/LLM_Space/~17k4zAECNryrZw1X/

If you are behind a proxy server, make sure you have the environment variables set for it.
Ollama will use them to download the model initially (you can later even switch off the WiFi to be sure nothing leaks ...). You can set them (per session if you must) in the Terminal window:

set HTTP_PROXY=http://proxy.my-company.com:8080
set HTTPS_PROXY=http://proxy.my-company.com:8080

You might have to close all running Ollama instances. Set the proxy variables in the Terminal window and then start Ollama again: "ollama run llama3:instruct"

The workflow posts to Ollama using the Value column, e.g.:

{ "model": "llama3:instruct", "prompt": "How many colors are there in a rainbow?", "stream": false }

Interactive chat: the component saves each chat as a .table file with timestamps under ../data/chat/, scans for the latest ollama* .table file to reload the current saved chat, and writes the collected results to ../data/llama3_results.xlsx. Right-click the component to open the interactive view, and make sure Ollama is running in the background, e.g. in the Terminal window: ollama run llama3:instruct

Log file extraction: the sample logs (../data/sample_logs.csv) are processed in a chunk loop. The prompt and instruction texts are escaped (escapedPromptData, escapedInstructionData) so they will fit into a JSON file. The model's answer is cleaned with regexReplace($response$, "```", "~#") and the JSON block is then extracted with the pattern ~#([\s\S]*?)~#. The fields to extract from the resulting JSON are defined as lists, expanded into readable columns, and stored in ../data/results_logfiles.table. The created_at field of the response is converted into a cleaned Date/Time and a millisecond timestamp (created_at_timestamp).
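The escaping step matters because raw log lines contain quotes and line breaks that would break the request JSON. A minimal Python sketch of the same idea (the sample log line is hypothetical; `json.dumps` does in one step what the workflow's escapedPromptData / escapedInstructionData variables do):

```python
import json

# Instruction text as shown in the workflow annotation.
instruction = ('Please convert the following log file data into JSON format. '
               'Ensure the JSON output has a consistent structure with "date", "time", '
               '"severity", "component", and "message" fields. Here\'s the log data: ')

# Hypothetical raw log line with quotes and a newline - unescaped,
# it would produce invalid request JSON.
log_line = 'ERROR 12:01:03 disk "sda1" failed\n'

# Serializing the whole payload escapes quotes, newlines etc. automatically.
body = json.dumps({
    "model": "llama3:instruct",
    "prompt": instruction + log_line,
    "stream": False,
})
assert "\\n" in body and '\\"' in body  # control characters are now JSON-safe
```

The resulting string is exactly what gets posted to the /api/generate endpoint.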
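The regexReplace / ~#([\s\S]*?)~# trick from the workflow can be sketched in Python as follows (the sample model reply is hypothetical; models often wrap their JSON answer in a Markdown code fence, which is what the workflow strips off):

```python
import re

# Hypothetical model reply: the JSON answer is wrapped in a ``` code fence.
response = ('Here is the extracted data:\n'
            '```\n{"severity": "ERROR", "component": "disk"}\n```\nDone.')

# Same trick as the workflow: rewrite the ``` fences to a marker (~#) that is
# easy to match, then grab everything between the first pair of markers.
marked = response.replace("```", "~#")        # regexReplace($response$, "```", "~#")
match = re.search(r"~#([\s\S]*?)~#", marked)  # pattern from the Regex Extractor node
extracted = match.group(1).strip()
# extracted now holds the bare JSON block, ready for String to JSON
```

Match 0, Group 1 in the workflow corresponds to `match.group(1)` here.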
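The created_at field of the Ollama reply is an RFC 3339 string with more fractional digits than most date parsers accept, which is why the workflow carries it through a separate timestamp conversion. A rough stdlib sketch under that assumption (the sample value is made up):

```python
from datetime import datetime, timezone

# Hypothetical created_at value with nanosecond precision, as Ollama returns it.
created_at = "2024-05-05T12:34:56.123456789Z"

# Truncate the fraction to the microseconds datetime accepts and mark it as UTC.
head, frac = created_at.rstrip("Z").split(".")
dt = datetime.fromisoformat(f"{head}.{frac[:6]}").replace(tzinfo=timezone.utc)

# Epoch milliseconds, matching the workflow's created_at_timestamp column.
millis = int(dt.timestamp() * 1000)
```

From there the value can be rendered in any clean Date/Time format, as the workflow does before writing the results table.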

Nodes

Extensions

Links