ChatOllama
Download Ollama or run it on Docker.
For example, you can use the following command to spin up a Docker instance with llama3
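A sketch using the official ollama/ollama image and Ollama's default port 11434; the flags here are one reasonable setup, not the only way to run it:

```bash
# Start the Ollama server in the background, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and run the llama3 model inside the container
docker exec -it ollama ollama run llama3
```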
Under Chat Models, drag the ChatOllama node onto the canvas.
Fill in the model that is running on Ollama, for example llama3. You can also use additional parameters.
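These parameters map to Ollama's model options (temperature, top_p, and so on). As a rough illustration of what the node sends to the server, here is an equivalent raw Ollama API call; the option values are arbitrary examples:

```bash
# Ask the running model a question, overriding two sampling options
curl http://localhost:11434/api/chat -d '{
  "model": "llama3",
  "messages": [{ "role": "user", "content": "Hello!" }],
  "options": { "temperature": 0.7, "top_p": 0.9 },
  "stream": false
}'
```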
Voila! You can now use the ChatOllama node in Tailwinds.

If you are running both Tailwinds and Ollama on Docker, you'll have to change the Base URL for the ChatOllama node. For Windows and macOS, specify http://host.docker.internal:11434. For Linux-based systems, use the default Docker gateway instead, since host.docker.internal is not available there: http://172.17.0.1:11434.
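To verify the Base URL before wiring it into the node, you can query Ollama's /api/tags endpoint, which lists the installed models. The Linux gateway address below is the common Docker default and may differ on your machine:

```bash
# Windows / macOS: Ollama on the host is reachable via the special DNS name
curl http://host.docker.internal:11434/api/tags

# Linux: confirm the bridge gateway address first, then query it
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'
curl http://172.17.0.1:11434/api/tags
```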