ChatOllama
For example, you can use the following commands to spin up a Docker instance with llama3:
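This is a minimal sketch following Ollama's standard Docker quickstart, assuming the official `ollama/ollama` image and Ollama's default port 11434; adjust names and ports to your setup:

```bash
# Start the Ollama server container, persisting models in a named volume
docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

# Pull and run llama3 inside the running container
docker exec -it ollama ollama run llama3
```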
Chat Models > drag the ChatOllama node onto the canvas
Fill in the model that is running on Ollama. For example: llama3. You can also use additional parameters:
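These additional parameters map to Ollama's request options. A quick way to sanity-check any of them against the running server is a direct API call; the sketch below assumes the server is reachable at http://localhost:11434 (Ollama's default), and the temperature and top_k values are illustrative, not recommendations:

```bash
# Illustrative check: query Ollama's API directly with extra options
curl http://localhost:11434/api/generate -d '{
  "model": "llama3",
  "prompt": "Say hello in one sentence.",
  "stream": false,
  "options": {
    "temperature": 0.7,
    "top_k": 40
  }
}'
```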
Voila 🎉, you can now use the ChatOllama node in Tailwinds
If you are running both Tailwinds and Ollama on Docker, you'll have to change the Base URL for ChatOllama.
For Windows and macOS, specify http://host.docker.internal:11434. For Linux-based systems, use the default Docker gateway address instead, since host.docker.internal is not available: http://172.17.0.1:11434
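If the connection still fails, it helps to confirm the gateway address and check reachability from inside the Tailwinds container. A quick sketch, assuming `curl` is available in the container and using `<tailwinds-container>` as a placeholder for your container name:

```bash
# On Linux, print the default bridge gateway that containers can use
docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'

# From inside the Tailwinds container, confirm Ollama answers at the Base URL
# (<tailwinds-container> is a placeholder; Ollama's root endpoint replies
# with "Ollama is running" when reachable)
docker exec -it <tailwinds-container> curl http://172.17.0.1:11434/
```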