I see this in the docker compose logs, but those models don't show up in the model drop-down on the Agent block:

  simstudio-1  | [2025-04-29T03:41:12.195Z] [INFO] [OllamaStore] Updating Ollama models {
  simstudio-1  |   "models": [
  simstudio-1  |     "hf.co/bartowski/Qwen_Qwen3-32B-GGUF:latest",
  simstudio-1  |     "qwen3:30b-a3b-q4_K_M",
  simstudio-1  |     "gemma3:12b-it-qat",
  simstudio-1  |     "gemma3:4b-it-q4_K_M",
  simstudio-1  |     "nomic-embed-text:latest"
  simstudio-1  |   ]
  simstudio-1  | }
  simstudio-1  | [2025-04-29T03:41:12.195Z] [INFO] [ProviderUtils] Updated Ollama provider models {
  simstudio-1  |   "models": [
  simstudio-1  |     "hf.co/bartowski/Qwen_Qwen3-32B-GGUF:latest",
  simstudio-1  |     "qwen3:30b-a3b-q4_K_M",
  simstudio-1  |     "gemma3:12b-it-qat",
  simstudio-1  |     "gemma3:4b-it-q4_K_M",
  simstudio-1  |     "nomic-embed-text:latest"
  simstudio-1  |   ]
  simstudio-1  | }
Whether or not I include `--profile local-cpu` in the docker compose command:

- the models from my local ollama show up in the logs, and

- the models don't show up in the model drop-down in the Agent block.

AFAICT the only effect of `--profile local-cpu` is to start an additional docker container running ollama.
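For anyone following along: a compose profile normally just gates whether a service starts at all. A minimal sketch of what the `local-cpu` profile presumably controls (the image and port here are my assumptions, not taken from sim's actual compose file):

  services:
    ollama:
      image: ollama/ollama
      # started only when `--profile local-cpu` is passed to docker compose
      profiles:
        - local-cpu
      ports:
        - "11434:11434"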



just pushed a hotfix that should resolve this for you! let me know if you are still having issues. we recently updated the CSP and needed to explicitly add the ollama endpoint to the connect-src directive
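for context, connect-src controls which origins the page is allowed to fetch from, so if the ollama origin isn't listed the browser silently blocks the request for the model list. roughly like this (exact scheme/host/port depend on where ollama runs):

  Content-Security-Policy: connect-src 'self' http://localhost:11434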


I updated, deleted docker volumes, and retried, and I still see the same issue :(


tried another fix! let me know if that resolves it for you, otherwise you can join the discord and I can help you debug to the best of my ability. sorry for the hassle


I don't have more time to spend on it today, but will join your discord when I do.

Just documenting what I tried and what happened, in case it's helpful to you:

1. Delete all existing docker containers and volumes.

2. git pull

3. Add this to docker-compose.yml, under the app service's environment section (full sketch below the list): `- OLLAMA_HOST=http://host.docker.internal:11434`

4. docker compose up --build

5. Visit localhost:3000/w/

6. Create an account.

7. Enter PIN.

8. Log in.

9. Observe the initial workflow with only the 'Start' block.

10. In the sidebar, click the 'Agent block'.

11. In the newly-created 'Agent' block, choose one of my local (ollama) models.

12. Click 'Run'.

Step 11 works. Step 12 results in a red error message "Workflow execution failed: Invalid block type: undefined {}".
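
For reference, the docker-compose.yml change from step 3 in context — a minimal sketch, assuming the app service is named simstudio as the log prefix suggests; the extra_hosts entry is my addition, which Linux hosts need for host.docker.internal to resolve:

  services:
    simstudio:
      environment:
        # point the app at the ollama server running on the docker host
        - OLLAMA_HOST=http://host.docker.internal:11434
      extra_hosts:
        # needed on linux so host.docker.internal resolves to the host
        - "host.docker.internal:host-gateway"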



