Commit 7aed8ab7 authored by Yin Xi

Update README.md

parent be4539e0
@@ -85,6 +85,16 @@ $ docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama
`$ curl --noproxy '*' http://localhost:11434/api/pull -d '{ "name": "llama3"}'`
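
A minimal sketch of the pull-and-verify flow, assuming the container started above is reachable at `localhost:11434` (the host and the `--noproxy '*'` flag mirror the commands above):

```bash
#!/usr/bin/env bash
# Sketch: pull llama3 through the Ollama HTTP API, then list installed models.
# Assumes the Ollama container started above is listening on localhost:11434.
set -euo pipefail

OLLAMA_HOST="http://localhost:11434"

# /api/pull streams download progress as JSON lines;
# --noproxy '*' keeps a corporate proxy from intercepting localhost traffic.
curl --noproxy '*' "${OLLAMA_HOST}/api/pull" -d '{ "name": "llama3" }'

# /api/tags lists installed models; llama3 should appear once the pull finishes.
curl --noproxy '*' "${OLLAMA_HOST}/api/tags"
```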
## Install ANY GGUF model from Huggingface
1. download the GGUF file from Hugging Face with `wget` and save it to Ollama's blobs directory (e.g. `/root/.ollama/models/blobs` in the container above); recommended quantization types are q4_K_M or q8_0 [ref](https://github.com/ollama/ollama/blob/main/docs/api.md#create-a-blob)
2. run `$ sha256sum blablabla.gguf` and get the sha256 value
3. rename `blablabla.gguf` to `sha256-<the sha256 value>`
   - alternatively, upload the blob through the API instead: `$ curl --noproxy '*' -T blablabla.gguf -X POST localhost:11434/api/blobs/sha256:...`
4. run this in a terminal (the `files` map points the GGUF filename at its blob digest)
`$ curl --noproxy '*' localhost:11434/api/create -d '{"model":"blablabla","files":{"blablabla.gguf": "sha256-..."}}'`
5. the model should now show up in `api/tags` (an end-to-end sketch of the whole flow follows this list)
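
A consolidated sketch of steps 1–5, using the blob-upload alternative from step 3 so nothing has to be renamed or copied into the blobs directory by hand; `GGUF_URL` is a placeholder for the real Hugging Face download link, and `blablabla` keeps the placeholder names from the steps above:

```bash
#!/usr/bin/env bash
# Sketch of steps 1-5 via the HTTP API (blob upload instead of manual rename).
# GGUF_URL is a placeholder; replace it with the real Hugging Face file URL.
set -euo pipefail

OLLAMA_HOST="localhost:11434"
GGUF_URL="https://huggingface.co/<repo>/resolve/main/blablabla.gguf"  # placeholder
GGUF_FILE="blablabla.gguf"
MODEL_NAME="blablabla"

# 1. download the GGUF file (q4_K_M or q8_0 quantizations are recommended)
wget -O "${GGUF_FILE}" "${GGUF_URL}"

# 2. compute the digest; /api/create expects it as "sha256-<hex>"
DIGEST="sha256-$(sha256sum "${GGUF_FILE}" | awk '{print $1}')"

# 3. upload the blob; the blobs endpoint addresses it as "sha256:<hex>"
curl --noproxy '*' -T "${GGUF_FILE}" -X POST \
  "http://${OLLAMA_HOST}/api/blobs/${DIGEST/-/:}"

# 4. create the model from the uploaded blob
curl --noproxy '*' "http://${OLLAMA_HOST}/api/create" \
  -d "{\"model\":\"${MODEL_NAME}\",\"files\":{\"${GGUF_FILE}\":\"${DIGEST}\"}}"

# 5. the new model should now be listed
curl --noproxy '*' "http://${OLLAMA_HOST}/api/tags"
```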
## test model connection