Issues: containers/ramalama
Ramalama seems mispelled, and I end up mistyping it because of that
#489, opened Nov 25, 2024 by stefwalter; updated Nov 25, 2024

Document or include additional dependencies - huggingface cli and tqdm
#491, opened Nov 25, 2024 by jarcher; updated Nov 25, 2024

'ramalama ps' returns exception on macOS when no container-based llms are running
#488, opened Nov 24, 2024 by planetf1; updated Nov 25, 2024

macOS: leaked semaphore warning when running model
#487, opened Nov 24, 2024 by planetf1; updated Nov 24, 2024

[packit] Propose downstream failed for release v0.2.0
#482, opened Nov 22, 2024 by packit-as-a-service bot; updated Nov 22, 2024

Ramalama Container needs updating on the quay.io to use new llama-simple-chat
#458, opened Nov 15, 2024 by bmahabirbu; updated Nov 21, 2024

Find a way to automatically build and push x86_64 and aarch64 images
#27, opened Aug 1, 2024 by ericcurtin; updated Nov 21, 2024

Switch to https://github.com/abetlen/llama-cpp-python
#9, opened Jul 30, 2024 by ericcurtin; updated Nov 6, 2024

Add podman serve --generate compose MODEL which would generate a docker-compose file for running AI Model Service.
#184 (label: good first issue), opened Sep 24, 2024 by rhatdan; updated Nov 6, 2024

Consolidate with instructlab container images
#43, opened Aug 13, 2024 by ericcurtin; updated Sep 3, 2024