contort
Control what LLMs can, and can't, say
<a href="https://www.npmjs.com/package/contort"><img alt="Latest Contortionist NPM Version" src="https://badge.fury.io/js/contort.svg" /></a> <a href="https://github.com/thekevinscott/contortionist/blob/master/LICENSE"><img alt="License for contortionist" /></a>