# Jlama
**Repository Path**: HesenjanJava/Jlama
## Basic Information
- **Project Name**: Jlama
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2024-04-01
- **Last Updated**: 2024-04-01
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# 🦙 Jlama: A modern Java inference engine for LLMs
[![Maven Central](https://maven-badges.herokuapp.com/maven-central/com.github.tjake/jlama-core/badge.svg)](https://maven-badges.herokuapp.com/maven-central/com.github.tjake/jlama-core)
## 🚀 Features
Model Support:
* Gemma Models
* Llama & Llama2 Models
* Mistral & Mixtral Models
* GPT-2 Models
* BERT Models
* BPE Tokenizers
* WordPiece Tokenizers
Implements:
* Flash Attention
* Mixture of Experts
* Huggingface [SafeTensors](https://github.com/huggingface/safetensors) model and tokenizer format
* Support for F32, F16, BF16 models
* Support for Q8, Q4, Q5 model quantization
* Distributed Inference!
Jlama is built with Java 21 and utilizes the new [Vector API](https://openjdk.org/jeps/448)
for faster inference.
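The Vector API is still an incubating module (`jdk.incubator.vector`), so it has to be added to the JVM explicitly. The standalone snippet below (not part of Jlama) is just a quick sanity check that your JDK exposes the API:
```java
// VectorCheck.java — standalone sanity check that the incubating Vector API is visible.
// Compile/run with the incubator module enabled:
//   javac --add-modules jdk.incubator.vector VectorCheck.java
//   java  --add-modules jdk.incubator.vector VectorCheck
import jdk.incubator.vector.FloatVector;
import jdk.incubator.vector.VectorSpecies;

public class VectorCheck {
    public static void main(String[] args) {
        // SPECIES_PREFERRED is the widest SIMD shape the current CPU supports.
        VectorSpecies<Float> species = FloatVector.SPECIES_PREFERRED;
        System.out.println("Preferred float vector width: " + species.length() + " lanes");
    }
}
```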
## ⭐ Give us a star!
Like what you see? Please consider giving this a star (⭐)!
## 🤔 What is it used for?
Add LLM Inference directly to your Java application.
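As a rough illustration only: the sketch below assumes the `ModelSupport.loadModel(...)` / `AbstractModel.generate(...)` API of the `com.github.tjake:jlama-core` artifact; the exact packages, class names, and signatures vary between releases, so treat it as a starting point and consult the Javadoc of the version you depend on.
```java
// Sketch only — the jlama-core class/method names and signatures below are assumptions
// and may differ between releases; check the API of the version you use.
import java.io.File;
import java.util.UUID;

import com.github.tjake.jlama.model.AbstractModel;
import com.github.tjake.jlama.model.ModelSupport;
import com.github.tjake.jlama.safetensors.DType;

public class JlamaEmbedExample {
    public static void main(String[] args) {
        // A model previously fetched with `./run-cli.sh download ...`
        File modelPath = new File("models/Llama-2-7b-chat-hf");

        // Load the model; the DType arguments select working-memory and quantization types
        // (assumed signature).
        AbstractModel model = ModelSupport.loadModel(modelPath, DType.F32, DType.I8);

        // Generate up to 256 tokens at temperature 0.7, streaming tokens to stdout
        // (assumed signature).
        model.generate(UUID.randomUUID(), "Tell me a joke about cats.", 0.7f, 256,
                (token, timing) -> System.out.print(token));
    }
}
```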
## 💬 Demo
Jlama includes a simple UI if you just want to chat with an LLM.
```shell
./run-cli.sh download tjake/llama2-7b-chat-hf-jlama-Q4
./run-cli.sh serve models/llama2-7b-chat-hf-jlama-Q4
```
Then open your browser to http://localhost:8080/ui/index.html
## 🕵️‍♀️ How to use
Jlama includes a CLI tool for running models via the `run-cli.sh` script.
First, download one or more models from Hugging Face with the `./run-cli.sh download` command:
```shell
./run-cli.sh download gpt2-medium
./run-cli.sh download -t XXXXXXXX meta-llama/Llama-2-7b-chat-hf
./run-cli.sh download intfloat/e5-small-v2
```
Then run the CLI tool to chat with the model or complete a prompt:
```shell
./run-cli.sh complete -p "The best part of waking up is " -t 0.7 -tc 16 -q Q4 -wq I8 models/Llama-2-7b-chat-hf
./run-cli.sh chat -p "Tell me a joke about cats." -t 0.7 -tc 16 -q Q4 -wq I8 models/Llama-2-7b-chat-hf
```
## 🧪 Examples
### Llama 2 7B
```
Here is a poem about cats, incluing emojis:
This poem uses emojis to add an extra layer of meaning and fun to the text.
Cat, cat, so soft and sweet,
Purring, cuddling, can't be beat.
Fur so soft, eyes so bright,
Playful, curious, such a delight.
Laps so warm, naps so long,
Sleepy, happy, never wrong.
Pouncing, chasing, always fun,
Kitty's joy, never done.
Whiskers twitch, ears so bright,
Cat's magic, pure delight.
With a mew and a purr,
Cat's love, forever sure.
So here's to cats, so dear,
Purrfect, adorable, always near.
elapsed: 37s, 159.518982ms per token
```
### GPT-2 (355M parameters)
```
In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley,
in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.
a long and diverse and interesting story is told in this book. The author writes:
...
the stories of the unicornes seem to be based on the most vivid and vivid imagination; they are the stories of animals that are a kind of 'spirit animal' , a partly-human spiritual animal that speaks in perfect English , and that often keep their language under mysterious and inaccessible circumstances.
...
While the unicorn stories are mostly about animals, they tell us about animals from other animal species. The unicorn stories are remarkable because they tell us about animals that are not animals at all . They speak and sing in perfect English , and they are very much human beings.
...
This book is not about the unicorn. It is not about anything in particular . It is about a brief and distinct group of animal beings who have been called into existence in a particular remote and unexplored valley in the Andes Mountains. They speak perfect English , and they are very human beings.
...
The most surprising thing about the tales of the unicorn
elapsed: 10s, 49.437500ms per token
```
## 🗺️ Roadmap
* Support more models
* Add pure Java tokenizers
* Support more quantization schemes (e.g. k-quantization)
* Add LoRA support
* GraalVM support
* Add distributed inference
## 🏷️ License and Citation
The code is available under [Apache License](./LICENSE).
If you find this project helpful in your research, please cite this work as:
```
@misc{jlama2024,
  title = {Jlama: A modern Java inference engine for large language models},
  url = {https://github.com/tjake/jlama},
  author = {T Jake Luciani},
  month = {January},
  year = {2024}
}
```