g13n / llama-cpp.sh
A simple, friendly llama-cli executor
#!/usr/bin/env bash
usage_exit() {
    cat <<!
usage: $0 -p path model

The path to GGUF models can be the root/base path like
\$HOME/.models or \$HOME/.cache/huggingface.

The model name can be an unambiguous prefix like qwen-3-30b-a3b.

Pass any flags to llama-cli through LLFLAGS:

    LLFLAGS="-ngl 99 -fa" llama-cpp -p ~/.cache/huggingface gemma-4
!
    exit 1
}
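
The gist is cut off here in this capture: the usage function is closed above, but the option parsing, model lookup, and llama-cli invocation are missing. A minimal sketch of what that remainder could look like, assuming llama-cli accepts the model file via -m and that a prefix match on *.gguf filenames under the base path is enough to resolve the model (the getopts loop and find-based lookup are illustrative assumptions, not the original code):

# --- hypothetical continuation; not the original gist body ---
while getopts p: opt; do
    case $opt in
        p) base=$OPTARG ;;
        *) usage_exit ;;
    esac
done
shift $((OPTIND - 1))
[ -n "${base:-}" ] || usage_exit
[ $# -eq 1 ] || usage_exit

# Resolve the (possibly abbreviated) model name to a single GGUF file.
model=$(find "$base" -type f -name "${1}*.gguf" 2>/dev/null | head -n 1)
[ -n "$model" ] || { echo "no model matching '$1' under $base" >&2; exit 1; }

# LLFLAGS is left unquoted on purpose so it splits into separate flags.
exec llama-cli -m "$model" $LLFLAGS

Any real implementation would also want to detect ambiguous prefixes (more than one match) rather than silently taking the first hit.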
# httperf Web Server Benchmarking
## The Network
### Server
$ iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
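
### Client

With the server listening, the matching invocation on the load-generator side is the standard iperf client pointed at the server. SERVER_IP below is a placeholder; the capture does not show the server's actual address:

$ iperf -c SERVER_IP

This establishes the raw TCP throughput of the link, a useful baseline before attributing any httperf numbers to the web server itself.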