
Llama-2-7b-Chat-HF-Lora vs Llama-2-7b-Chat-Int8

Pricing, context window, capabilities, and release dates, pulled from each provider's public docs. Both models are available via the same OpenAI-compatible AIgateway endpoint; switch between them by changing the model string.

RUN BOTH LIVE

Paste a prompt. Watch them race.

Both models stream in parallel through your own AIgateway key. Tokens, latency, and cost update as they arrive.

Sign in to run. Live streaming uses your own key; it's free to sign up.
|                | Llama-2-7b-Chat-HF-Lora              | Llama-2-7b-Chat-Int8       |
| -------------- | ------------------------------------ | -------------------------- |
| Model ID       | meta-llama/llama-2-7b-chat-hf-lora   | meta/llama-2-7b-chat-int8  |
| Provider       | Meta                                 | Meta                       |
| Family         | Llama 2                              | Llama 2                    |
| Modality       | text                                 | text                       |
| Context window | 8,192 tok                            | 8,192 tok                  |
| Max output     | 4,096 tok                            | 4,096 tok                  |
| Released       | 2024-04-02                           | 2023-09-25                 |
| Input price    | $0.040 /1M                           | $0.040 /1M                 |
| Output price   | $0.080 /1M                           | $0.080 /1M                 |
| Cache read     | —                                    | —                          |
| Tools          | —                                    | —                          |
| Streaming      | yes                                  | yes                        |
| Vision         | —                                    | —                          |
| JSON mode      | —                                    | —                          |
| Reasoning      | —                                    | —                          |
| Prompt caching | —                                    | —                          |
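Since both models share the same per-token rates, a request costs the same on either one. A minimal back-of-the-envelope sketch, using the prices from the table above (`estimate_cost` is an illustrative helper, not part of any SDK):

```python
# Per-million-token prices (USD) from the comparison table above.
PRICES = {
    "meta-llama/llama-2-7b-chat-hf-lora": {"input": 0.040, "output": 0.080},
    "meta/llama-2-7b-chat-int8": {"input": 0.040, "output": 0.080},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated request cost in USD for the given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# 10,000 prompt tokens + 2,000 completion tokens on either model:
print(estimate_cost("meta/llama-2-7b-chat-int8", 10_000, 2_000))  # 0.00056
```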
Llama-2-7b-Chat-HF-Lora
meta-llama/llama-2-7b-chat-hf-lora
Full spec →

This is a Llama 2 base model that Cloudflare has dedicated to inference with LoRA adapters. Llama 2 is a collection of pretrained and fine-tuned generative text models ranging in scale from 7 billion to 70 billion parameters. This is the repository for the 7B fine-tuned model, optimized for dialogue use cases and converted to the Hugging Face Transformers format.

Strengths
  • General-purpose chat
  • Open-weight
Llama-2-7b-Chat-Int8
meta/llama-2-7b-chat-int8
Full spec →

A quantized (int8) generative text model with 7 billion parameters from Meta.

Strengths
  • General-purpose chat
  • Open-weight
SWITCH BETWEEN THEM

One key, both models, one line different.

# pip install aigateway-py openai
# aigateway-py: sub-accounts, evals, replays, jobs, webhook verify.
# openai SDK: chat/embeddings/images/audio — drop-in compat per our SDK's own guidance.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aigateway.sh/v1",
    api_key="sk-aig-...",
)

# Try Llama-2-7b-Chat-HF-Lora
client.chat.completions.create(
    model="meta-llama/llama-2-7b-chat-hf-lora",
    messages=[{"role":"user","content":"hello"}],
)

# Try Llama-2-7b-Chat-Int8 — same client, same key
client.chat.completions.create(
    model="meta/llama-2-7b-chat-int8",
    messages=[{"role":"user","content":"hello"}],
)
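Because only the model string changes, a quick side-by-side run is easy to script. A minimal sketch (the `race` helper and its latency bookkeeping are illustrative, not part of the AIgateway SDK; the request callable is injected so it works with any OpenAI-compatible client):

```python
import time

MODELS = [
    "meta-llama/llama-2-7b-chat-hf-lora",
    "meta/llama-2-7b-chat-int8",
]

def race(create, prompt, models=MODELS):
    """Run one prompt against each model via an OpenAI-compatible
    `create` callable and record wall-clock latency per model."""
    results = {}
    for model in models:
        start = time.perf_counter()
        response = create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        results[model] = {
            "response": response,
            "latency_s": time.perf_counter() - start,
        }
    return results

# With a real client:
# results = race(client.chat.completions.create, "hello")
```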
Get an AIgateway key · Add a third model

Compare with another

Llama-2-7b-Chat-HF-Lora vs Llama-2-7b-Chat-Fp16
meta-llama/llama-2-7b-chat-hf-lora · meta/llama-2-7b-chat-fp16
Llama-2-13b-Chat-Awq vs Llama-2-7b-Chat-HF-Lora
hf/thebloke/llama-2-13b-chat-awq · meta-llama/llama-2-7b-chat-hf-lora
Llama-2-7b-Chat-Fp16 vs Llama-2-7b-Chat-Int8
meta/llama-2-7b-chat-fp16 · meta/llama-2-7b-chat-int8
Llama-2-13b-Chat-Awq vs Llama-2-7b-Chat-Int8
hf/thebloke/llama-2-13b-chat-awq · meta/llama-2-7b-chat-int8