llm.l.gaz.codes
lfm2-24b-a2b
liquid/lfm2-24b-a2b
Download via the LM Studio CLI: lms get liquid/lfm2-24b-a2b
Publisher: liquid
Architecture: lfm2_moe (sparse)
Parameters: 24B (~2B active)
Quantization: 4bit (4.0 bpw)
Format: mlx
Max Context: 128k tokens
Type: llm
File Size: 12.50 GB
Est. RAM: 14.0 GB
Est. VRAM: 14.0 GB
Capabilities: trained_for_tool_use
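The listed file size lines up with a simple bits-per-weight estimate: total parameter count times bits per weight, divided by 8. A rough sketch of that arithmetic (the estimator function is illustrative, not part of the site; the gap to the listed 12.50 GB plausibly comes from tensors such as embeddings stored at higher precision, plus metadata):

```python
def est_file_size_gb(params_billions: float, bpw: float) -> float:
    """Rough on-disk size: parameter count times bits per weight.

    Uses decimal GB (1e9 bytes). Real quantized checkpoints run a bit
    larger, since some tensors are usually kept at higher precision.
    """
    return params_billions * 1e9 * bpw / 8 / 1e9

# 24B parameters at 4.0 bpw, as listed for lfm2-24b-a2b:
print(f"{est_file_size_gb(24, 4.0):.1f} GB")  # 12.0 GB vs. the listed 12.50 GB
```

The RAM/VRAM estimates above (14.0 GB) sit slightly over the file size for the same reason: the full 24B weights must be resident even though only ~2B parameters are active per token in this sparse MoE, and the runtime adds activation and KV-cache overhead on top.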