glm-4.7-flash
zai-org/glm-4.7-flash
Publisher: zai-org
Architecture: glm4_moe_lite (sparse)
Parameters: 30B
Quantization: 6-bit (6.0 bpw)
Format: mlx
Max Context: 203k tokens
Type: llm
File Size: 22.68 GB
Est. RAM: 24.2 GB
Est. VRAM: 24.2 GB
Capabilities: trained_for_tool_use
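The listed file size is consistent with the quantization figure: a model stored at 6.0 bits per weight needs roughly params × bpw ÷ 8 bytes on disk. A minimal sketch of that arithmetic, using the values from the card above (the helper name is illustrative, not part of any tool):

```python
# Rough sanity check of the spec sheet: on-disk size of a quantized model
# is approximately (parameter count) * (bits per weight) / 8 bytes.
# Inputs below are taken from the card: 30B parameters at 6.0 bpw.

def quantized_size_gb(params_billion: float, bits_per_weight: float) -> float:
    """Approximate on-disk size in GB for a weight-quantized model."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

size = quantized_size_gb(30, 6.0)
print(f"~{size:.1f} GB")  # ~22.5 GB, close to the listed 22.68 GB
```

The small remainder over 22.5 GB is plausibly tokenizer files, metadata, and non-quantized tensors (embeddings, norms); the RAM estimate adds further headroom for activations and KV cache on top of the weights.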
Benchmark Results