Llama 3.2 90B Instruct
Overview
Llama 3.2 90B Instruct is a large instruction-tuned model in Meta's Llama 3.2 family. It is multimodal, accepting text and image inputs, supports a 128,000-token context window, and is distributed under the Llama 3.2 Community License.
Key Metrics
Max Input: 128,000 tokens
Max Output: 128,000 tokens
Throughput: - tok/s
Latency: - ms
Input Price: - per 1M tokens
Cached Input Price: - per 1M tokens
Output Price: - per 1M tokens
Blended Price: - per 1M tokens
Pricing fields are not yet populated for this listing; the cost arithmetic they feed into is sketched below.
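The per-1M-token prices above determine both per-request cost and the blended rate. A minimal sketch of that arithmetic, assuming placeholder prices (this listing shows "-" for all of them) and a common 3:1 input-to-output weighting for the blended figure, which this page does not specify:

```python
# Placeholder per-1M-token prices in USD; this listing does not publish real values.
INPUT_PRICE_PER_M = 1.20   # assumed
OUTPUT_PRICE_PER_M = 1.20  # assumed

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request given per-1M-token input and output prices."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

def blended_price(input_weight: float = 3.0, output_weight: float = 1.0) -> float:
    """Weighted average of input and output prices per 1M tokens (3:1 assumed)."""
    total = input_weight + output_weight
    return (INPUT_PRICE_PER_M * input_weight
            + OUTPUT_PRICE_PER_M * output_weight) / total

if __name__ == "__main__":
    print(f"10k-in / 1k-out request: ${request_cost(10_000, 1_000):.4f}")
    print(f"Blended price per 1M tokens: ${blended_price():.2f}")
```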
Model Information
Release Details
Released: 25 Sept 2024
Knowledge Cutoff: Dec 2023
License: Llama 3.2 Community License
Model Architecture
Parameters: - (roughly 90 billion, per the model name)
Training Data: -
Context Window
Input Context Length: 128,000 tokens
Output Context Length: 128,000 tokens
A prompt-length check against the input limit is sketched below.
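Staying under the 128,000-token input limit is easiest to verify by tokenizing the prompt before sending it. A minimal sketch, assuming the Hugging Face transformers tokenizer and the meta-llama/Llama-3.2-90B-Vision-Instruct repository ID; the hosted ID for this listing may differ, and the repository is license-gated:

```python
# Prompt-length check against the listed 128,000-token input context.
# The model ID is an assumption; downloading its tokenizer may require
# accepting Meta's license on Hugging Face first.
from transformers import AutoTokenizer

MAX_INPUT_TOKENS = 128_000
MODEL_ID = "meta-llama/Llama-3.2-90B-Vision-Instruct"  # assumed repo ID

def fits_in_context(prompt: str, reserve_for_output: int = 1_024) -> bool:
    """Return True if the tokenized prompt leaves headroom for the reply."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    n_tokens = len(tokenizer.encode(prompt))
    return n_tokens + reserve_for_output <= MAX_INPUT_TOKENS

if __name__ == "__main__":
    print(fits_in_context("Summarize the Llama 3.2 90B Instruct model card."))
```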
Key Features
Web Access: No (real-time access to current web information)
Multimodal: Yes (can process multiple data types, such as text and images; a request sketch follows this list)
Reasoning: Unknown (advanced logical and deductive reasoning capabilities)
Fine-Tunable: Unknown (can be customized for specific use cases)
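Because the Multimodal feature is marked Yes, a request can pair text with an image. A minimal sketch of an OpenAI-compatible chat payload of the kind many hosted providers accept for Llama 3.2 vision models; the model name string and image URL are assumptions, and the exact format depends on the provider:

```python
import json

# Assumed provider-side model name and example image URL.
payload = {
    "model": "llama-3.2-90b-vision-instruct",
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/chart.png"}},
            ],
        }
    ],
    "max_tokens": 512,
}

# JSON body that would be POSTed to a chat-completions style endpoint.
print(json.dumps(payload, indent=2))
```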
Model Release & Updates
25 Sept 2024: Model released (first made available to the public)
Benchmarks & Performance Comparison