Llama-3.1-Nemotron-Ultra-253B-v1 is a large language model (LLM) optimized for advanced reasoning, human-interactive chat, retrieval-augmented generation (RAG), and tool-calling tasks. Derived from Meta’s Llama-3.1-405B-Instruct, it has been significantly customized using Neural Architecture Search (NAS), resulting in enhanced efficiency, reduced memory usage, and improved inference latency. The model supports a context length of up to 128K tokens and can operate efficiently on an 8x NVIDIA H100 node.
Note: reasoning must be enabled by including the phrase "detailed thinking on" in the system prompt. Please see Usage Recommendations for details.
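As a minimal sketch of how this toggle might be applied, the request below calls the model through OpenRouter's OpenAI-compatible chat-completions endpoint with "detailed thinking on" as the system prompt. The model slug and environment-variable name are assumptions; check the model page for the exact identifier.

```python
# Sketch: enable reasoning mode via the system prompt on an OpenRouter
# chat-completions request. Model slug and env-var name are assumptions.
import os
import requests

response = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "nvidia/llama-3.1-nemotron-ultra-253b-v1",  # assumed slug
        "messages": [
            # "detailed thinking on" enables reasoning; use
            # "detailed thinking off" for standard chat behavior.
            {"role": "system", "content": "detailed thinking on"},
            {"role": "user", "content": "Prove that the square root of 2 is irrational."},
        ],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```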
Recent activity on Llama 3.1 Nemotron Ultra 253B v1 (total usage per day on OpenRouter): 186M prompt tokens, 1.99M completion tokens, 0 reasoning tokens. Prompt tokens measure input size, reasoning tokens capture the model's internal thinking before a response, and completion tokens reflect total output length.
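These counts correspond to the usage object returned with each completion. The sketch below shows one way to read them from an OpenAI-compatible response body; prompt_tokens and completion_tokens are standard fields, while the nested reasoning-token field name is an assumption and may differ by provider.

```python
# Sketch: extract token-usage counts from a parsed chat-completions response.
# The sample dict stands in for response.json() from the request above;
# "completion_tokens_details.reasoning_tokens" is an assumed field name.
body = {
    "usage": {
        "prompt_tokens": 512,
        "completion_tokens": 256,
        "completion_tokens_details": {"reasoning_tokens": 128},
    }
}

usage = body.get("usage", {})
prompt_tokens = usage.get("prompt_tokens", 0)            # input size
completion_tokens = usage.get("completion_tokens", 0)    # total output length
reasoning_tokens = (
    usage.get("completion_tokens_details", {}).get("reasoning_tokens", 0)
)  # internal thinking before the response, if reported

print(f"prompt={prompt_tokens} completion={completion_tokens} reasoning={reasoning_tokens}")
```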