
OpenAI

Browse models from OpenAI

54 models

Tokens processed on OpenRouter

  1. OpenAI: GPT-5 Image Mini
    5.57M tokens

      GPT-5 Image Mini combines OpenAI's advanced language capabilities, powered by GPT-5 Mini, with GPT Image 1 Mini for efficient image generation. This natively multimodal model features superior instruction following, text rendering, and detailed image editing with reduced latency and cost. It excels at high-quality visual creation while maintaining strong text understanding, making it ideal for applications that require both efficient image generation and text processing at scale.

    by openai · 400K context · $2.50/M input tokens · $2/M output tokens · $0.003/K input imgs · $0.008/K output imgs
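
    A minimal request sketch for generating an image alongside text through OpenRouter's chat completions endpoint. The openai/gpt-5-image-mini slug, the "modalities" field, and the response shape are assumptions based on OpenRouter's usual image-output convention, not details taken from this listing.

      import requests

      # Hedged sketch: slug and "modalities" field are assumed, not confirmed by this page.
      resp = requests.post(
          "https://openrouter.ai/api/v1/chat/completions",
          headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
          json={
              "model": "openai/gpt-5-image-mini",   # assumed slug
              "modalities": ["image", "text"],      # request image output in addition to text
              "messages": [{"role": "user", "content": "A watercolor sketch of a lighthouse at dawn"}],
          },
      )
      message = resp.json()["choices"][0]["message"]
      print(message.get("content"))  # text part; any generated images arrive on the same assistant message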
  2. OpenAI: GPT-5 Image
    651K tokens

    GPT-5 Image combines OpenAI's most advanced language model with state-of-the-art image generation capabilities. It offers major improvements in reasoning, code quality, and user experience while incorporating GPT Image 1's superior instruction following, text rendering, and detailed image editing.

    by openai · 400K context · $10/M input tokens · $10/M output tokens · $0.01/K input imgs · $0.04/K output imgs
  3. OpenAI: o3 Deep Research
    810K tokens

    o3-deep-research is OpenAI's advanced model for deep research, designed to tackle complex, multi-step research tasks. Note: This model always uses the 'web_search' tool which adds additional cost.

    by openai · 200K context · $10/M input tokens · $40/M output tokens · $7.65/K input imgs
  4. OpenAI: o4 Mini Deep Research
    673K tokens

    o4-mini-deep-research is OpenAI's faster, more affordable deep research model—ideal for tackling complex, multi-step research tasks. Note: This model always uses the 'web_search' tool which adds additional cost.

    by openai · 200K context · $2/M input tokens · $8/M output tokens · $1.53/K input imgs
  5. OpenAI: GPT-5 Pro
    27.3M tokens

    GPT-5 Pro is OpenAI’s most advanced model, offering major improvements in reasoning, code quality, and user experience. It is optimized for complex tasks that require step-by-step reasoning, instruction following, and accuracy in high-stakes use cases. It supports test-time routing features and advanced prompt understanding, including user-specified intent like "think hard about this." Improvements include reductions in hallucination, sycophancy, and better performance in coding, writing, and health-related tasks.

    by openai · 400K context · $15/M input tokens · $120/M output tokens
  6. OpenAI: GPT-5 Codex
    932M tokens

    GPT-5-Codex is a specialized version of GPT-5 optimized for software engineering and coding workflows. It is designed for both interactive development sessions and long, independent execution of complex engineering tasks. The model supports building projects from scratch, feature development, debugging, large-scale refactoring, and code review. Compared to GPT-5, Codex is more steerable, adheres closely to developer instructions, and produces cleaner, higher-quality code outputs. Reasoning effort can be adjusted with the reasoning.effort parameter; read the docs here. Codex integrates into developer environments including the CLI, IDE extensions, GitHub, and cloud tasks. It adapts reasoning effort dynamically, providing fast responses for small tasks while sustaining extended multi-hour runs for large projects. The model is trained to perform structured code reviews, catching critical flaws by reasoning over dependencies and validating behavior against tests. It also supports multimodal inputs such as images or screenshots for UI development and integrates tool use for search, dependency installation, and environment setup. Codex is intended specifically for agentic coding applications.

    by openai · 400K context · $1.25/M input tokens · $10/M output tokens
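
    Since the description mentions adjusting reasoning effort via the reasoning.effort parameter, here is a minimal sketch against OpenRouter's chat completions endpoint; the openai/gpt-5-codex slug and the exact request field are assumptions, not confirmed by this listing.

      import requests

      resp = requests.post(
          "https://openrouter.ai/api/v1/chat/completions",
          headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
          json={
              "model": "openai/gpt-5-codex",  # assumed slug
              "messages": [{"role": "user", "content": "Refactor this function to remove duplication: ..."}],
              "reasoning": {"effort": "high"},  # assumed mapping of reasoning.effort: "low" / "medium" / "high"
          },
      )
      print(resp.json()["choices"][0]["message"]["content"])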
  7. OpenAI: GPT-4o Audio
    6K tokens

    The gpt-4o-audio-preview model adds support for audio inputs as prompts. This enhancement allows the model to detect nuances within audio recordings and add depth to generated user experiences. Audio outputs are currently not supported. Audio tokens are priced at $40 per million input audio tokens.

    by openai · 128K context · $2.50/M input tokens · $10/M output tokens · $40/M audio tokens
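
    A minimal sketch of passing an audio prompt, assuming the openai/gpt-4o-audio-preview slug and the Chat Completions input_audio content-part format; treat the field names as assumptions rather than details confirmed by this listing.

      import base64, requests

      with open("clip.wav", "rb") as f:
          audio_b64 = base64.b64encode(f.read()).decode()

      resp = requests.post(
          "https://openrouter.ai/api/v1/chat/completions",
          headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
          json={
              "model": "openai/gpt-4o-audio-preview",  # assumed slug
              "messages": [{
                  "role": "user",
                  "content": [
                      {"type": "text", "text": "Summarize what is said in this recording."},
                      {"type": "input_audio", "input_audio": {"data": audio_b64, "format": "wav"}},
                  ],
              }],
          },
      )
      # Audio outputs are not supported, so the reply comes back as text.
      print(resp.json()["choices"][0]["message"]["content"])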
  8. OpenAI: GPT-5 Chat
    1.2B tokens

    GPT-5 Chat is designed for advanced, natural, multimodal, and context-aware conversations for enterprise applications.

    by openai · 128K context · $1.25/M input tokens · $10/M output tokens
  9. OpenAI: GPT-5
    2.9B tokens

    GPT-5 is OpenAI’s most advanced model, offering major improvements in reasoning, code quality, and user experience. It is optimized for complex tasks that require step-by-step reasoning, instruction following, and accuracy in high-stakes use cases. It supports test-time routing features and advanced prompt understanding, including user-specified intent like "think hard about this." Improvements include reductions in hallucination, sycophancy, and better performance in coding, writing, and health-related tasks.

    by openai · 400K context · $1.25/M input tokens · $10/M output tokens
  10. OpenAI: GPT-5 Mini
    1.81B tokens

    GPT-5 Mini is a compact version of GPT-5, designed to handle lighter-weight reasoning tasks. It provides the same instruction-following and safety-tuning benefits as GPT-5, but with reduced latency and cost. GPT-5 Mini is the successor to OpenAI's o4-mini model.

    by openai · 400K context · $0.25/M input tokens · $2/M output tokens
  11. OpenAI: GPT-5 Nano
    459M tokens

    GPT-5-Nano is the smallest and fastest variant in the GPT-5 system, optimized for developer tools, rapid interactions, and ultra-low latency environments. While limited in reasoning depth compared to its larger counterparts, it retains key instruction-following and safety features. It is the successor to GPT-4.1-nano and offers a lightweight option for cost-sensitive or real-time applications.

    by openai · 400K context · $0.05/M input tokens · $0.40/M output tokens
  12. OpenAI: gpt-oss-120b
    3.17B tokens

    gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.

    by openai · 131K context · $0.04/M input tokens · $0.40/M output tokens
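
    Since the description highlights native tool use and function calling, here is a hedged sketch using the standard Chat Completions tools format; the openai/gpt-oss-120b slug and the get_weather tool are illustrative assumptions.

      import requests

      resp = requests.post(
          "https://openrouter.ai/api/v1/chat/completions",
          headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
          json={
              "model": "openai/gpt-oss-120b",  # assumed slug
              "messages": [{"role": "user", "content": "What's the weather in Lisbon right now?"}],
              "tools": [{
                  "type": "function",
                  "function": {
                      "name": "get_weather",  # hypothetical tool, for illustration only
                      "description": "Look up current weather for a city",
                      "parameters": {
                          "type": "object",
                          "properties": {"city": {"type": "string"}},
                          "required": ["city"],
                      },
                  },
              }],
          },
      )
      # If the model decides to call the tool, the call appears under message.tool_calls.
      print(resp.json()["choices"][0]["message"].get("tool_calls"))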
  13. OpenAI: gpt-oss-20b
    810M tokens

    gpt-oss-20b is an open-weight 21B parameter model released by OpenAI under the Apache 2.0 license. It uses a Mixture-of-Experts (MoE) architecture with 3.6B active parameters per forward pass, optimized for lower-latency inference and deployability on consumer or single-GPU hardware. The model is trained in OpenAI’s Harmony response format and supports reasoning level configuration, fine-tuning, and agentic capabilities including function calling, tool use, and structured outputs.

    by openai · 131K context · $0.03/M input tokens · $0.14/M output tokens
  14. OpenAI: o3 Pro
    24.2M tokens

    The o-series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o3-pro model uses more compute to think harder and provide consistently better answers. Note that BYOK is required for this model. Set up here: https://openrouter.ai/settings/integrations

    by openai · 200K context · $20/M input tokens · $80/M output tokens · $15.30/K input imgs
  15. OpenAI: Codex Mini
    4.92M tokens

    codex-mini-latest is a fine-tuned version of o4-mini specifically for use in Codex CLI. For direct use in the API, we recommend starting with gpt-4.1.

    by openai · 200K context · $1.50/M input tokens · $6/M output tokens
  16. OpenAI: o4 Mini High
    217M tokens

    OpenAI o4-mini-high is the same model as o4-mini with reasoning_effort set to high. OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. It supports tool use and demonstrates competitive reasoning and coding performance across benchmarks like AIME (99.5% with Python) and SWE-bench, outperforming its predecessor o3-mini and even approaching o3 in some domains. Despite its smaller size, o4-mini exhibits high accuracy in STEM tasks, visual problem solving (e.g., MathVista, MMMU), and code editing. It is especially well-suited for high-throughput scenarios where latency or cost is critical. Thanks to its efficient architecture and refined reinforcement learning training, o4-mini can chain tools, generate structured outputs, and solve multi-step tasks with minimal delay—often in under a minute.

    by openai · 200K context · $1.10/M input tokens · $4.40/M output tokens · $0.842/K input imgs
  17. OpenAI: o3
    106M tokens

    o3 is a well-rounded and powerful model across domains. It sets a new standard for math, science, coding, and visual reasoning tasks. It also excels at technical writing and instruction-following. Use it to think through multi-step problems that involve analysis across text, code, and images.

    by openai · 200K context · $2/M input tokens · $8/M output tokens · $1.53/K input imgs
  18. OpenAI: o4 Mini
    669M tokens

    OpenAI o4-mini is a compact reasoning model in the o-series, optimized for fast, cost-efficient performance while retaining strong multimodal and agentic capabilities. It supports tool use and demonstrates competitive reasoning and coding performance across benchmarks like AIME (99.5% with Python) and SWE-bench, outperforming its predecessor o3-mini and even approaching o3 in some domains. Despite its smaller size, o4-mini exhibits high accuracy in STEM tasks, visual problem solving (e.g., MathVista, MMMU), and code editing. It is especially well-suited for high-throughput scenarios where latency or cost is critical. Thanks to its efficient architecture and refined reinforcement learning training, o4-mini can chain tools, generate structured outputs, and solve multi-step tasks with minimal delay—often in under a minute.

    by openai · 200K context · $1.10/M input tokens · $4.40/M output tokens · $0.842/K input imgs
  19. OpenAI: GPT-4.1
    5.87B tokens

    GPT-4.1 is a flagship large language model optimized for advanced instruction following, real-world software engineering, and long-context reasoning. It supports a 1 million token context window and outperforms GPT-4o and GPT-4.5 across coding (54.6% SWE-bench Verified), instruction compliance (87.4% IFEval), and multimodal understanding benchmarks. It is tuned for precise code diffs, agent reliability, and high recall in large document contexts, making it ideal for agents, IDE tooling, and enterprise knowledge retrieval.

    by openai · 1.05M context · $2/M input tokens · $8/M output tokens
  20. OpenAI: GPT-4.1 Mini
    3.62B tokens

    GPT-4.1 Mini is a mid-sized model delivering performance competitive with GPT-4o at substantially lower latency and cost. It retains a 1 million token context window and scores 45.1% on hard instruction evals, 35.8% on MultiChallenge, and 84.1% on IFEval. Mini also shows strong coding ability (e.g., 31.6% on Aider’s polyglot diff benchmark) and vision understanding, making it suitable for interactive applications with tight performance constraints.

    by openai · 1.05M context · $0.40/M input tokens · $1.60/M output tokens
  21. OpenAI: GPT-4.1 Nano
    1.34B tokens

    For tasks that demand low latency, GPT‑4.1 nano is the fastest and cheapest model in the GPT-4.1 series. It delivers exceptional performance at a small size with its 1 million token context window, and scores 80.1% on MMLU, 50.3% on GPQA, and 9.8% on Aider polyglot coding – even higher than GPT‑4o mini. It’s ideal for tasks like classification or autocompletion.

    by openai · 1.05M context · $0.10/M input tokens · $0.40/M output tokens
  22. OpenAI: o1-pro
    708K tokens

    The o1 series of models are trained with reinforcement learning to think before they answer and perform complex reasoning. The o1-pro model uses more compute to think harder and provide consistently better answers.

    by openai · 200K context · $150/M input tokens · $600/M output tokens · $216.80/K input imgs
  23. OpenAI: GPT-4o-mini Search Preview
    12.1M tokens

    GPT-4o mini Search Preview is a specialized model for web search in Chat Completions. It is trained to understand and execute web search queries.

    by openai · 128K context · $0.15/M input tokens · $0.60/M output tokens · $0.217/K input imgs · $27.50/K reqs
  24. OpenAI: GPT-4o Search Preview
    13.4M tokens

    GPT-4o Search Preview is a specialized model for web search in Chat Completions. It is trained to understand and execute web search queries.

    by openai · 128K context · $2.50/M input tokens · $10/M output tokens · $3.613/K input imgs · $35/K reqs
  25. OpenAI: GPT-4.5 (Preview)

    GPT-4.5 (Preview) is a research preview of OpenAI’s latest language model, designed to advance capabilities in reasoning, creativity, and multi-turn conversation. It builds on previous iterations with improvements in world knowledge, contextual coherence, and the ability to follow user intent more effectively. The model demonstrates enhanced performance in tasks that require open-ended thinking, problem-solving, and communication. Early testing suggests it is better at generating nuanced responses, maintaining long-context coherence, and reducing hallucinations compared to earlier versions. This research preview is intended to help evaluate GPT-4.5’s strengths and limitations in real-world use cases as OpenAI continues to refine and develop future models. Read more at the blog post here.

    by openai · 128K context
  26. OpenAI: o3 Mini High
    8M tokens

    OpenAI o3-mini-high is the same model as o3-mini with reasoning_effort set to high. o3-mini is a cost-efficient language model optimized for STEM reasoning tasks, particularly excelling in science, mathematics, and coding. The model features three adjustable reasoning effort levels and supports key developer capabilities including function calling, structured outputs, and streaming, though it does not include vision processing capabilities. The model demonstrates significant improvements over its predecessor, with expert testers preferring its responses 56% of the time and noting a 39% reduction in major errors on complex questions. With medium reasoning effort settings, o3-mini matches the performance of the larger o1 model on challenging reasoning evaluations like AIME and GPQA, while maintaining lower latency and cost.

    by openai · 200K context · $1.10/M input tokens · $4.40/M output tokens
  27. OpenAI: o3 Mini
    159M tokens

    OpenAI o3-mini is a cost-efficient language model optimized for STEM reasoning tasks, particularly excelling in science, mathematics, and coding. This model supports the reasoning_effort parameter, which can be set to "high", "medium", or "low" to control the thinking time of the model. The default is "medium". OpenRouter also offers the model slug openai/o3-mini-high to default the parameter to "high". The model features three adjustable reasoning effort levels and supports key developer capabilities including function calling, structured outputs, and streaming, though it does not include vision processing capabilities. The model demonstrates significant improvements over its predecessor, with expert testers preferring its responses 56% of the time and noting a 39% reduction in major errors on complex questions. With medium reasoning effort settings, o3-mini matches the performance of the larger o1 model on challenging reasoning evaluations like AIME and GPQA, while maintaining lower latency and cost.

    by openai · 200K context · $1.10/M input tokens · $4.40/M output tokens
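
    A minimal sketch of setting the reasoning_effort parameter described above; the request shape is an assumption, and per the description the openai/o3-mini-high slug is equivalent to defaulting this value to "high".

      import requests

      resp = requests.post(
          "https://openrouter.ai/api/v1/chat/completions",
          headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
          json={
              "model": "openai/o3-mini",
              "messages": [{"role": "user", "content": "Prove that the square root of 2 is irrational."}],
              "reasoning_effort": "high",  # "low" | "medium" (default) | "high", as described above
          },
      )
      print(resp.json()["choices"][0]["message"]["content"])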
  28. OpenAI: o1
    6.84M tokens

    The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 model series is trained with large-scale reinforcement learning to reason using chain of thought. The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the launch announcement.

    by openai · 200K context · $15/M input tokens · $60/M output tokens · $21.68/K input imgs
  29. OpenAI: GPT-4o (2024-11-20)
    317M tokens

    The 2024-11-20 version of GPT-4o offers a leveled-up creative writing ability with more natural, engaging, and tailored writing to improve relevance & readability. It’s also better at working with uploaded files, providing deeper insights & more thorough responses. GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of GPT-4 Turbo while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities.

    by openai · 128K context · $2.50/M input tokens · $10/M output tokens · $3.613/K input imgs
  30. OpenAI: o1-preview (2024-09-12)

    The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the launch announcement. Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.

    by openai · 128K context
  31. OpenAI: o1-mini (2024-09-12)
    514K tokens

    The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the launch announcement. Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.

    by openai · 128K context · $1.10/M input tokens · $4.40/M output tokens
  32. OpenAI: o1-mini
    12.3M tokens

    The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the launch announcement. Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.

    by openai · 128K context · $1.10/M input tokens · $4.40/M output tokens
  33. OpenAI: o1-preview

    The latest and strongest model family from OpenAI, o1 is designed to spend more time thinking before responding. The o1 models are optimized for math, science, programming, and other STEM-related tasks. They consistently exhibit PhD-level accuracy on benchmarks in physics, chemistry, and biology. Learn more in the launch announcement. Note: This model is currently experimental and not suitable for production use-cases, and may be heavily rate-limited.

    by openai · 128K context
  34. OpenAI: ChatGPT-4o
    164M tokens

    OpenAI ChatGPT 4o is continually updated by OpenAI to point to the current version of GPT-4o used by ChatGPT. It therefore differs slightly from the API version of GPT-4o in that it has additional RLHF. It is intended for research and evaluation. OpenAI notes that this model is not suited for production use-cases as it may be removed or redirected to another model in the future.

    by openai · 128K context · $5/M input tokens · $15/M output tokens · $7.225/K input imgs
  35. OpenAI: GPT-4o (2024-08-06)
    46M tokens

    The 2024-08-06 version of GPT-4o offers improved performance in structured outputs, with the ability to supply a JSON schema in the response_format. Read more here. GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of GPT-4 Turbo while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities. For benchmarking against other models, it was briefly called "im-also-a-good-gpt2-chatbot".

    by openai · 128K context · $2.50/M input tokens · $10/M output tokens · $3.613/K input imgs
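
    The description mentions supplying a JSON schema via response_format; a hedged sketch of that structured-output call follows. The openai/gpt-4o-2024-08-06 slug and the example schema are assumptions made for illustration.

      import requests

      resp = requests.post(
          "https://openrouter.ai/api/v1/chat/completions",
          headers={"Authorization": "Bearer <OPENROUTER_API_KEY>"},
          json={
              "model": "openai/gpt-4o-2024-08-06",  # assumed slug
              "messages": [{"role": "user", "content": "Extract the city and temperature: 'It is 18°C in Porto.'"}],
              "response_format": {
                  "type": "json_schema",
                  "json_schema": {
                      "name": "weather_reading",  # hypothetical schema, for illustration only
                      "strict": True,
                      "schema": {
                          "type": "object",
                          "properties": {"city": {"type": "string"}, "temp_c": {"type": "number"}},
                          "required": ["city", "temp_c"],
                          "additionalProperties": False,
                      },
                  },
              },
          },
      )
      print(resp.json()["choices"][0]["message"]["content"])  # a JSON string matching the schema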
  36. OpenAI: GPT-4o-mini (2024-07-18)
    452M tokens

    GPT-4o mini is OpenAI's newest model after GPT-4 Omni, supporting both text and image inputs with text outputs. As their most advanced small model, it is many multiples more affordable than other recent frontier models, and more than 60% cheaper than GPT-3.5 Turbo. It maintains SOTA intelligence, while being significantly more cost-effective. GPT-4o mini achieves an 82% score on MMLU and presently ranks higher than GPT-4 on common chat-preference leaderboards. Check out the launch announcement to learn more. #multimodal

    by openai · 128K context · $0.15/M input tokens · $0.60/M output tokens · $7.225/K input imgs
  37. OpenAI: GPT-4o-mini
    3.53B tokens

    GPT-4o mini is OpenAI's newest model after GPT-4 Omni, supporting both text and image inputs with text outputs. As their most advanced small model, it is many multiples more affordable than other recent frontier models, and more than 60% cheaper than GPT-3.5 Turbo. It maintains SOTA intelligence, while being significantly more cost-effective. GPT-4o mini achieves an 82% score on MMLU and presently ranks higher than GPT-4 on common chat-preference leaderboards. Check out the launch announcement to learn more. #multimodal

    by openai · 128K context · $0.15/M input tokens · $0.60/M output tokens · $0.217/K input imgs
  38. OpenAI: GPT-4o (2024-05-13)
    14.2M tokens

    GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of GPT-4 Turbo while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities. For benchmarking against other models, it was briefly called "im-also-a-good-gpt2-chatbot" #multimodal

    by openai · 128K context · $5/M input tokens · $15/M output tokens · $7.225/K input imgs
  39. OpenAI: GPT-4o
    913M tokens

    GPT-4o ("o" for "omni") is OpenAI's latest AI model, supporting both text and image inputs with text outputs. It maintains the intelligence level of GPT-4 Turbo while being twice as fast and 50% more cost-effective. GPT-4o also offers improved performance in processing non-English languages and enhanced visual capabilities. For benchmarking against other models, it was briefly called "im-also-a-good-gpt2-chatbot" #multimodal

    by openai · 128K context · $2.50/M input tokens · $10/M output tokens · $3.613/K input imgs
  40. OpenAI: GPT-4 Turbo
    27.7M tokens

    The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling. Training data: up to December 2023.

    by openai · 128K context · $10/M input tokens · $30/M output tokens · $14.45/K input imgs
  41. OpenAI: GPT-3.5 Turbo (older v0613)
    6.18M tokens

    GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks. Training data up to Sep 2021.

    by openai · 4K context · $1/M input tokens · $2/M output tokens
  42. OpenAI: GPT-4 Turbo Preview
    5.49M tokens

    The preview GPT-4 model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Dec 2023. Note: heavily rate limited by OpenAI while in preview.

    by openai · 128K context · $10/M input tokens · $30/M output tokens
  43. OpenAI: GPT-4 Vision

    Ability to understand images, in addition to all other GPT-4 Turbo capabilities. Training data: up to Apr 2023. Note: heavily rate limited by OpenAI while in preview. #multimodal

    by openai · 128K context
  44. OpenAI: GPT-4 Turbo (older v1106)
    683K tokens

    The latest GPT-4 Turbo model with vision capabilities. Vision requests can now use JSON mode and function calling. Training data: up to April 2023.

    by openai · 128K context · $10/M input tokens · $30/M output tokens
  45. OpenAI: GPT-3.5 Turbo 16k (older v1106)

    An older GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Sep 2021.

    by openai · 16K context
  46. OpenAI: GPT-3.5 Turbo Instruct
    3.23M tokens

    This model is a variant of GPT-3.5 Turbo tuned for instructional prompts and omitting chat-related optimizations. Training data: up to Sep 2021.

    by openai · 4K context · $1.50/M input tokens · $2/M output tokens
  47. OpenAI: GPT-4 32k

    GPT-4-32k is an extended version of GPT-4, with the same capabilities but quadrupled context length, allowing for processing up to 40 pages of text in a single pass. This is particularly beneficial for handling longer content like interacting with PDFs without an external vector database. Training data: up to Sep 2021.

    by openai · 33K context
  48. OpenAI: GPT-4 32k (older v0314)

    GPT-4-32k is an extended version of GPT-4, with the same capabilities but quadrupled context length, allowing for processing up to 40 pages of text in a single pass. This is particularly beneficial for handling longer content like interacting with PDFs without an external vector database. Training data: up to Sep 2021.

    by openai · 33K context
  49. OpenAI: GPT-3.5 Turbo 16k
    7.03M tokens

    This model offers four times the context length of gpt-3.5-turbo, allowing it to support approximately 20 pages of text in a single request at a higher cost. Training data: up to Sep 2021.

    by openai · 16K context · $3/M input tokens · $4/M output tokens
  50. OpenAI: GPT-3.5 Turbo (older v0301)

    GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks. Training data up to Sep 2021.

    by openai · 4K context
  51. OpenAI: GPT-4
    7.93M tokens

    OpenAI's flagship model, GPT-4 is a large-scale multimodal language model capable of solving difficult problems with greater accuracy than previous models due to its broader general knowledge and advanced reasoning capabilities. Training data: up to Sep 2021.

    by openai · 8K context · $30/M input tokens · $60/M output tokens
  52. OpenAI: GPT-3.5 Turbo 16k

    The latest GPT-3.5 Turbo model with improved instruction following, JSON mode, reproducible outputs, parallel function calling, and more. Training data: up to Sep 2021. This version has a higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls.

    by openai · 16K context
  53. OpenAI: GPT-4 (older v0314)
    87K tokens

    GPT-4-0314 is the first version of GPT-4 released, with a context length of 8,192 tokens, and was supported until June 14. Training data: up to Sep 2021.

    by openai · 8K context · $30/M input tokens · $60/M output tokens
  54. OpenAI: GPT-3.5 Turbo
    40.4M tokens

    GPT-3.5 Turbo is OpenAI's fastest model. It can understand and generate natural language or code, and is optimized for chat and traditional completion tasks. Training data up to Sep 2021.

    by openai · 16K context · $0.50/M input tokens · $1.50/M output tokens