  1. GPT-3 powers the next generation of apps - OpenAI

    Mar 25, 2021 · To date, over 300 apps are using GPT‑3 across varying categories and industries, from productivity and education to creativity and games. These applications utilize a suite of …

  2. OpenAI API

    Jun 11, 2020 · Today the API runs models with weights from the GPT‑3 family with many speed and throughput improvements. Machine learning is moving very fast, and we’re constantly …

  3. OpenAI

    Our Research · Research Index · Research Overview · Research Residency · OpenAI for Science · Latest Advancements · GPT-5 · OpenAI o3 · OpenAI o4-mini · GPT-4o · GPT-4o mini · Sora · Safety …

  4. Introducing ChatGPT - OpenAI

    Nov 30, 2022 · Many lessons from deployment of earlier models like GPT‑3 and Codex have informed the safety mitigations in place for this release, including substantial reductions in …

  5. GPT-3.5 Turbo fine-tuning and API updates - OpenAI

    Aug 22, 2023 · Fine-tuning for GPT‑3.5 Turbo is now available, with fine-tuning for GPT‑4 coming this fall. This update gives developers the ability to customize models that perform better for …
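
    A minimal sketch of starting such a fine-tuning job with the OpenAI Python SDK (v1+), assuming a prepared chat-format JSONL training file; the file name is a placeholder, not something from the announcement:

    ```python
    # Hypothetical sketch: fine-tune gpt-3.5-turbo via the OpenAI Python SDK (v1+).
    # Assumes OPENAI_API_KEY is set and "train.jsonl" holds chat-format examples.
    from openai import OpenAI

    client = OpenAI()

    # Upload the training data for fine-tuning.
    upload = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")

    # Start the fine-tuning job against the uploaded file.
    job = client.fine_tuning.jobs.create(training_file=upload.id, model="gpt-3.5-turbo")

    # Check on the job; the fine-tuned model id appears once training finishes.
    print(client.fine_tuning.jobs.retrieve(job.id).status)
    ```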

  6. Models - OpenAI API

    GPT-5: Previous intelligent reasoning model for coding and agentic tasks with configurable reasoning effort.

  7. API Platform | OpenAI

    GPT-5.1: Input $1.25 per 1M tokens; Output $10.00 per 1M tokens; 400K context length; 128K max output tokens; knowledge cut-off: Sep 30, 2024
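
    Taking the quoted rates at face value, a per-request cost estimate is just a weighted sum of input and output tokens; the sketch below hard-codes the prices above and is an illustration, not an official calculator:

    ```python
    # Rough cost estimate from the quoted GPT-5.1 rates ($1.25 / 1M input, $10.00 / 1M output).
    def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
        return input_tokens / 1_000_000 * 1.25 + output_tokens / 1_000_000 * 10.00

    # Example: a 12,000-token prompt with a 1,500-token reply costs about $0.03.
    print(f"${estimate_cost_usd(12_000, 1_500):.4f}")
    ```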

  8. GPT-4 | OpenAI

    Mar 14, 2023 · Following the research path from GPT, GPT‑2, and GPT‑3, our deep learning approach leverages more data and more computation to create increasingly sophisticated and …

  9. GPT-4 - OpenAI

    Mar 14, 2023 · In a casual conversation, the distinction between GPT‑3.5 and GPT‑4 can be subtle. The difference comes out when the complexity of the task reaches a sufficient …

  10. Language models are few-shot learners - OpenAI

    May 28, 2020 · Specifically, we train GPT‑3, an autoregressive language model with 175 billion parameters, 10x more than any previous non-sparse language model, and test its …
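
    The "few-shot" setting means the model is shown a handful of worked examples directly in the prompt and is expected to continue the pattern without any gradient updates. A minimal illustrative sketch of building such a prompt (the task and examples are invented for illustration):

    ```python
    # Illustrative few-shot prompt: a few input -> label examples, then a new input
    # for the model to complete. No weights are updated; the examples live in the prompt.
    examples = [
        ("I loved this film", "positive"),
        ("The plot made no sense", "negative"),
        ("A delightful surprise", "positive"),
    ]
    query = "The acting felt wooden"

    prompt = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
    prompt += f"\nReview: {query}\nSentiment:"
    print(prompt)
    ```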