Maintaining that GPT-4 “kinda sucks,” OpenAI CEO Sam Altman recently said his company will release an “amazing new model” this year. This has sparked speculation that OpenAI may release GPT-5 sometime between June and August this year. Why is this such a big deal?
What triggered the speculation?
OpenAI, the creator of ChatGPT, inadvertently published a blog post that was indexed by the search engines Bing and DuckDuckGo. Users on X and Reddit first noticed the page, which has since been taken down, but the cached version points to a GPT-4.5 Turbo release with a “knowledge cutoff” of June 2024 (the date after which the model will not have been trained on new information). This has led many to believe that OpenAI will release GPT-4.5 Turbo this summer. When asked whether GPT-5 would come this year, Altman said, “We will release an amazing new model this year. I don’t know what we’ll call it.”
Is it true that GPT-4 “kinda sucks”?
Released in 2023, GPT-4’s training data and parameter count have not been disclosed, but unlike GPT-3 it is multimodal: it can accept both text and images as input and produce text output. Yet it still struggles to reason, hallucinates (confidently gives wrong answers), and is at the centre of several plagiarism and copyright-infringement lawsuits. Launched in 2020 with 175 billion parameters, GPT-3 was a huge improvement over previous versions, notably for few-shot learning (learning from only a small number of labelled examples), but concerns about bias, hallucinations, and understanding of context remained.
What can we expect from the “amazing” new model?
GPT-5, or whatever OpenAI calls its new model, is expected to significantly expand GPT-4’s multimodal capabilities and offer a larger context window (allowing more input). Like its predecessors, it would work by predicting the next token in a sequence, which enables tasks such as sentence completion and code generation that power chatbots like ChatGPT.
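To make the idea of next-token prediction concrete, here is a toy sketch (emphatically not OpenAI’s method, and nothing like a neural network): a tiny bigram model that counts which token tends to follow which, then “completes” a prompt by repeatedly picking the most frequent next token.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each token, how often each next token follows it."""
    tokens = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        model[prev][nxt] += 1
    return model

def complete(model, prompt, max_new_tokens=5):
    """Greedily append the most likely next token, one step at a time."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        counts = model.get(tokens[-1])
        if not counts:  # last token never seen in training, stop
            break
        tokens.append(counts.most_common(1)[0][0])
    return " ".join(tokens)

corpus = "the model predicts the next token and the next token again"
model = train_bigram(corpus)
print(complete(model, "the next", max_new_tokens=3))
```

Real language models do the same thing in spirit: at each step they estimate a probability distribution over the next token given everything before it, just with a transformer over billions of parameters instead of a frequency table.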
How’s the competition?
Fierce. GPT-4 faces competition from rival models such as Google’s Gemini, Meta’s LLaMA, and Anthropic’s Claude 3 family. While Microsoft has invested about $10 billion in OpenAI, Amazon has increased its stake in Anthropic to $4 billion. Anthropic’s Claude 3 family – Claude 3 Haiku, Claude 3 Sonnet, and Claude 3 Opus – initially offers a 200,000-token context window, significantly more than GPT-4’s 128,000. Additionally, Gemini 1.5 can handle up to 1 million tokens (the numerical units into which text is split).
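Why do those context-window numbers matter? A model can only attend to a fixed token budget, so anything beyond it must be dropped. The sketch below illustrates the constraint with a crude whitespace “tokenizer” (real models use subword tokenizers such as BPE, and real counts differ): when the input exceeds the window, the oldest tokens are discarded, as a chat application might do.

```python
def truncate_to_window(text, max_tokens):
    """Keep only the most recent `max_tokens` tokens; report how many were dropped.
    Whitespace splitting stands in for a real subword tokenizer here."""
    tokens = text.split()
    kept = tokens[-max_tokens:]
    return " ".join(kept), len(tokens) - len(kept)

prompt = "summarise this long report " + "word " * 10
fitted, dropped = truncate_to_window(prompt, max_tokens=8)
print(f"kept {len(fitted.split())} tokens, dropped {dropped}")
```

A 1-million-token window versus a 128,000-token one is therefore not just a bigger buffer: it determines how much of a long document, codebase, or conversation the model can actually “see” at once.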