For OpenAI, o1 represents a step toward its broader goal of human-like artificial intelligence. More practically, it does a better job at writing code and solving multistep problems than previous models. But it’s also more expensive and slower to use than GPT-4o. OpenAI is calling this release of o1 a “preview” to emphasize how nascent it is.

The training behind o1 is fundamentally different from its predecessors, OpenAI’s research lead, Jerry Tworek, tells me, though the company is being vague about the exact details. He says o1 “has been trained using a completely new optimization algorithm and a new training dataset specifically tailored for it.”

OpenAI taught previous GPT models to mimic patterns from its training data. With o1, it trained the model to solve problems on its own using a technique known as reinforcement learning, which teaches the system through rewards and penalties. It then uses a “chain of thought” to process queries, similarly to how humans process problems by going through them step-by-step.
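
To make the “rewards and penalties” idea concrete, here is a toy sketch. OpenAI has not published o1’s actual algorithm, so the strategies, reward values, and update rule below are all invented for illustration:

```python
# Toy sketch of reinforcement learning via rewards and penalties.
# Everything here (the strategies, reward values, and the update rule)
# is invented for illustration; this is not OpenAI's training code.
import math
import random

STRATEGIES = ["answer directly", "reason step by step"]
weights = {s: 0.0 for s in STRATEGIES}  # the "policy" being trained

def sample_strategy():
    """Pick a strategy with probability proportional to exp(weight)."""
    total = sum(math.exp(w) for w in weights.values())
    r = random.uniform(0, total)
    for s, w in weights.items():
        r -= math.exp(w)
        if r <= 0:
            return s
    return s  # floating-point fallback

def solve(strategy):
    """Stand-in for the model attempting a problem; in this toy world,
    step-by-step reasoning succeeds more often than answering directly."""
    return random.random() < (0.9 if strategy == "reason step by step" else 0.5)

LEARNING_RATE = 0.1
for _ in range(2000):
    s = sample_strategy()
    reward = 1.0 if solve(s) else -1.0    # reward success, penalize failure
    weights[s] += LEARNING_RATE * reward  # nudge the policy toward reward

print(weights)  # "reason step by step" ends up with the higher weight
```

The real system presumably scores entire reasoning traces rather than two canned strategies, but reward-driven updating is the core idea.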

At the same time, o1 is not as capable as GPT-4o in a lot of areas. It doesn’t do as well on factual knowledge about the world. It also doesn’t have the ability to browse the web or process files and images. Still, the company believes it represents a brand-new class of capabilities. It was named o1 to indicate “resetting the counter back to 1.”

I think this is the most important part (emphasis mine):

As a result of this new training methodology, OpenAI says the model should be more accurate. “We have noticed that this model hallucinates less,” Tworek says. But the problem still persists. “We can’t say we solved hallucinations.”

  • khepri@lemmy.world

    So they slapped some reinforcement learning on top of their LLM and are claiming that gives it “reasoning capabilities”? Or am I missing something?

    • Zos_Kia@lemmynsfw.com

      No, the article is badly worded. Earlier models already have reasoning skills with some rudimentary CoT, but they leaned more heavily into it for this model.

      My guess is they didn’t train it on the 10-trillion-word corpus (which is expensive and has diminishing returns) but rather on a heavily curated RLHF dataset.
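
      For anyone unfamiliar, “rudimentary CoT” just means prompting the model to show its work before answering. A minimal sketch with the OpenAI Python client; the model name and question are placeholders, and this is plain prompting, not whatever OpenAI does internally for o1:

      ```python
      # Chain-of-thought by prompting: just ask the model to show its work.
      # Requires the `openai` package and an OPENAI_API_KEY in the
      # environment; the model name and question are placeholders.
      from openai import OpenAI

      client = OpenAI()

      response = client.chat.completions.create(
          model="gpt-4o",  # placeholder; any chat model works
          messages=[
              {"role": "system",
               "content": "Think through the problem step by step, "
                          "then state the final answer."},
              {"role": "user",
               "content": "A bat and a ball cost $1.10 in total. The bat "
                          "costs $1.00 more than the ball. "
                          "How much does the ball cost?"},
          ],
      )
      print(response.choices[0].message.content)
      ```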

  • oakey66@lemmy.world

    It’s a better prediction model. There’s no reasoning because it’s not understanding anything you’re typing. We’re no closer to general AI.

    • ContrarianTrail@lemm.ee

      It may not be capable of truly understanding anything, but it sure seems to do a better job of it than the vast majority of people I talk to online. I might spend 45 minutes carefully typing out a message explaining my view, only for the other person to completely miss every point I made. With ChatGPT, though, I can speak in broken English, and it’ll repeat back the point I was trying to make much more clearly than I could ever have done myself.

      • Voroxpete@sh.itjust.works

        I hate to say it, bud, but the fact that you feel like you have more productive conversations with highly advanced autocomplete than you do with actual humans probably says more about you than it does about the current state of generative AI.

        • Zos_Kia@lemmynsfw.com

          You should have asked ChatGPT to explain the comment to you, because that’s not what they said.

    • Drunemeton@lemmy.world

      I wish more people would realize this! We’re years away from a truly reasoning computer.

      Right now it’s all mimicry. Mimicry that hallucinates no less…

      • Ilandar@aussie.zone

        I think most people do understand this, and the naysayers get too caught up in the words being used, like how you still get people frothing at the mouth over the use of the word “intelligence” years after it entered mainstream conversation. Most people using that word don’t literally think ChatGPT is a new form of intelligent life.

      • Echo Dot@feddit.uk

        I don’t think anyone is actually claiming this is AGI, though. Basically people are going around saying “it’s not AGI, you idiot” when no one’s actually saying it is.

        You’re arguing against a point no one’s making.

        • shiftymccool@programming.dev

          Except that we had to come up with the term “AGI” because the idiots running around screaming “intelligence” stole the term “AI”.

          • Echo Dot@feddit.uk

            No we didn’t; “Artificial General Intelligence” has been a distinct term since the ’90s.

            We’ve always differentiated Artificial Intelligence and Artificial General Intelligence.

            What we have now is AI; I don’t know anyone who’s claiming that it’s AGI, though.

            People keep saying people are saying that this is AGI, but I’ve not seen anyone say that, not in this thread or anywhere else. What I have seen is people saying this is a step on the road to AGI, which is debatable, but that isn’t the same as saying this thing here is AGI.

            Edit to add proof:

            From Wikipedia, although I’m sure you can find other sources if you don’t believe me.

            The term “artificial general intelligence” was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. A mathematical formalism of AGI was proposed by Marcus Hutter in 2000.

            So all of this happened long before the rise of large language models, so no, the term has not been co-opted.

    • Defaced@lemmy.world

      OpenAI doesn’t want you to know that, though; they want their work to show progress so they get more investor money. It’s pretty fucking disgusting and dangerous to call this tech any form of artificial intelligence. The naming conventions that make this tech sound human are also dangerous and irresponsible.

      • ContrarianTrail@lemm.ee

        It is literally artificial intelligence, though. Just because ChatGPT doesn’t perform the way a layperson imagined it would doesn’t mean it’s not AI. They just have an unrealistic expectation of what counts as AI, along with the common misconception that AI and AGI are the same thing.

        A chess-playing robot uses artificial intelligence as well. It’s a narrow AI, meaning it can do one thing really well, but that doesn’t translate to other things. AGI, on the other hand, stands for Artificial General Intelligence. Humans are an example of general intelligence, meaning we have the cognitive ability to perform well on several unrelated tasks.

    • nave@lemmy.caOP

      At the same time, o1 is not as capable as GPT-4o in a lot of areas. It doesn’t do as well on factual knowledge about the world. It also doesn’t have the ability to browse the web or process files and images. Still, the company believes it represents a brand-new class of capabilities. It was named o1 to indicate “resetting the counter back to 1.”

      I think it’s more of a proof of concept than a fully functioning model at this point.

        • andyburke@fedia.io

          Facts. A “reasoning AI” has problems with … lemme check this again … facts?

          Find the comment about psychics; it’s exactly the situation we’re currently in.

    • Voroxpete@sh.itjust.works

      This example doesn’t prove what you think it does. It shows pattern detection - something computers are inherently very well suited for - but it doesn’t demonstrate “reasoning” in any meaningful way.

      • kromem@lemmy.world

        You should really look at the full CoT traces on the demos.

        I think you think you know more than you actually know.

          • kromem@lemmy.world

            Actually, they are hiding the full CoT sequence outside of the demos.

            What you are seeing there is a summary, but because the actual process is hidden it’s not possible to see what actually transpired.

            People are very not happy about this aspect of the situation.

            It also means that model context (which research has shown to be much more influential than previously thought) is now partly hidden, with exclusive access and control held by OAI.

            There are a lot of things to focus on in that image, and “hur dur the stochastic model can’t count letters in this cherry-picked example” is the least among them.
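
            And for what it’s worth, the letter-counting failure is a tokenization artifact rather than anything deep: the model never sees individual characters. A quick sketch with the tiktoken library (the exact split can vary by encoding):

            ```python
            # Models read tokens, not letters, which is why character-level
            # tasks like counting the r's in "strawberry" trip them up.
            import tiktoken  # pip install tiktoken

            enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding
            tokens = enc.encode("strawberry")
            print([enc.decode([t]) for t in tokens])
            # Prints something like ['str', 'aw', 'berry']; the individual
            # letters never appear as separate units in the model's input.
            ```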

  • sinceasdf@lemmy.world

    Lol, Lemmy has the funniest AI haters; they drown out any real criticism with stupid strawman nonsense.

    • Zos_Kia@lemmynsfw.com
      • it’s not actually AI
      • it’s just fancy autocomplete / glorified Markov chains
      • it can’t reason it’s just a pLagIaRisM MaChiNe

      Now if I want to win the annoying Lemmy bingo I just need to shill extra hard for more restrictive copyright law!

  • ulkesh@lemmy.world

    I just love how people seem to want to avoid using the word lie.

    It’s either misinformation, or alternative facts, or hallucinations.

    Granted, a lie does tend to have intent behind it, so with ChatGPT it’s probably better to say falsehood instead. But either way, it’s not fact, it’s not truth, and people, especially schools, should stop treating it as a credible source.

    • IndustryStandard@lemmy.world

      Being wrong is not the same as lying. When LLMs start giving wrong answers on purpose to mislead people, we’ll have a big problem.