Artificial Generalized Incompetence

  • Komodo Rodeo@lemmy.world · 108 points · 1 day ago

    I mean, I’m not going to spend time trying to duplicate their results, but it wouldn’t even slightly surprise me. Cops have been using ChatGPT to streamline their bullshit cop-lingo incident reports, to the extent that it’s caught the notice of lawyers and judges… 100% I believe that the dolts who shit out Trump’s tariff rates used it too.

  • brucethemoose@lemmy.world · 122 points (−4) · 1 day ago

    How about the outlet checks and finds out?

    I did, and I couldn’t get low-temperature Gemini or a local LLM to replicate it, and not all the tariffs seem to be based on the trade deficit ratio, though some suspiciously are.

    Sorry, but this pushes a button of mine: outlets that raise stupidly-easy-to-verify questions but don’t even try to verify them. No, just cite people on Reddit and Twitter…

    • lightnsfw@reddthat.com · 24 points · 23 hours ago

      That bothers me too. Get an actual expert source to verify before you publish shit from randos on Twitter and Reddit.

        • MyNameIsIgglePiggle@sh.itjust.works · 22 points · 20 hours ago

          “these lazy fucks in the government are using ai to come up with policy”

          Also news outlet

          “I am too lazy to do the laziest thing I’m angry about, even though it’s my literal job”

          • Yoga@lemmy.ca · 7 points · 19 hours ago

            “News outlet” is a huge stretch. It’s a crypto currency blog pretending to be news.

        • lightnsfw@reddthat.com · 4 points · 21 hours ago

          But that doesn’t confirm or deny that Trump’s formula came from ChatGPT; they could both be drawing from some other source.

          • brucethemoose@lemmy.world · 2 points (−2) · 20 hours ago

            You can generally toggle LLM “grounding” features, aka inserting web searches into their context.

            Modern LLMs have an information “cutoff” of a few months ago at the latest, so the base models will have zero awareness of this formula.
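The cutoff point above can be sketched as a toy check. The cutoff date below is a hypothetical placeholder, not any particular model’s, and the event date is the widely reported April 2025 announcement:

```python
from datetime import date

# Toy illustration of a training cutoff: a base model's weights only
# reflect data from before the cutoff, so anything that happened later
# can only reach it via grounding (web results injected into context).
CUTOFF = date(2024, 6, 1)  # hypothetical cutoff, not a real model's

def needs_grounding(event_date: date, cutoff: date = CUTOFF) -> bool:
    """True if the model can only know about the event via retrieval."""
    return event_date > cutoff

# The tariff formula was announced well past any such cutoff:
print(needs_grounding(date(2025, 4, 2)))  # True
```

With grounding toggled off, a model that still reproduces the formula is therefore telling you something about its arithmetic tendencies, not about its memory of the news.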

            • lightnsfw@reddthat.com · 3 points · 18 hours ago

              Unless the formula came from something that already existed, which both Trump’s people and these models are referencing to come up with the same number.

    • bassomitron@lemmy.world · 32 points · edited · 1 day ago

      though some suspiciously are.

      Some? A huge portion are. Numerous others have replicated it with visual proof. I agree that news sites should be verifying it, but the NYT did, and documented their proof as well.

    • Grostleton@lemm.ee · 14 points (−6) · 1 day ago

      Because the article is likely just more GenAI vomit, and an LLM doesn’t have any degree of deductive reasoning ability to begin with.

      • brucethemoose@lemmy.world · 9 points · edited · 21 hours ago

        TBH it’s probably human written.

        I used to write small articles for a tech news outlet on the side (HardOCP), and the entire site went under well before the AI boom, because no one can compete with conveyor belts of thoughtless SEO garbage, especially when Google promotes it.

        Point being, this was a problem well before the rise of LLMs.

    • prole@lemmy.blahaj.zone · 4 points (−2) · 1 day ago

      Are you annoyed that they didn’t try to replicate it, or that they’re disparaging LLMs?

  • qwerty@discuss.tchncs.de · 1 point (−35) · 9 hours ago

    They are reciprocal, so they should be the same as what other nations are charging the US. The formula for them is: tariff for X = X’s tariff on the US, so no surprise here.

  • reksas@sopuli.xyz · 30 points · edited · 1 day ago

    what if they all come up with that because it has been publicised, and they just refer to it because they have nothing else to base answers about that specific topic on?

    I just glanced at it and wouldn’t know what something like that is even supposed to look like, so I don’t really know how unhinged the tariff-rate thing is. It wouldn’t surprise me if it was based on nothing but whatever happened to be going through the madman’s mind at the time.

    • kyle@lemm.ee · 12 points · 1 day ago

      The numbers come from an overly simple way to level out trade deficits.

      So if I sell you $100 in goods and you sell me $120 in goods, I’m “losing” money, therefore a 20% tariff (a tax on you for selling to me). In reality, you’re going to raise your prices and sell me $140 worth of the same stuff.

      All the AIs did was expand this to a global scale. What’s insane to me is that the math adds up. It doesn’t take an AI to do this, though; an economics undergrad could come up with the same thing. Understanding the underlying methodology shows how completely it lacks nuance or any understanding of how the world really works.
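The trade-deficit arithmetic above matches the formula that was widely reported for the published rates: half the deficit-to-imports ratio, with a 10% floor. A minimal sketch, assuming that reported formula (not an official implementation):

```python
def reciprocal_tariff(us_exports: float, us_imports: float) -> float:
    """Half the trade-deficit-to-imports ratio, floored at 10%,
    per the widely reported formula (not an official implementation)."""
    deficit_ratio = (us_imports - us_exports) / us_imports
    return max(0.10, deficit_ratio / 2)

# The $100 / $120 example above lands on the 10% floor:
print(reciprocal_tariff(100, 120))  # 0.1
# A country that sells us $100 while buying only $40 gets 30%:
print(reciprocal_tariff(40, 100))  # 0.3
```

Nothing here needs an LLM: it is one subtraction, one division, one halving, and one floor per country, which is exactly why so many people could replicate it.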

    • Bytemeister@lemmy.world · 9 points · 1 day ago

      Yeah, this makes sense to me. ChatGPT isn’t crunching the numbers, weighing conservative ideology, foreign policy goals, and media optics before recommending the ideal number for the Trump admin to implement. Instead it’s just looking for the most widely publicized set of numbers related to that query and regurgitating them.

    • UnderpantsWeevil@lemmy.world (OP) · 7 points · 1 day ago

      what if they all come up with that because it has been publicised

      Then I’d ask who published it and where they got their analysis from. It’s very possible we’ve got an AI that’s built up a backlog of Harvard Business School studies and Caltech economics models to reach the ideal hypothetical tariff regime. But it’s just as likely it’s ingesting 4chan reposts of Ron Paul newsletters and Michael Savage radio transcripts to build up its economic background.

      That’s sort of the problem with AI. There’s no specialist-driven guidance on what data is valuable and what data is crap, no litmus test to separate fact from fiction or serious discussion from trolling. And these Western-developed models, in particular, are very bad about including the origins of their graphed logical output (because that would make the process of hashing and graphing more expensive, in a system that’s already inelegant and resource-intensive).

      I just glanced at it and wouldnt know how something like that is even supposed to be, so I dont really know how unhinged the tariff rate thing is.

      The problem is less that we don’t know how bad the tariff rate is and more that the people designing the policies don’t know either. They’re fishing for answers in the answer pond, and they don’t even know if they’ve got a fish or a boot at the end of the line.

      • NABDad@lemmy.world · 4 points · 1 day ago

        They’re fishing for answers in the answer pond,

        Except they’ve actually dropped their lines in the stupidity toilet.

      • reksas@sopuli.xyz · 1 point · 1 day ago

        One would have had to ask the AI about it before all this to know where it might be getting its information from.

  • elfin8er@lemmy.world · 40 points (−16) · 1 day ago

    Did ChatGPT come up with the color of the sky? AI chatbots ChatGPT, Gemini, Claude and Grok all return the same color for the sky, several X users claim.

      • acosmichippo@lemmy.world · 3 points · 1 day ago

        The point is, ChatGPT is trained on ideas people have already had. It’s not inventing Trump’s economic theory out of thin air.

  • DarkCloud@lemmy.world · 25 points (−6) · edited · 1 day ago

    All the search engines search the same internet, find similar text, output it using similar formulas.

      • DarkCloud@lemmy.world · 2 points (−7) · edited · 23 hours ago

        They are. They record the data, stealing it. They search it (or characteristics of it), and reprint it (in whole or in part) upon request.

        Viewing it as something creative, or other than a glorified remixing machine is the problem. It’s a search engine for creative works they’ve stolen, and reproduce parts of.

        They search the data-space of what they’re “trained” on (our content, the content of human beings), and reproduce statistically defined elements of it.

        They’re search engines that have stolen what they’re “trained on”, and they reproduce it as “results” (be that images or written text, it has to come from our collective data, data we created). It’s theft. It’s copyright fraud. Same as Google stealing books (which it had to be sued over digitizing, and enter into rights agreements over).

        Searching and reproducing content they’ve already recorded (aka stolen without permission), is absolutely part of what they are. Part of what they do.

        Don’t stan for them or pretend they’re creative, intelligent, or doing anything original.

        The real lie is that it’s “training data”. It’s not. It’s the internet, and it’s not training; it’s theft, stealing and copying (violating copyright). Digital stealing, processed into a “data set”: a representation or repackaging of our original works.

      • UnderpantsWeevil@lemmy.world (OP) · 4 points (−14) · edited · 1 day ago

        The basic graphing technology used by AI is the same pioneered by AltaVista and optimized by Google years later. We’ve added a layer of abstraction through user I/O, such that you get a formalized text response encapsulating results rather than a series of links containing related search terms. But the methodology used to harvest, hash, and sort results is still all rooted in graph theory.

        The difference between then and now is that back then you’d search “Horse” in AltaVista and get a dozen links ranging from ranches and vet clinics to anime and porn. Now you get a text blob that tries to synthesize all the information in those sources down to a few paragraphs of relevant text.

        • MartianSands@sh.itjust.works · 13 points (−2) · 1 day ago

          That simply isn’t true. There’s nothing in common between an LLM and a search engine, except insofar as the people developing the LLM had access to search engines, and may have used them during their data-gathering efforts for training data.

          • DarkCloud@lemmy.world · 1 point (−4) · edited · 23 hours ago

            “data gathering” and “training data” is just what they’ve tricked you into calling it (just like they tried to trick people into calling it an “intelligence”).

            It’s not data gathering, it’s stealing. It’s not training data, it’s our original work.

            It’s not creating anything, it’s searching and selectively remixing the human creative work of the internet.

            • MartianSands@sh.itjust.works · 3 points (−1) · 23 hours ago

              You’re putting words in my mouth, and inventing arguments I never made.

              I didn’t say anything about whether the training data is stolen or not. I also didn’t say a single word about intelligence, or originality.

              I haven’t been tricked into using one piece of language over another, I’m a software engineer and know enough about how these systems actually work to reach my own conclusions.

              There is not a database tucked away in the LLM anywhere which you could search through to find the phrases it was trained on; it simply doesn’t exist.

              That isn’t to say it’s completely impossible for an LLM to spit out something which formed part of the training data, but it’s pretty rare. 99% of what it generates doesn’t come from anywhere in particular, and you wouldn’t find it in any of the sources which were fed to the model in training.

              • DarkCloud@lemmy.world · 1 point (−3) · edited · 21 hours ago

                It’s searched in training and tagged for use/topic, then that info is processed and filtered through layers. So it’s pre-searched, if you will, like meta tags in the early internet.

                Then the data is processed into cells which queries flow through during generation.

                99% of what it generates doesn’t come from anywhere in particular, and you wouldn’t find it in any of the sources which were fed to the model in training.

                Yes it does. The fact that you in particular can’t recognize where it comes from doesn’t matter; it’s still using copyrighted works.

                Anyways you’re an AI stan, and defending theft. You can deny it all day, but it’s what you’re doing. “It’s okay, I’m a software engineer I’m allowed to defend it”

                …as if being a software engineer doesn’t stop you from also being a dumbass. Of course it doesn’t.

                • MartianSands@sh.itjust.works · 3 points (−1) · 21 hours ago

                  You’re still putting words in my mouth.

                  I never said they weren’t stealing the data

                  I didn’t comment on that at all, because it’s not relevant to the point I was actually making, which is that people treating the output of an LLM as if it were derived from any factual source at all is really problematic, because it isn’t.

    • IninewCrow@lemmy.ca · 5 points · 1 day ago

      … and generating AI porn, so much AI porn, it will destroy humanity with so much AI porn

  • InvertedParallax@lemm.ee · 6 points · edited · 1 day ago

    Actually, it was the Palantir Gotham threat model… which has a private ChatGPT model on the backend :(

  • ArchRecord@lemm.ee · 3 points (−1) · 1 day ago

    I tried replicating this myself and got no similar results. It took enough coaxing just to get the model not to cite existing tariffs, then to make it talk about entire nations instead of tariffs on specific sectors, and even then it mostly just gave 10, 12, and 25% for most of the answers.

    I have no doubt this is possible, but until I see some actual proof, this is entirely hearsay.