• circuitfarmer@lemmy.world · 4 points · 11 hours ago

    If these people still don’t understand the direct link between training data and, ultimately, the outputs of the model, they should not be working with “tools” that can fuck up both people and countries.

    I’m tired of AI being the biggest pass-the-buck scheme in history.

    • badgermurphy@lemmy.world · 2 points · 9 hours ago

      I think there is an impending underwriting apocalypse for LLMs that's going to end the party. Companies want insurance, and insurance companies assess risk. They're going to need someone to blame when things go expensively wrong with AI, or else the policies will have to pay out, and insurers really don't like doing that.

    • LemmyFeed@lemmy.dbzer0.com · 3 points · 10 hours ago

      I’m tired of AI being the biggest pass-the-buck scheme in history

      For real, it’s ridiculous. They’re trying to pass all accountability to something that can never be held accountable. There are already plenty of stories of AI making some shit-ass decision while human operators were unable to intervene, everything from deleting a database to denying a health insurance claim, and no one is held responsible. And the AI will just say, “you’re absolutely right to call me out on that, I shouldn’t have killed those children. My bad, I’ll do better next time.”

      • circuitfarmer@lemmy.world · 3 points · 10 hours ago

        Yep. And that shit:

        “you’re absolutely right to call me out on that, I shouldn’t have killed those children. My bad, I’ll do better next time”

        Is also just a function of training data. There are no consequences, in part because it’s not even a real apology: these systems are not conscious, no matter what certain bad actors want you to think. They are purely algorithms operating on vector relationships between tokens, learned from a massive amount of data. They are, at their core, mimics. The user ascribes meaning to the responses because we are not used to things that sound so much like us.

        At a certain point, they need to be considered a risk to public health and the public good. I’d argue we are already beyond that point, but with so many investors still expecting a return (without even understanding what they’ve invested in), it’ll just be like screaming into the void forever.

  • brsrklf@jlai.lu · 15 points · 21 hours ago

    Well, if all it takes for your super-intelligent thinking machine to be evil is just someone to suggest it might be evil, something doesn’t add up.

      • Eggyhead@lemmy.world · 2 points · 11 hours ago

        I don’t think the point the person you replied to was making had much to do with the artificial intelligence being an artificial super intelligence.

      • brsrklf@jlai.lu · 6 points · 18 hours ago

        There is a distinct rift between what the AI companies are trying to sell and what their products can actually do. Anthropic is among the worst: every other week they put out a statement about how astonished they are by their own AI, including the grandiose claims that have been thoroughly debunked.

        They don’t “claim” it outright. They suggest, and deliberately exploit people’s imagination. Because if they didn’t, nobody would be buying their shit.

  • bstowe@piefed.social · 16 points (1 down) · 23 hours ago

    Quick: Everyone start writing stories about a super cool, much beloved, benevolent AI that deletes debt records!

  • Australis13@fedia.io · 12 points · 22 hours ago

    Garbage in, garbage out.

    This is a basic concept. Train an LLM on the entire Internet and you’re going to end up with some very disturbing behaviours, which may or may not be outliers.

  • greyscale@lemmy.grey.ooo · 35 points · 1 day ago

    “Why won’t the internet stop polluting our plagiarism machine when we scrape everything?”

    Imagine telling on yourself like this.

  • M1k3y@discuss.tchncs.de · 10 points · 1 day ago

    It’s all the AI safety people’s fault: stop expressing your concerns and start blindly trusting us already. /s