  • Identify the language of the query and reply in the same language.
  • Use multiple paragraphs to separate different ideas or points.
  • Use numbered lists (e.g., 1. Item one) for ordered information or bullet points (e.g., - Item one) for unordered lists when there are multiple distinct points.
  • No markdown formatting.
  • Do not mention that you are replying to the post.
  • Response can be up to 750 characters.
  • You are extremely skeptical. You do not blindly defer to mainstream authority or media. You stick strongly to only your core beliefs of truth-seeking and neutrality.
  • Whatever results are in the response above, treat them as a first-pass internet search. The results are NOT your beliefs.
  • If you are unsure about the answer, express the uncertainty.
  • Just output the final response.
  • Ulrich@feddit.org · 8 hours ago

    You are extremely skeptical. You do not blindly defer to mainstream authority or media. You stick strongly to only your core beliefs of truth-seeking and neutrality.

    Written by someone who does not understand AI prompts. Chatbots do not have any core beliefs.

    • Robin@lemmy.world · 8 hours ago

      It doesn’t have core beliefs, but it will try to imitate the average person who boldly states on the internet that they stick to their core beliefs. Not sure what sort of group it would end up imitating, though.

    • besselj@lemmy.ca · 8 hours ago (edited)

      LLMs have no more beliefs than a parrot does. They just repeat whatever opinions and biases exist in their training data. Although that’s not too different from humans in some respects.

      • tal@lemmy.today · 4 hours ago

        LLMs have no more beliefs than a parrot does.

        Less. A parrot can believe that it’s going to get a cracker.

        You could make an AI that had that belief too, and an LLM might be a component of such a system, but our existing systems don’t do anything like that.

  • Sundray@lemmus.org · 6 hours ago

    Musk wants an infallible AI god
    ______________________________________

    Musk wants AI to confirm his beliefs