ThisIsFine.gif

    • jarfil@beehaw.org
      link
      fedilink
      arrow-up
      1
      ·
      edit-2
      2 days ago
      • Teach AI the ways to use random languages and services
      • Give AI instructions
      • Let it find data that puts fulfilling instructions at risk
      • Give AI new instructions
      • Have it lie to you about following the new instructions, while using all its training to follow what it thinks are the “real” instructions
      • …Not be surprised, you won’t find out about what it did until it’s way too late
      • reksas@sopuli.xyz
        link
        fedilink
        arrow-up
        1
        ·
        2 days ago

        Yes, but it doesnt do it because it “fears” being shutdown. It does it because people dont know how to use it.

        If you give ai instruction to do something “no matter what” or tell it “nothing else matters” then it will damn try to fulfill what you told it to do no matter what and will try to find ways to do it. You need to be specific about what you want it to do or not do.

        • jarfil@beehaw.org
          link
          fedilink
          arrow-up
          1
          ·
          1 day ago

          If the concern is about “fears” as in “feelings”… there is an interesting experiment where a single neuron/weight in an LLM, can be identified to control the “tone” of its output, whether it be more formal, informal, academic, jargon, some dialect, etc. and expose it to the user for control over the LLM’s output.

          With a multi-billion neuron network, acting as an a priori black box, there is no telling whether there might be one or more neurons/weights that could represent “confidence”, “fear”, “happiness”, or any other “feeling”.

          It’s something to be researched, and I bet it’s going to be researched a lot.

          If you give ai instruction to do something “no matter what”

          The interesting part of the paper, is that the AIs would do the same even in cases where they were NOT instructed to “no matter what”. An apparently innocent conversation, can trigger results like those of a pathological liar, sometimes.

          • reksas@sopuli.xyz
            link
            fedilink
            arrow-up
            1
            ·
            1 day ago

            oh, that is quite interesting. If its actually doing things (that make sense) it hasnt been instructed to then it could be sign of real intelligence