• Hirom@beehaw.org

    The last paragraph is interesting.

    Sure, generating harmful responses 7.5% of the time is better than 51% of the time. But we’re still far from a safe LLM, if that’s even possible.

    I’d rather companies NOT make LLMs available publicly as a service unless the rate is < 1%. What they’re doing now isn’t responsible.

    • Kissaki@beehaw.org

      What they’re doing now isn’t responsible.

      Would it be responsible if every response started with “I lie x% of the time, but here’s my response:”?

      • Hirom@beehaw.org

        Would it be responsible to sell canned beef that makes you sick 7.5% of the time?

        What if there was a notice saying “Only 7.5% of our delicious canned beef contains listeria”?

        This is how to cover your ass. This is not how to be responsible.

  • Kissaki@beehaw.org

    “snubs”?

    I have no idea what that means, even with the context.