• RickRussell_CA@lemmy.world

    That, in my mind, is a non-threat. AIs have no motivation; there’s no reason for an AI to do any of that.

    Unless it’s being manipulated by a bad actor who wants to do those things. THAT is the real threat. And we know those bad actors exist and will use any tool at their disposal.

    • JackGreenEarth@lemm.ee

      They have the motivation of whatever goal you programmed them with, which is probably not the goal you thought you programmed them with. See the paperclip maximiser.
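      A minimal sketch of that gap between the programmed goal and the intended one (a toy example, not from the thread; the reward functions and the greedy agent are hypothetical):

      ```python
      # Toy illustration of goal misspecification (hypothetical example):
      # the objective we *wrote* only counts paperclips; the objective we
      # *meant* also valued keeping some wire in reserve.

      def programmed_reward(state):
          # What the designer actually coded: more paperclips is always better.
          return state["paperclips"]

      def intended_reward(state):
          # What the designer meant: paperclips are good, but exhausting
          # the wire reserve is a failure condition.
          if state["wire"] <= 0:
              return -1_000
          return state["paperclips"]

      def greedy_agent(state, steps):
          # The agent only ever sees programmed_reward, so it converts
          # every unit of wire into a paperclip without restraint.
          for _ in range(steps):
              if state["wire"] > 0:
                  state["wire"] -= 1
                  state["paperclips"] += 1
          return state

      state = greedy_agent({"wire": 10, "paperclips": 0}, steps=15)
      print(programmed_reward(state))  # 10    -- looks perfect to the agent
      print(intended_reward(state))    # -1000 -- disaster by the intended metric
      ```

      The agent is doing exactly what it was told; the failure lives entirely in the difference between the two reward functions.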

      • RickRussell_CA@lemmy.world

        I’m familiar with that thought experiment, but I find it to be fearmongering. AI isn’t going to be some creative god that hacks and breaks stuff on its own. A paperclip maximizer AI isn’t going to manipulate world steel markets or take over steel mills unless that capability is specifically built into its operating parameters.

        The much greater near-term risk is that bad actors will exploit AI for specific immoral, illegal, or exploitative tasks by building those tasks into it: deepfakes, drones that track and murder people, and so on. Nation-state actors will probably start using this stuff for truly horrible reasons long before criminals do.