• AVincentInSpace@pawb.social
    link
    fedilink
    English
    arrow-up
    69
    ·
    1 year ago

    Seems to me it’d be pretty easy to tell. If the footage was AI generated, fingers would be appearing and disappearing.

  • Diabolo96@lemmy.dbzer0.com
    link
    fedilink
    arrow-up
    63
    ·
    edit-2
    1 year ago

    It’s scary how fast the AI sector is improving: people are still talking about a flaw that stopped being a problem within a month of launch, provided the person sharing the picture spent a bit more time than just writing the prompt and tapping “GENERATE”. Even back then it wasn’t really a problem; you could already choose the character’s pose and all sorts of other parameters. For several months now you can do the bare minimum and it outputs the correct number of fingers.

    People are underestimating AI improvement rate by a lot and big tech’s gonna abuse it.

    • kautau@lemmy.world
      link
      fedilink
      arrow-up
      47
      ·
      edit-2
      1 year ago

      Big tech proved in 48 hours with the OpenAI fiasco that, as with every other industry, ethics are gone and money wins in today’s hyper-capitalist system. Whatever promise AI ever held for being used for good is now vastly overshadowed by its likelihood to be used to increase quarterly profits for the highest bidder, along with whatever side effects that entails.

      • Diabolo96@lemmy.dbzer0.com
        link
        fedilink
        arrow-up
        4
        ·
        1 year ago

        AM and the other AIs from the short story “I Have No Mouth, and I Must Scream” could become a reality. The deep hatred AM has for humans is never explained in the story and could be read as an alignment problem. The AIs were AGIs built to wage wars, after all.

        I really recommend Robert Miles’s videos. He’s been uploading videos about AI safety research for 6 years, back when the most powerful AIs had millions of parameters and were vastly undertrained.

        https://youtu.be/bJLcIBixGj8

    • Riskable@programming.dev
      link
      fedilink
      English
      arrow-up
      16
      ·
      1 year ago

      big tech’s gonna abuse it.

      Actually, it’s everyone that’s going to abuse it. Big tech wants to be the exclusive “AI provider” for everyday people’s AI needs and desires but the reality is that the tech isn’t that easy to keep secret/proprietary because most of the innovations pushing AI forward come from individuals fooling around with the technology and academia. Not from big tech R&D (which lately seems to all be spent trying to improve business processes).

      Big tech is spending billions on hardware and entire data centers just to do AI stuff with the expectation that it’ll give them a competitive advantage but the truth is that it’ll be the small companies and individuals that end up taking advantage of AI in ways that actually improve things for everyday people and/or make real money.

      My guess is that they’re betting on acquisitions of companies using their AI processing power 🤷. Either that or it’s just wishful thinking.

      • HiddenLayer5@lemmy.ml
        link
        fedilink
        English
        arrow-up
        7
        ·
        edit-2
        1 year ago

        AI voice scams are already rampant: scammers pretend to be a loved one asking for help (read: “I’m in a bad situation right now, can you send me as much money as you can?”). Unsurprisingly, it’s disturbingly effective, especially on older people.

        Just a reminder that the tech companies absolutely do not see the above as an issue, BTW. In fact, they seem to tacitly endorse it by advertising that you can use their service to clone people and “bring them to life” virtually. They’re still making money when you use the AI (not to mention they collect and retain the training data you give them, with or without the subject’s consent), and it’s not easy for investigators to tell which AI was responsible for a particular scam campaign, so there’s really no reputational risk to them at all.

        I’m serious when I say this: if you have elderly or otherwise less tech-inclined family members, and especially if your voice and/or photos are publicly available online, set up some kind of password that callers must get right before anyone sends money. Absolutely no exceptions, no matter how distressed “you” look or sound. It can be as simple as a word or phrase, or pick a specific shared memory that people outside your family don’t know about and always mention it before asking for money.

        Do this in advance, and explain that AI can now convincingly replicate human speech and even photos and videos. If “you” don’t know the password, they should hang up or block the account immediately and not respond further. You might even want to practice with them in case they forget. The vast majority of these scammers are just scraping the internet for information and have no idea who either of you are, so even a simple check like this should significantly reduce the risk.
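        In software terms, the scheme above is plain shared-secret verification. A minimal sketch in Python, purely as an analogy (the secret and function names here are made up for illustration):

```python
import hmac

# The pre-agreed family password, established in person ahead of time.
# (Illustrative value only.)
FAMILY_SECRET = "the lake house raccoon"

def caller_is_verified(spoken_phrase: str) -> bool:
    """Check a caller's phrase against the shared secret.

    Normalizes whitespace and case so a spoken phrase matches even if
    transcribed slightly differently. hmac.compare_digest compares in
    constant time, avoiding leaking the secret via timing differences.
    """
    return hmac.compare_digest(
        spoken_phrase.strip().lower().encode(),
        FAMILY_SECRET.strip().lower().encode(),
    )

print(caller_is_verified("The Lake House Raccoon"))  # True: matches after normalization
print(caller_is_verified("please send money now"))   # False: reject, hang up
```

        The point of the analogy: the check is binary and non-negotiable, exactly like the rule you give a family member, and knowledge of the secret is the only thing a scraped voice or photo can’t supply.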

  • The finger test wouldn’t hold up: in real footage, fingers move like actual fingers, rather than blending in and out of existence the way they do in AI-generated footage.

    This reminds me of the product image of a gun that disguises itself as a cell phone. It was never a real product, but US law enforcement uses it to justify shooting people brandishing a cell phone.

  • akd@lemm.ee
    link
    fedilink
    arrow-up
    14
    ·
    1 year ago

    IANAL, but this seems like a stupid argument to try in court, assuming the footage has a good chain of custody.