• brucethemoose@lemmy.world

    It's far worse because LLMs are so data-hungry. Getting quality data for image diffusion models is not nearly as much of an issue, though it's still a problem.

    • YourPrivatHater@ani.social

      You can probably limit the data intake of the image models pretty well; with LLMs it's worse, yes. But since the whole thing is a bubble and an environmental catastrophe, I'm happy if they all fail, for the better, at least the big ones. The open source folks can still tinker on their hobby of course; maybe some day we get an actual AI that is actually useful.

      • brucethemoose@lemmy.world

        Nah. I hate to sound bitter/salty, but all the AI haters are just going to fuel OpenAI’s crusade/lobbying against open source, and we will be stuck with expensive, inefficient, dumb corporate API models trained on copyrighted material in secret because the corporations literally don’t care. And it will do nothing to solve the environmental problems.

        There’s tons of research on making training, and especially inference, more power-efficient, and on making data cleaner and fairer, and it’s getting squandered by the lobbying against open source that the “AI is all bad” crowd is fueling. The money needed to even turn these experiments into usable models is already getting funneled away.

        Everyone’s got it wrong: the battle isn’t AI vs. no AI, it’s whether you own it and run it yourself, or big tech owns it and runs it. You know, like Lemmy vs. Reddit.
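
        For what it’s worth, “run it yourself” can be as simple as loading an open-weight model quantized to 4-bit on a consumer GPU. A minimal sketch using Hugging Face transformers + bitsandbytes (the model name and settings here are just illustrative, not anything specific from this thread):

        # Load an open-weight model quantized to 4-bit so it fits in consumer VRAM.
        # Quantization trades a little accuracy for a big cut in memory and power use.
        from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
        import torch

        model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # any open-weight model works

        quant_config = BitsAndBytesConfig(
            load_in_4bit=True,
            bnb_4bit_compute_dtype=torch.float16,
        )

        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            quantization_config=quant_config,
            device_map="auto",  # place layers on whatever GPU/CPU is available
        )

        prompt = "Why can local inference be more power efficient than an API call?"
        inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
        output = model.generate(**inputs, max_new_tokens=128)
        print(tokenizer.decode(output[0], skip_special_tokens=True))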

        So… that’s my rant.