  • I think that "Whatever", or maybe "content" (derisive), is yet another valuable bridging concept that connects the different threads of how we got here. If "Business Idiots" are the 'who' and "the Rot Economy/Shareholder Supremacy" is the 'why', then "contentification" or "Whateverization" is a huge part of the "how".


  • This ties back into the recurring question of drawing boundaries around "AI" as a concept. Too many people blithely accept that it’s just a specific set of machine learning techniques applied to sufficiently large sets of data. This despite the fact that we’re several AI "cycles" deep, where every 30 years or so (whenever it stops being "retro") some new algorithm or mechanism is definitely going to usher in Terminator 2: Judgment Day.

    This narrow frame focused on LLMs still allows for some discussion of the problems we’re seeing (energy use, training data sourcing, etc.), but it cuts off a lot of the wider conversations about the social, political, and economic causes and impacts of outsourcing the business of being human to a computer.






  • Another winner from Zitron. One of the things I learned working in tech support is that a lot of people tend to assume the computer is a magic black box that relies on terrible, secret magicks to perform its dark alchemy. And while the rabbit hole does go deep, there is a huge difference between the level of information needed to do what I did and the level of information needed to understand what I was doing.

    I’m not entirely surprised that business is the same way, and I hope that in the next few years we have the same epiphany about government. These people want you to believe that you can’t do what they do so that you don’t ask the incredibly obvious questions about why it’s so dumb. At least in tech support I could usually attribute the stupidity to the limitations of computers and misunderstandings from the users. I don’t know what kinda excuse the business idiots and political bullshitters are going to come up with.


  • One of the YouTube comments was actually kind of interesting in trying to think through just how wildly you would need to change the creative process in order to allow for the quirks and inadequacies of this "tool". It really does seem like GenAI is worse than useless for any kind of artistic or communicative project. If you have something specific you want to say or something specific you want to create, the outputs of these tools are not going to be that, no matter how carefully you describe it in the prompt. Not only that, but the underlying process of working in pixels, frames, or tokens natively, rather than as a consequence of trying to create objects, motions, or ideas, means that those outputs are often not even a very useful starting point (a toy sketch of that pixels-versus-objects distinction follows below).

    This basically leaves software development and spam as the only two areas I can think of where GenAI has a potential future, because they’re the only fields where the output being interpretable by a computer is just as, if not more, important than whatever its actual contents are.
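
    To make that pixels-versus-objects point concrete, here is a minimal illustrative sketch (all names and values invented, not any particular tool’s internals): in a scene-graph representation the artist’s intent survives as an editable object, while a raster output offers no such handle.

    ```python
    from dataclasses import dataclass

    # Scene-graph representation: the intent ("a red ball on the left")
    # survives as an object the artist can grab and revise.
    @dataclass
    class Ball:
        x: float
        y: float
        radius: float
        color: str

    scene = [Ball(x=10, y=50, radius=5, color="red")]
    scene[0].color = "blue"  # a one-line, lossless revision

    # Raster representation: the same ball is just pixels. There is no
    # "ball" to select; revising it means repainting a region and hoping
    # edges, lighting, and overlaps still cohere.
    WIDTH, HEIGHT = 64, 64
    pixels = [[(255, 255, 255)] * WIDTH for _ in range(HEIGHT)]

    def paint_ball(ball, canvas, rgb):
        for row in range(HEIGHT):
            for col in range(WIDTH):
                if (col - ball.x) ** 2 + (row - ball.y) ** 2 <= ball.radius ** 2:
                    canvas[row][col] = rgb

    paint_ball(scene[0], pixels, (0, 0, 255))
    # GenAI image models emit the `pixels` layer directly; the `scene`
    # layer, where deliberate revision happens, never exists.
    ```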


  • It’s also a case where I think the lack of intentionality hurts. I’m reminded of the way the YouTube algorithm contributed to radicalization by feeding people steadily more extreme versions of what they had already selected. The algorithm was (and is) just trying to pick the video that you would most likely click on next, but in so doing it ended up pushing people down the sales funnel towards outright white supremacy, because which videos you were shown actually impacted which video you would choose to click next. Of course, since the videos were user-supplied content, creators started taking advantage of that tendency with varying degrees of success, but the algorithm itself wasn’t "secretly fascist" and would, in the same way, push people deeper into other rabbit holes over time, whether that meant obscure horror games, increasingly unhinged rage video collections, or generally everything that was once called "the weird part of YouTube." (A toy sketch of this feedback loop follows at the end of this comment.)

    ChatGPT and other bots don’t have failed academics and comedians trying to turn people into Nazis, but they do have a similar lack of underlying anything, and that means that, unlike a cult with a specific ideology, the bot is always trying to create the next part of the story you most want to hear. We’ve seen versions of this that go down a conspiracy thriller route, a cyberpunk route, a Christian eschatology route, even a romance route. Like, it’s pretty well known that there are 'cult hoppers' who will join a variety of different fringe groups because there’s something about being in a fringe group that they’re attracted to. But there are also people who will never join Scientology, or the Branch Davidians, or CrossFit, but might sign on with Jonestown or QAnon with the right prompting. LLMs, by virtue of trying to predict the next series of tokens rather than actually having any underlying thoughts, will, on a long enough timeframe, lead people down any rabbit hole they might be inclined to follow, and for a lot of people - even otherwise mentally healthy people - that includes a lot of very dark and dangerous places.
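
    Here is a minimal toy simulation of the feedback loop described above. Nothing in it is YouTube’s actual system; the "extremity" axis and the click model are invented assumptions, just enough to show how a pure engagement objective ratchets a viewer toward an extreme with no ideology anywhere in the code.

    ```python
    import random

    # Toy model of an engagement-maximizing recommender. Videos live on
    # an invented "extremity" axis from 0.0 to 1.0, and the recommender's
    # only objective is predicted click-through. The assumed quirk that
    # drives the loop: people are slightly more likely to click something
    # a bit more extreme than whatever they just watched.

    def predicted_ctr(video, last_watched):
        step = video - last_watched
        # Clicks peak when a video sits ~0.05 past the viewer's current
        # position and fall off on either side.
        return max(0.0, 1.0 - abs(step - 0.05) * 10)

    def recommend(candidates, last_watched):
        # Pure engagement maximization: no ideology, just argmax(CTR).
        return max(candidates, key=lambda v: predicted_ctr(v, last_watched))

    random.seed(42)
    position = 0.1  # the viewer starts near the mainstream
    for session in range(20):
        candidates = [random.random() for _ in range(50)]
        position = recommend(candidates, position)
        print(f"session {session:2d}: extremity {position:.2f}")

    # Each pick shifts the baseline the next prediction is measured
    # against, so the viewer ratchets toward 1.0 even though the
    # objective never mentions content. An LLM's next-token objective
    # has the same shape: predict what this context most wants next,
    # where "this context" includes everything the loop has already
    # produced.
    ```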