

I had a straight-up "wait I thought he was back in his hole after being outed" moment. I hate that all the weird little dumbasses we know here keep becoming relevant.
Damn cat just stood on my phone and launched Gemini for the first time, so we can drop Google's monthly active user count by one relative to whatever they claim.
Hah! ampleksi, etendi, estingi (Esperanto: "embrace, extend, extinguish")
Google Translate assures me that this is very funny.
I think that "Whatever", or maybe content(derisive), is yet another valuable bridging concept that connects the different threads of how we got here. If "Business Idiots" are the "who" and "the Rot Economy/Shareholder Supremacy" is the "why", then "contentification" or "Whateverization" is a huge part of the "how".
This ties back into the recurring question of drawing boundaries around "AI" as a concept. Too many people just blithely accept that it's just a specific set of machine learning techniques applied to sufficiently large sets of data. This in spite of the fact that we're several AI "cycles" deep, where every 30 years or so (whenever it stops being "retro") some new algorithm or mechanism is definitely going to usher in Terminator 2: Judgment Day.
This narrow frame focused on LLMs still allows for some discussion of the problems we're seeing (energy use, training data sourcing, etc.) but it cuts off a lot of the wider conversations about the social, political, and economic causes and impacts of outsourcing the business of being human to a computer.
I feel like there's got to be a surreal horror movie in there somewhere. Like an AI-assisted Videodrome or something.
This isn't studying possible questions, this is memorizing the answer key to the test and being able to identify that the answer to question 5 is "17" but not being able to actually answer it when they change the numbers slightly.
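To put the analogy in code, here's a toy Python sketch (every question and answer in it is made up) of the gap between a lookup table that memorized the answer key and something that actually does the math:

```python
import re

# Toy illustration (all questions and answers invented): the difference
# between memorizing the answer key and actually solving the problem.

answer_key = {"What is 8 + 9?": "17"}  # "question 5" on the test

def memorizer(question: str) -> str:
    # Looks up the exact question text; knows nothing about arithmetic.
    return answer_key.get(question, "no idea")

def solver(question: str) -> str:
    # Actually pulls the numbers out and does the math.
    a, b = map(int, re.findall(r"\d+", question))
    return str(a + b)

print(memorizer("What is 8 + 9?"))   # "17" -- looks like it knows math
print(memorizer("What is 8 + 10?"))  # "no idea" -- the numbers changed slightly
print(solver("What is 8 + 10?"))     # "18" -- actually answers the question
```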
God I remember having to cite RFCs at other vendors when I worked in support, and it was never not a pain in the ass to try and find the right line that described the appropriate feature. And then when I was done I knew I sounded like this even as I hit send anyway.
It's kind of a shame to have to downgrade Gary to "not wrong, but kind of a dick" here. Especially because his sneer game as shown at the end there is actually not half bad.
Another winner from Zitron. One of the things I learned working in tech support is that a lot of people tend to assume the computer is a magic black box that relies on terrible, secret magicks to perform its dark alchemy. And while it's not that the rabbit hole doesn't go deep, there is a huge difference between the level of information needed to do what I did and the level of information needed to understand what I was doing.
I'm not entirely surprised that business is the same way, and I hope that in the next few years we have the same epiphany about government. These people want you to believe that you can't do what they do so that you don't ask the incredibly obvious questions about why it's so dumb. At least in tech support I could usually attribute the stupidity to the limitations of computers and misunderstandings from the users. I don't know what kinda excuse the business idiots and political bullshitters are going to come up with.
One of the YouTube comments was actually kind of interesting in trying to think through just how wildly you would need to change the creative process in order to allow for the quirks and inadequacies of this "tool". It really does seem like GenAI is worse than useless for any kind of artistic or communicative project. If you have something specific you want to say or something specific you want to create, the outputs of these tools are not going to be that, no matter how carefully you describe it in the prompt. Not only that, but the underlying process of working in pixels, frames, or tokens natively, rather than as a consequence of trying to create objects, motions, or ideas, means that those outputs are often not even a very useful starting point.
This basically leaves software development and spam as the only two areas I can think of where GenAI has a potential future, because they're the only fields where the output being interpretable by a computer is just as, if not more, important than whatever its actual contents are.
It's also a case where I think the lack of intentionality hurts. I'm reminded of the way the YouTube algorithm contributed to radicalization by feeding people steadily more extreme versions of what they had already selected. The algorithm was (and is) just trying to pick the video that you would most likely click on next, but in so doing it ended up pushing people down the sales funnel towards outright white supremacy, because which videos you were shown actually impacted which video you would choose to click next. Of course, since the videos were user-supplied content, creators started taking advantage of that tendency with varying degrees of success, but the algorithm itself wasn't "secretly fascist" and would, in the same way, push people deeper into other rabbit holes over time, whether that meant obscure horror games, increasingly unhinged rage video collections, or generally everything that was once called "the weird part of YouTube."
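For the record, you don't need any ideology in the loop to get that ratchet. Here's a toy Python simulation (every number in it is invented, and it's nothing like YouTube's actual system) of a recommender that only ever maximizes predicted clicks, where each recommendation feeds back into what the user wants next:

```python
# Toy model of an engagement-maximizing recommender (all numbers invented).
# Videos have an "intensity" score from 0 (mild) to 10 (extreme); the user
# is most drawn to content slightly more intense than what they're used to,
# and watching something shifts what they're used to.

videos = [i * 0.5 for i in range(21)]  # intensities 0.0 through 10.0

def click_probability(taste: float, intensity: float) -> float:
    # Predicted appeal peaks a bit above the user's current taste.
    return max(0.0, 1.0 - abs(intensity - (taste + 0.5)))

taste = 1.0  # the user starts out on mild stuff
for step in range(12):
    # The algorithm isn't "secretly fascist": it just greedily serves
    # whatever this user is most likely to click on right now...
    pick = max(videos, key=lambda v: click_probability(taste, v))
    # ...but what gets served feeds back into what the user wants next.
    taste = 0.8 * taste + 0.2 * pick
    print(f"step {step:2d}: recommended intensity {pick:.1f}, taste now {taste:.2f}")
```

Run it and the recommended intensity climbs step after step, even though nothing in the code "wants" extreme content.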
ChatGPT and the other bots don't have failed academics and comedians trying to turn people into Nazis, but they do have a similar lack of underlying anything, and that means that unlike a cult with a specific ideology they're always trying to create the next part of the story you most want to hear. We've seen versions of this that go down a conspiracy thriller route, a cyberpunk route, a Christian eschatology route, even a romance route. Like, it's pretty well known that there are "cult hoppers" who will join a variety of different fringe groups because there's something about being in a fringe group that they're attracted to. But there are also people who will never join Scientology, or the Branch Davidians, or CrossFit, but might sign on with Jonestown or QAnon with the right prompting. LLMs, by virtue of trying to predict the next series of tokens rather than actually having any underlying thoughts, will, on a long enough timeframe, lead people down any rabbit hole they might be inclined to follow, and for a lot of people - even otherwise mentally healthy people - that includes a lot of very dark and dangerous places.
The folks over at Futurism are continuing to do their damnedest to spotlight the ongoing mental health crisis being spurred by chatbot sycophants.
I think the real problem this poses for OpenAI is that in order to address it they basically need to back out of their entire sales pitch. Like, these are basically people who fully believe the hype, and that belief is pretty clearly part of what's sending them down a very bad road.
That's fucking abominable. I was originally going to ask why anyone would bother throwing their slop on Newgrounds of all sites, but given the business model here I think we can be pretty confident they were hoping to use it to advertise.
Also, fully general bullshit detection question no. 142 applies: if this turnkey game studio works as well as you claim, why are you selling it to me instead of doing it yourself? (Hint: it's because it doesn't actually work.)
I also feel like while it's absolutely true that the whole "we'll make AGI and get a ton of money" narrative was always bullshit (whether or not anyone relevant believed it), it is also another kind of evil. Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves. Like, if they did believe their own hype and weren't grifting their hearts out, then they're a whole different class of monster. From an ethical perspective, the grift narrative lets everyone involved be better people.
Also tell me more about how you don't have a lower-class or nonwhite-coded accent.
The whole list of "improved" sources is a fascinating catalogue of preprints, pop sci(-fi) schlock, and credible-sounding vanity publishers. And even most of those appear to reference "inner alignment" as a small part of some larger thing, which I would expect to merit something like a couple of sentences in other articles. Ideally ones that start with "so there's this one weird cult that believes..."
I'm still allowed to dream, right?
We were joking about this last week if memory serves, but at least one person out there has started a rough aggregator of different sources of pre-AI internet dumps.
It's all gotta be in the models by now, but it's gonna be a cool resource for something, right?
I'm somewhat disappointed by the fair use assessment, since I think calling AI models "transformative" is a bit of a stretch from how that word is normally used, but I also see where the judge is coming from. Would the analytics that go into Google's Ngram word frequency engine be considered infringing? You know, provided we ignore that the fuckers couldn't be bothered to find a single goddamn copy of the book they wanted to feed into the data shredder.
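For anyone who hasn't looked at what that kind of analytics involves: Ngram-style frequency analysis boils the books down to aggregate counts, and none of the original text survives in the output, which is why it feels like a much cleaner fit for "transformative". A minimal Python sketch (the two-line "corpus" is a stand-in, obviously nothing like Google's real pipeline):

```python
from collections import Counter
import re

# Minimal sketch of Ngram-style analytics: the output is aggregate
# counts per year, not any reproduction of the underlying text.
# The corpus here is a toy stand-in for illustration only.

corpus = {
    1999: "the cat sat on the mat",
    2000: "the cat and the other cat sat",
}

def ngram_counts(text: str, n: int = 2) -> Counter:
    # Lowercase, split into words, then count each run of n words.
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(zip(*(words[i:] for i in range(n))))

for year, text in corpus.items():
    print(year, ngram_counts(text).most_common(2))
```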
All this technology and we still haven't gotten past Grease 2.