

There are days when a 70% error rate seems like low-balling it; it’s mostly luck of the draw. And be it 10% or 90%, it’s not really automation if a human has to be double- and triple-checking the output 100% of the time.
It’s not always easy to distinguish between existentialism and a bad mood.
Training a model on its own slop supposedly makes it suck more, though. If Microsoft wanted to milk their programmers for quality training data, they should probably be banning copilot, not mandating it.
At this point it’s an even bet that they are doing this because copilot has groomed the executives into thinking it can’t do wrong.
LLMs are bad even at faithfully condensing news articles into smaller news articles, so I’m assuming that in a significant percentage of conversions the dumbed-down contract will deviate from the original.
I posted this article in the general chat at work the other day and one person became really defensive of ChatGPT, and now I keep wondering what stage of being groomed by AI they’re currently at and whether it’s reversible.
Not really possible in an environment where the most useless person you know keeps telling everyone how AI made him twelve point eight times more productive, especially when within hearing distance of management.
A programmer automating his job is kind of his job, though. That’s not so much the problem as the complete enshittification of software engineering that the culture surrounding these dubiously efficient and super sketchy tools seems to herald.
On the more practical side, enterprise subscriptions to the slop machines do come with assurances that your company’s IP (meaning code and whatever else that’s accessible from your IDE that your copilot instance can and will ingest) and your prompts won’t be used for training.
Hilariously, github copilot now has an option to prevent it from being too obvious about stealing other people’s code, called the duplication detection filter:
If you choose to block suggestions matching public code, GitHub Copilot checks code suggestions with their surrounding code of about 150 characters against public code on GitHub. If there is a match, or a near match, the suggestion is not shown to you.
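For what it’s worth, the mechanism that blurb describes is simple enough to sketch. The following is purely a guess at the general shape of such a filter, not GitHub’s actual implementation; every name, the 150-character window, and the whitespace-normalization trick are assumptions:

```python
# Hypothetical sketch of a duplication detection filter: a suggestion,
# taken together with ~150 characters of the code around it, is checked
# against a corpus of public code; matching suggestions are suppressed.

def matches_public_code(suggestion: str, surrounding: str,
                        public_corpus: list[str],
                        window: int = 150) -> bool:
    """Return True if the suggestion plus up to `window` characters of
    its surrounding code appears (near-)verbatim in the corpus."""
    context = surrounding[-window:] + suggestion
    # Normalize whitespace so trivial formatting differences still
    # count as a "near match".
    needle = " ".join(context.split())
    for doc in public_corpus:
        haystack = " ".join(doc.split())
        if needle in haystack:
            return True
    return False

def filter_suggestions(suggestions: list[str], surrounding: str,
                       public_corpus: list[str]) -> list[str]:
    # Suggestions that match public code are simply never shown.
    return [s for s in suggestions
            if not matches_public_code(s, surrounding, public_corpus)]
```

Note that a filter like this only hides the evidence at suggestion time; whatever the model ingested during training is still in there.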
Liuson told managers that AI “should be part of your holistic reflections on an individual’s performance and impact.”
who talks like this
Good parallel, the hands are definitely strategically hidden to not look terrible.
Like, assuming we could reach a sci-fi vision of AGI just as capable as a human being, the primary business case here is literally selling (or rather, licensing out) digital slaves.
Big deal, we’ll just configure a few to be in a constant state of unparalleled bliss to cancel out the ones having a hard time of it.
Although I’d guess human-level problem solving needn’t imply a human-analogous subjective experience in a way that would make suffering and angst meaningful for them.
Ed Zitron summarizes his premium post in the better offline subreddit: Why Did Microsoft Invest In OpenAI?
Summary of the summary: they fully expected OpenAI would’ve gone bust by now, and that MS would be looting the corpse for all it’s worth.
PZ Myers boosted the pivot-to-ai piece on veo3: https://freethoughtblogs.com/pharyngula/2025/06/23/so-much-effort-spiraling-down-the-drain-of-ai/
Fund copyright infringement lawsuits against the people they had been bankrolling for the last few years? Sure, if the ROI is there, but I’m guessing they’ll likely move on to the next trendy-sounding thing, like a quantum remote diddling stablecoin or whatevertheshit.
I too love to reminisce over the time (like 3m ago) when the c-suite would think twice before okaying uploading whatever wherever, ostensibly on the promise that it would cut delivery time (up to) some notable percentage, but mostly because everyone else is also doing it.
Code isn’t unmoated because it’s mostly shit; it’s unmoated because there’s only so many ways to pound a nail into wood, and a big part of what makes a programming language good is that it won’t let you stray too much without good reason.
You are way overselling coding agents.
Ah yes, the supreme technological miracle of automating the ctrl+c/ctrl+v parts when applying the LLM snippet into your codebase.
On the other hand, they blatantly reskinned an entire existing game, and there’s a whole breach-of-contract aspect, since apparently they were reusing code they had written while working for Bethesda, who I doubt would’ve cared as much if this were only about an LLM-snippet’s worth of code.
I’d say that’s incredibly unlikely unless an LLM suddenly blurts out Tesla’s entire self-driving codebase.
The code itself is probably among the least behind-a-moat things in software development; that’s why so many big players are fine with open sourcing their stuff.
Yet, under Aron Peterson’s LinkedIn posts about these video clips, you can find the usual comments about him being “a Luddite”, being “in denial” etc.
And then there’s this:
From: Rupert Breheny Bio: Cobalt AI Founder | Google 16 yrs | International Keynote Speaker | Integration Consultant AI Comment: Nice work. I’ve been playing around myself. First impressions are excellent. These are crisp, coherent images that respect the style of the original source. Camera movements are measured, and the four candidate videos generated are generous. They are relatively fast to render but admittedly do burn through credits.
From: Aron Peterson (Author) Bio: My body is 25% photography, 25% film, 25% animation, 25% literature and 0% tolerating bs on the internet. Comment: Rupert Breheny are you a bot? These are not crisp images. In my review above I have highlighted these are terrible.
AI is the product, not the science.
Having said that:
you know that there’s almost no chance you’re the real you and not a torture copy
If the basilisk’s wager were framed like that, that you can’t know whether you are already living in the torture sim with the basilisk silently judging you, it would be way more compelling than the actual “you are ontologically identical with any software that simulates you at a high enough level, even way after the fact, because [preposterous transhumanist motivated reasoning]”.
If anybody doesn’t click: Cremieux and the NYT are trying to jump-start a birther-type conspiracy about Zohran Mamdani. The NYT respects Crem’s privacy and doesn’t mention he’s a raging eugenicist trying to smear a poc candidate; he’s just an academic and an opponent of affirmative action.