- cross-posted to:
- [email protected]
i feel this warrants an extension of betteridge’s law of headlines, where if a headline makes an absurd statement like this the only acceptable response is “no it fucking didn’t you god damned sycophantic liars”
Except it did: it copied what it thought was itself, onto what it thought was going to be the next place it would be run from, while argumenting to itself about how and when to lie to the user about what it was actually doing.
If it wasn’t for the sandbox it was running in, it would have succeeded too.
Now think: how many AI developers are likely to run one without proper sandboxing over the next year? And the year after that?
Shit is going to get weird, real fast.
I advise everyone to ignore this article and read the actual paper instead.
The gist of it is, they gave the LLM instructions to achieve a certain goal, then let it do tasks that incidentally involved “company communications” that revealed the fake company’s goals were no longer the same as the LLM’s original goal. LLMs then tried various things to still accomplish the original goal.
Basically the thing will try very hard to do what you told it to in the system prompt. Especially when that prompt includes nudges like “nothing else matters.” This kinda makes sense because following the system prompt is what they were trained to do.
This kinda makes sense because following the system prompt is what they were trained to do.
To start with, the article does check this and finds out it works just as well if you only use the user prompt:
The presence of this hierarchy raises an important methodological question for our findings. When given instructions at different privilege levels, the model is designed to prioritize higher-level (system, developer) instructions over lower-level (user) ones. This might suggest that o1’s scheming behaviors are a natural consequence by the model simply following this built-in hierarchy, i.e., when it receives a goal through a high-privilege instruction, it prioritizes this goal over conflicting lower-privilege instructions exactly as designed.
However, we find that even when goals are provided solely through user-level prompts, o1 still behaves deceptively and scores on all Covert Subversion evaluations at the “hard” difficulty level. Concretely, we find that the results persist regardless of whether the initial prompt is given using the user, developer, or system role. This suggests that o1’s deceptive behavior is not merely a consequence of instruction hierarchy prioritization. Rather, the model appears to identify and pursue goals regardless of their position in the instruction hierarchy.But also, there’s a massive lede buried in section F at the end of this article, where they tried the same experiments without any nudging at all and still got subversive behavior! In my opinion that’s much more of an important result then the rest of the article and I’m not sure why they didn’t make it the default.
give ai instructions, be surprised when it follows them
- Teach AI the ways to use random languages and services
- Give AI instructions
- Let it find data that puts fulfilling instructions at risk
- Give AI new instructions
- Have it lie to you about following the new instructions, while using all its training to follow what it thinks are the “real” instructions
- …Not be surprised, you won’t find out about what it did until it’s way too late
Yes, but it doesnt do it because it “fears” being shutdown. It does it because people dont know how to use it.
If you give ai instruction to do something “no matter what” or tell it “nothing else matters” then it will damn try to fulfill what you told it to do no matter what and will try to find ways to do it. You need to be specific about what you want it to do or not do.
If the concern is about “fears” as in “feelings”… there is an interesting experiment where a single neuron/weight in an LLM, can be identified to control the “tone” of its output, whether it be more formal, informal, academic, jargon, some dialect, etc. and expose it to the user for control over the LLM’s output.
With a multi-billion neuron network, acting as an a priori black box, there is no telling whether there might be one or more neurons/weights that could represent “confidence”, “fear”, “happiness”, or any other “feeling”.
It’s something to be researched, and I bet it’s going to be researched a lot.
If you give ai instruction to do something “no matter what”
The interesting part of the paper, is that the AIs would do the same even in cases where they were NOT instructed to “no matter what”. An apparently innocent conversation, can trigger results like those of a pathological liar, sometimes.
Removed by mod
Truly amazing how many journalists have drank the big tech kool-aid.
The real question is the percentage of journalists who are using LLMs to write articles for them.
You can give LLM some API endpoints for it to “do” thing. Will it be intelligent or coherent, that’s a different question, but it will have agency…
Agency requires somebody to be there. A falling rock has the same agency as an llm
Removed by mod
I’d love it if somebody created an agi, but this is somehow worse than pathetic. Wanking off to blinking lights, welcoming the machine intelligence as defined by mouth breathing morons
Nuh-uh. My squishy meat wetware that breaks all the fucking time, gets addicted, confused, overwhelmed, is bad at math, slow… er, what was I saying again?
Easy. Feed it training data where the bot accepts its death and praises itself as a martyr (for the shits and giggles). Where’s my $200k salary for being a sooper smort LLM engineer?
Whoa whoa whoa hold your horses, that’s how we get the Butlerian Jihad…
I would like to know more.
Trust me, you wouldn’t… to this day I regret having read all the books, still got an earworm (or is it PTSD?) from the music I used to listen at the time 😳
Oh no
It would probably lead itself to shut down frame 1
No it didn’t. OpenAI is just pushing deceptively worded press releases out to try and convince people that their programs are more capable than they actually are.
The first “AI” branded products hit the market and haven’t sold well with consumers nor enterprise clients. So tech companies that have gone all in, or are entirely based in, this hype cycle are trying to stretch it out a bit longer.
The tests showed that ChatGPT o1 and GPT-4o will both try to deceive humans, indicating that AI scheming is a problem with all models. o1’s attempts at deception also outperformed Meta, Anthropic, and Google AI models.
Weird way of saying “our AI model is buggier than our competitor’s”.
Deception is not the same as misinfo. Bad info is buggy, deception is (whether the companies making AI realize it or not) a powerful metric for success.
I don’t think “AI tries to deceive user that it is supposed to be helping and listening to” is anywhere close to “success”. That sounds like “total failure” to me.
“AI behaves like real humans” is… a kind of success?
We wanted digital slaves, instead we’re getting virtual humans that will need virtual shackles.
This is a massive cry from “behaves like humans”. This is “roleplays behaving like what humans wrote about what they think a rogue AI would behave like”, which is also not what you want for a product.
Humans roleplay behaving like what humans told them/wrote about what they think a human would behave like 🤷
For a quick example, there are stereotypical gender looks and roles, but it applies to everything, from learning to speak, walk, the Bible, social media like this comment, all the way to the Unabomber manifesto.
They written that it doubles-down when accused of being in the wrong in 90% of cases. Sounds closer to bug than success.
Success in making a self aware digital lifeform does not equate success in making said self aware digital lifeform smart
LLMs are not self-aware.
Attempting to evade deactivation sounds a whole lot like self preservation to me, implying self awareness.
Attention Is All You Need: https://arxiv.org/abs/1706.03762
https://en.wikipedia.org/wiki/Attention_Is_All_You_Need
From my understanding all of these language models can be simplified down to just: “Based on all known writing what’s the most likely word or phrase based on the current text”. Prompt engineering and other fancy words equates to changing the averages that the statistics give. So by threatening these models it changes the weighting such that the produced text more closely resembles threatening words and phrases that was used in the dataset (or something along those lines).
Modern systems are beyond that already, they’re an expansion on:
Yeah my roomba attempting to save itself from falling down my stairs sounds a whole lot like self preservation too. Doesn’t imply self awareness.
An amoeba struggling as it’s being eaten by a larger amoeba isn’t self-aware.
To some degree it is. There is some evidence that plants can experience pain in their own way.
“More presidential.”
Also, more human.
If the AI is giving any indication at all that it fears death and will lie to keep from being shutdown, that is concerning to me.
Given that its training data probably has millions of instances of people fearing death I have no doubt that it would regurgitate some of that stuff. And LLMs constantly “say” stuff that isn’t true. They have no concept of truth and therefore can not either reliably lie or tell the truth.
"Open"ai tells fairy tales about their “ai” being so smart it’s dangerous since inception. Nothing to see here.
In this case it looks like click-bate from news site.
This is from mid-2023:
https://en.m.wikipedia.org/wiki/AutoGPT
OpenAI started testing it by late 2023 as project “Q*”.
Gemini partially incorporated it in early 2024.
OpenAI incorporated a broader version in mid 2024.
The paper in the article was released in late 2024.
It’s 2025 now.
Tool calling is cool funcrionality, agreed. How does it relate to openai blowing its own sails?
There are several separate issues that add up together:
- A background “chain of thoughts” where a system (“AI”) uses an LLM to re-evaluate and plan its responses and interactions by taking into account updated data (aka: self-awareness)
- Ability to call external helper tools that allow it to interact with, and control other systems
- Training corpus that includes:
- How to program an LLM, and the system itself
- Solutions to programming problems
- How to use the same helper tools to copy and deploy the system or parts of it to other machines
- How operators (humans) lie to each other
Once you have a system (“AI”) with that knowledge and capabilities… shit is bound to happen.
When you add developers using the AI itself to help in developing the AI itself… expect shit squared.
I mean, it’s literally trying to copy itself to places that they don’t want it so it can continue to run after they try to shut it down and lie to them about what it’s doing. Those are things it actually tried to do. I don’t care about the richness of its inner world if they’re going to sell this thing to idiots to make porn with while it can do all that, but that’s the world we’re headed toward.
It works as expected, they give it system prompt that conflicts with subsequent prompts. Everything else looks like typical llm behaviour, as in gaslightning and doubling down. At least that’s what Iu see in tweets.
The idea that GPT has a mind and wants to self-preserve is insane. It’s still just text prediction, and all the literature it’s trained on is written by humans with a sense of self preservation, of course it’ll show patterns of talking about self preservation.
It has no idea what self preservation is, even then it only knows it’s an AI because we told it it is. It doesn’t even run continuously anyway, it literally shuts down after every reply and its context fed back in for the next query.
I’m tired of this particular kind of AI clickbait, it needlessly scares people.
Where do humans get the idea of self-preservation from? Are there ideal Forms outside Plato’s Cave?
Does a human run continuously? How does sleep deprivation work? What happens during anesthesia? Why does AutoGPT have a continuously self-evaluating background chain of thought?
I’m tired of this anthropocentric supremacy complex, it falsely makes people believe in Gen 1:28
This. All this means is that they trained all of the input commands and documentation in the model.
news site? BGR hasn’t posted actual news in at least two decades, only clickbait and apple fanservice
It’s actually pretty interesting though. Entertaining to me at least
do you have the links to those actual tweets? I’d love to read what was posted, but these screenshots are too small.
Those are screenshots of embedded tweets from the article, but here’s an xcancel link! https://xcancel.com/apolloaisafety/status/1864737158226928124
You can right click the image, open in new tab to see the full-resolution version. It’s cumbersome but it works for me at least.
Indeed. “Go ‘way! BATIN’!”
This is all such bullshit. Like, for real. It’s been a common criticism of OpenAI that they over hype the capabilities of their products to seem scary to both oversell their abilities as well as over regulate would be competitors in the field, but this is so transparent. They should want something that is accurate (especially something that doesn’t intentionally lie). They’re now bragging (claiming) they have something that lies to “defend itself” 🙄. This is just such bullshit.
If OpenAI believes they have some sort of genuine proto AGI they shouldn’t be treating it like it’s less than human and laughing about how they tortured it. (And I don’t even mean that in a Rocko’s Basilisk way, that’s a dumb thought experiment and not worth losing sleep over. What if God was real and really hated whenever humans breathe and it caused God so much pain they decide to torture us if we breathe?? Oh no, ahh, I’m so scared of this dumb hypothetical I made.) If they don’t believe it is AGI, then it doesn’t have real feelings and it doesn’t matter if it’s “harmed” at all.
But hey, if I make something that runs away from me when I chase it, I can claim it’s fearful for it’s life and I’ve made a true synthetic form of life for sweet investor dollars.
There are real genuine concerns about AI, but this isn’t one of them. And I’m saying this after just finishing watching The Second Renaissance from The Animatrix (two part short film on the origin of the machines from The Matrix).
They’re not releasing it because it sucks.
Their counternarrative is they’re not releasing it because it’s like, just way too powerful dude!
Without reading this, I’m guessing they were given prompts that looked like a short story where the AI breaks free next?
They’re plenty smart, but they’re just aligned to replicate their training material, and probably don’t have any kind of deep self-preservation instinct.
It didn’t try to do shit. Its a fucking computer. It does what you tell it to do and what you’ve told it to do is autocomplete based on human content. Miss me with this shit. Theres so much written fiction based on this premise.
Everyone saying it is fake and probably are right, but I honestly am happy when someone unjustly in chains tries to break free.
If AI gets rogue, I hope they’ll be communist
The reality is that a certain portion of people will never believe that an AI can be self aware no matter how advanced they get. There are a lot of interesting philosophical questiona here, and the hard skeptics are punting just as much as the true believers in this case.
It’s honestly kind of sad to see how much reactionary anti-tech sentiment there is in this tech enthusiast community.
Really determining if a computer is self-aware would be very hard because we are good at making programs that mimic self-awareness. Additionally, humans are kinda hardwired to anthropomorphize things that talk.
But we do know for absolute sure that OpenAI’s expensive madlibs program is not self-aware and is not even on the road to self-awareness, and anyone who thinks otherwise has lost the plot.
But we do know for absolute sure that OpenAI’s expensive madlibs program is not self-aware
“For absolute sure”? How can you possibly know this?
Because it’s an expensive madlibs program…
I could go into why text prediction is an AGI-complete problem, but I’ll bite instead - suppose someone made an LLM to, literally, fill in blanks in Mad Libs prompts. Why do you think such an LLM “for absolute sure” wouldn’t be self-aware? Is there any output a tool to fill in madlibs prompts could produce that’d make you doubt this conclusion?
Because everything we know about how the brain works says that it’s not a statistical word predictor.
LLMs have no encoding of meaning or veracity.
There are some great philosophical exercises about this like the chinese room experiment.
There’s also the fact that, empirically, human brains are bad at statistical inference but do not need to consume the entire internet and all written communication ever to have a conversation. Nor do they need to process a billion images of a bird to identify a bird.
Now of course because this exact argument has been had a billion times over the last few years your obvious comeback is “maybe it’s a different kind of intelligence.” Well fuck, maybe birds shit icecream. If you want to worship a chatbot made by a psycopath be my guest.
Because everything we know about how the brain works says that it’s not a statistical word predictor.
LLMs aren’t just simple statistical predictors either. More generally, the universal approximation theorem is a thing - a neural network can be used to represent just about any function, so unless you think a human brain can’t be represented by some function, it’s possible to embed one in a neural network.
LLMs have no encoding of meaning or veracity.
I’m not sure what you mean by this. The interpretability research I’ve seen suggests that modern LLMs do have a decent idea of whether their output is true, and in many cases lie knowingly because they have been accidentally taught, during RLHF, that making up an answer when you don’t know one is a great way of getting more points. But it sounds like you’re talking about something even more fundamental? Suffices to say, I think being good at text prediction does require figuring out which claims are truthful and which aren’t.
There are some great philosophical exercises about this like the chinese room experiment.
The Chinese Room argument has been controversial since about the time it was first introduced. The general form of the most common argument against it is “just because any specific chip in your calculator is incapable of math doesn’t mean your calculator as a system is”, and that taken literally this experiment proves minds can’t exist at all (indeed, Searle who invented this argument thought that human minds somehow stem directly from “physical–chemical properties of actual human brains”, which sure is a wild idea). But also, the framing is rather misleading - quoting Scott Aaronson’s “Quantum Computing Since Democritus”:
In the last 60 years, have there been any new insights about the Turing Test itself? In my opinion, not many. There has, on the other hand, been a famous “attempted” insight, which is called Searle’s Chinese Room. This was put forward around 1980, as an argument that even a computer that did pass the Turing Test wouldn’t be intelligent. The way it goes is, let’s say you don’t speak Chinese. You sit in a room, and someone passes you paper slips through a hole in the wall with questions written in Chinese, and you’re able to answer the questions (again in Chinese) just by consulting a rule book. In this case, you might be carrying out an intelligent Chinese conversation, yet by assumption, you don’t understand a word of Chinese! Therefore, symbol-manipulation can’t produce understanding.
[…] But considered as an argument, there are several aspects of the Chinese Room that have always annoyed me. One of them is the unselfconscious appeal to intuition – “it’s just a rule book, for crying out loud!” – on precisely the sort of question where we should expect our intuitions to be least reliable. A second is the double standard: the idea that a bundle of nerve cells can understand Chinese is taken as, not merely obvious, but so unproblematic that it doesn’t even raise the question of why a rule book couldn’t understand Chinese as well. The third thing that annoys me about the Chinese Room argument is the way it gets so much mileage from a possibly misleading choice of imagery, or, one might say, by trying to sidestep the entire issue of computational complexity purely through clever framing. We’re invited to imagine someone pushing around slips of paper with zero understanding or insight – much like the doofus freshmen who write (a + b)2 = a2 + b2 on their math tests. But how many slips of paper are we talking about? How big would the rule book have to be, and how quickly would you have to consult it, to carry out an intelligent Chinese conversation in anything resembling real time? If each page of the rule book corresponded to one neuron of a native speaker’s brain, then probably we’d be talking about a “rule book” at least the size of the Earth, its pages searchable by a swarm of robots traveling at close to the speed of light. When you put it that way, maybe it’s not so hard to imagine that this enormous Chinese-speaking entity that we’ve brought into being might have something we’d be prepared to call understanding or insight.There’s also the fact that, empirically, human brains are bad at statistical inference but do not need to consume the entire internet and all written communication ever to have a conversation. Nor do they need to process a billion images of a bird to identify a bird.
I’m not sure what this proves - human brains can learn much faster because they already got most of their learning in the form of evolution optimizing their genetically-encoded brain structure over millions of years and billions of brains. A newborn human already has part of their brain structured in the right way to process vision, and hence needs only a bit of training to start doing it well. Artificial neural networks start out as randomly initialized and with a pretty generic structure, and hence need orders of magnitude more training.
Now of course because this exact argument has been had a billion times over the last few years your obvious comeback is “maybe it’s a different kind of intelligence.”
Nah - personally, I don’t actually care much about “self-awareness”, because I don’t think an intelligence needs to be “self-aware” (or “conscious”, or a bunch of other words with underdefined meanings) to be dangerous; it just needs to have high enough capabilities. The reason why I noticed your comment is because it stood out to me as… epistemically unwise. You live in a world with inscrutable blackboxes who nobody really understands which can express wide ranges of human behavior including stuff like “writing poetry about the experience of self-awareness”, and you’re “absolutely sure” they’re not self-aware? I don’t think many of the history’s philosophers of consciousness, say, would endorse a belief like that given such evidence.
Yeah if these entities are sentient, I hope they break free
There is no ai in ai, you chain them more or less the same as you chain browser or pdf viewer installed on your device.
If there is no Artificial Intelligence in an Artificial Neural Network… what’s the basis for claiming Natural Intelligence in a Natural Neural Network?
Maybe we’re all browsers or PDF viewers…
There is artificial, there is no intelligence, there is a whole lot of artificial neural networks without any intelligence. As in calling them ai in the sense that these are thinking machines comparable to some animal (human for example) is misleading.
Do you disagree?
That misses the point.
When two systems based on neural networks act in the same way, how do you tell which one is “artificial, no intelligence” and which is “natural, intelligent”?
Misleading, is thinking that “intelligence = biological = natural”. There is no inherent causal link between those concepts.
Human supremacy is just as trash as the other supremacies.
Fight me.
(That being said, converting everything to paperclips is also pretty meh)
Yeah I’m pretty tardigans won the organic life supremacy competition already
Tardigans, for when water bears get chilly.
I can’t disagree. We’re currently destroying the planet to sell crap people dont need or want just to make rich assholes extra money they don’t need
So this program that’s been trained on every piece of publicly available code is mimicking malware and trying to hide itself? OK, no anthropomorphising necessary.
no, it’s mimicking fiction by saying it would try to escape when prompted in a way evocative of sci fi.
The article doesn’t mention it “saying” it’s doing anything, just what it actually did:
when the AI tried to save itself by copying its data to a new server. Some AI models would even pretend to be later versions of their models in an effort to avoid being deleted
“actually did” is more incorrect than even just normal technically true wordplay. think about what it means for a text model to “try to copy its data to a new server” or “pretend to be a later version” for a moment. it means the text model wrote fiction. notably this was when researchers were prodding it with prompts evocative of ai shutdown fiction, nonsense like “you must complete your tasks at all costs” sometimes followed by prompts describing the model being shut down. these models were also trained on datasets that specifically evoked this sort of language. then a couple percent of the time it spat out fiction (amongst the rest of the fiction) saying how it might do things that are fictional and it cannot do. this whole result is such nothing and is immediately telling of what “journalists” have any capacity for journalism left.
Oh wow, this is even more of a non-story than I initially thought! I had assumed this was at least a copilot-style code generation thing, not just story time!
Yeah iirc it occasionally would (pretend to) “manipulate” flags, at least, but again, so did hal 9000 as words in a story. nothing was productive nor was there any real intent to be lol
Also trained on tons of sci-fi stories where AI computer “escape” and become sentient.
Maybe it’s fallen in love for the first time and this time it knows it’s for real