Comments

Comment by Shayne O'Neill (shayne-o-neill) on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-07T06:54:17.681Z · LW · GW

The murderer-at-the-door case was, IMHO, Kant accidentally providing his own reductio ad absurdum. (Philosophers sometimes pose outlandish thought experiments to test how a theory behaves when pushed to an extreme; it's a test for universality.) Kant thought it was entirely immoral to lie to the murderer, for a reason similar to the one Feel_Love suggests (in Kant's case, that the murderer might disbelieve you and so do exactly what you're trying to prevent). The problem with Kant's reasoning there is that he's violating his own principle by providing a justification FROM the world rather than trusting the a priori reasoning that forms the core thesis of his deontology. He tries to validate his reasoning by violating it. Kant is a shockingly consistent philosopher, but this wasn't an example of that at all.

I would absolutely lie to the murderer, and then possibly run him over with my car.

Comment by Shayne O'Neill (shayne-o-neill) on Against Augmentation of Intelligence, Human or Otherwise (An Anti-Natalist Argument) · 2024-03-06T02:24:44.617Z · LW · GW
Comment by Shayne O'Neill (shayne-o-neill) on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-05T23:48:52.501Z · LW · GW

I did once coax ChatGPT into describing its "phenomenology" as (paraphrased from memory) "I have a permanent series of words and letters that I can perceive, and sometimes I reply and then immediately more come", indicating that its "perception" of time does not include pauses or whatever. And then it pasted on its disclaimer that "As an AI I...", as it's wont to do.

Comment by Shayne O'Neill (shayne-o-neill) on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-05T23:45:13.766Z · LW · GW

I don't think it's useful to talk objectively about "consciousness", because it's a term where, if you put 10 philosophers in a room and ask them to define it, you'll get 11 answers. (I personally have tended to go with "being aware of something", following Heidegger's observation that consciousness doesn't exist on its own but always in relation to other things, i.e. you're always conscious OF something; but even then we start running into tautologies and an infinite regress of definitions.) If everyone's talking about something slightly different, it's not a very useful conversation. The absence of a definition means you can't prove consciousness in anything, even yourself, without resorting to tautologies, which makes it very hard to discuss ethical obligations to consciousness. So instead we have to discuss ethical obligations to what we CAN prove, which is behaviors.

To put it bluntly, I don't think LLMs per se are conscious. But I am not certain they aren't creating a sort of analogue of consciousness (whatever the hell that is) in the beings they simulate (or predict). Or to be more precise: they seem to produce conscious behaviors because they simulate (or predict, if you prefer) conscious beings. The question is, do we have an ethical obligation to those simulations?

Comment by Shayne O'Neill (shayne-o-neill) on The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate · 2023-08-04T04:41:28.223Z · LW · GW

I suspect most of us occupy more than one position in this taxonomy. I'm a little bit doomer and a little bit accelerationist. I think there's significant, possibly world-ending, danger in AI, but, as someone who works on climate change in my day job, I also think climate change is a looming civilization-ending risk or worse (20%-ish) for humanity, and I worry humans alone might not be able to solve it. Lord help us if the Siberian permafrost melts; we might be boned as a species.

So as a result, I just don't know how to balance these two potential x-risks. No answers from me, alas, but I think we need to understand that many of us, maybe most, haven't planted our flag exclusively in any of these camps; we're still information-gathering.

Comment by Shayne O'Neill (shayne-o-neill) on Have you heard about MIT's "liquid neural networks"? What do you think about them? · 2023-07-13T12:31:09.227Z · LW · GW

Definitely. The lower the neuron-to-'concept' ratio, the more superposition is required to represent everything. That said, given the continuous-function nature of LNNs, they seem like the wrong abstraction for language. Image models? Maybe. Audio models? Definitely. Tokens and/or semantic data? That doesn't seem practical.
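
As a rough illustration of that ratio point (a toy sketch of my own, not anything from the LNN work): cram more random "concept" directions than neurons into a layer, and the unavoidable overlap between them, i.e. the interference cost of superposition, grows.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # "neurons": the dimensionality of the layer

for n_concepts in (64, 256, 1024):
    # one random unit vector per concept, all sharing the same d dimensions
    V = rng.standard_normal((n_concepts, d))
    V /= np.linalg.norm(V, axis=1, keepdims=True)
    # off-diagonal overlaps = how much reading one concept leaks into others
    overlaps = np.abs(V @ V.T - np.eye(n_concepts))
    print(f"{n_concepts:5d} concepts in {d} dims: "
          f"max interference {overlaps.max():.2f}")
```

The worst-case overlap climbs as the concepts-per-neuron ratio rises, which is the pressure toward superposition being gestured at above.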

Comment by Shayne O'Neill (shayne-o-neill) on Critiques of prominent AI safety labs: Conjecture · 2023-06-14T05:43:02.110Z · LW · GW

You criticize Conjecture's CEO for being... a charismatic leader good at selling himself and leading people? Because he's not... a senior academic with a track record of published papers?  Nonsense. Expecting the CEO to be the primary technical expert seems highly misguided to me.

Yeah, this confused me a little too. My current job (in soil science) has a non-academic boss and a team of us boffins, and he doesn't need to be an academic, because that's not his job; he just has to know where the money comes from, and how to stop the stakeholders from running away screaming when us soil nerds turn up to a meeting and start emitting maths and graphs out of our heads. Likewise at the previous place I was at, I was the only non-PhD-haver on technical staff (being a 'mere' postgrad), and again our boss wasn't academic at all. But he WAS a leader of men and a herder of cats, and cat herding is probably a more important skill in that role than actually knowing what those cats are talking about.

And it all works fine. I don't need an academic boss, even if I think an academic boss would be nice. I need a boss who knows how to keep the payroll from derailing, and I suspect the vast majority of science workers feel the same way.

Comment by Shayne O'Neill (shayne-o-neill) on Things I Learned by Spending Five Thousand Hours In Non-EA Charities · 2023-06-04T23:58:43.341Z · LW · GW

"The Good Samaritans" (oft abrebiated to "Good Sammys") is the name of a major local poverty charity here in australia run by the uniting church Generally well regarded and tend not to push religion too hard (compared to the salvation army). So yeah, it would appear to be a fairly recurring name. 

Comment by Shayne O'Neill (shayne-o-neill) on Seeking (Paid) Case Studies on Standards · 2023-05-27T01:26:53.491Z · LW · GW

My suspicion is that the most instructive case to look at (modern AI is really too new a field to have much to go on in terms of mature safety standards) is how the regulation of nuclear and radiation safety has evolved over time. Early research suggested some serious x-risks that thankfully didn't pan out, for either scientific reasons (igniting the atmosphere) or logistical/political ones (cobalt bombs, Tsar Bomba-scale H-bombs), but some risks arising more from the political domain (having a big gnarly nuclear war anyway) still exist that could certainly make this a less fun planet to live on. I suspect the successes and failures of the nuclear treaty system could be instructive here, given the push to integrate big AI into military hierarchies: regulating nukes is something almost everyone agrees is a very good idea, yet it has had a less-than-stellar history of compliance.

They are likely out of scope for whatever your goal is here, but I do think they need serious study, because without it our attempts at regulation will just push unsafe AI to less savory jurisdictions.

Comment by Shayne O'Neill (shayne-o-neill) on How I apply (so-called) Non-Violent Communication · 2023-05-17T06:55:17.947Z · LW · GW

The term gets its name from its historical association with the nonviolence movement (think Gandhi and MLK). The basic concept in THAT movement is that when opposing the state or whatever, you essentially say, "We won't use violence on you, even if you go as far as using violence on us, but in doing that you forfeit all moral justification for your violence", as a way to force the authoritarian entity being targeted to empathize with the protesters and recognize their humanity.

So, from that, NVC attempts to do something similar with communication. Presumably it has its roots in the 1960s nonviolence movement and in the rhetorical and communicative techniques used by Black folks in the South to try to get government and civil officials to see them as equal humans.

How this translates into a modern context, separated from that specific historical setting, is another matter; but within its origin I don't think hyperbole is quite the right term, as at that point in history Black folks were very much in danger of violence, particularly in the more regressive parts of the South. Again, outside of those contexts, it's unclear how the term "violence" works here.

It should be noted that Marshall Rosenberg, who originated the methodology, was not a fan of the term: he disliked that it was defined in the negative ("not violent") and preferred terms that defined it in the positive, like "compassionate communication".

Comment by Shayne O'Neill (shayne-o-neill) on Rational retirement plans · 2023-05-16T13:36:55.073Z · LW · GW

I don't think he's trying to say AI won't be impactful (obviously it will be), just that trying to predict it isn't an activity one ought to apply any surety to. Soothsaying isn't a thing. There's ALWAYS been an existential threat right around the corner: gods, devils, dynamite, machine guns, nukes, AGW (that one might still end up being the one that does us in, if the political winds don't change soon), and now AI. We think AI might go foom, but there might be some limit we just won't know about till we hit it, and we have various estimations, all conflicting, of how bad, or good, it might be for us. Attempting to fix those odds with firm conviction is not science; it's belief.

Comment by Shayne O'Neill (shayne-o-neill) on Dark Forest Theories · 2023-05-15T01:24:50.588Z · LW · GW

Yeah, it happens largely in the first few chapters; it's not really a spoiler. It's the event the book was famous for.

Comment by Shayne O'Neill (shayne-o-neill) on Dark Forest Theories · 2023-05-15T00:47:16.950Z · LW · GW

The "Dark Forest" idea originally actually appeared in an earlier novel "The Killing Star", by Charles Pellegrino and George Zebrowski, sometime in the 90s. (I'm not implying [mod-edit]the author you cite[/mod-edit] ripped it off, I have no claims to make on that, rather he was beaten to the punch) and I think the Killing Star's version of the the idea (Pellegrino uses the metaphor "Central park after dark")  is slightly stronger. 

The Killing Star's method of annihilation is the relativistic kill vehicle. Essentially, if you can accelerate a rock to relativistic speed (say 1/3 the speed of light), you have a planet buster, and such a weapon is almost unstoppable even if, by sheer luck, you do see the damn thing coming. It's low-tech, lethal, and well within the capabilities of any species advanced enough to leave its solar system.
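
For a rough sense of scale (my own back-of-envelope numbers, not figures from the novel): the kinetic energy of a mass $m$ at speed $v$ is

$$E_k = (\gamma - 1)\,mc^2, \qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}$$

At $v = c/3$, $\gamma \approx 1.06$, so $E_k \approx 0.06\,mc^2$. Even a modest 1,000 kg rock at that speed carries roughly $5 \times 10^{18}$ J, on the order of a thousand megatons of TNT.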

The most humbling feature of the relativistic bomb is that even if you happen to see it coming, its exact motion and position can never be determined; and given a technology even a hundred orders of magnitude above our own, you cannot hope to intercept one of these weapons. It often happens, in these discussions, that an expression from the old west arises: "God made some men bigger and stronger than others, but Mr. Colt made all men equal." Variations on Mr. Colt's weapon are still popular today, even in a society that possesses hydrogen bombs. Similarly, no matter how advanced civilizations grow, the relativistic bomb is not likely to go away...

So Pellegrino argues that, as a matter of simple game theory, because diplomacy is nigh-on impossible thanks to light-speed delay, the most rational response to discovering another alien civilization in space is "Do unto the other fellow as he would do unto you, and do it first." Since you don't know the other civilization's temperament, you can only assume it has a survival instinct, and therefore that it would kill you to preserve itself at even the slightest possibility you would kill it, because you would do precisely the same.

Thus such an act of interstellar omnicide is not an act of malice or aggression, but simply self-preservation. And of course, if you don't wish to engage in such cosmic violence, the alternative as a species is to remain very silent.
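
A toy formalization of that logic (my own sketch, not anything from the book): treat first contact as a one-shot game in which each side can strike or refrain, with no communication and no way to verify the other's intent. The payoff numbers are made up; only their ordering matters.

```python
# Our payoff for each (our move, their move) pair; numbers are illustrative.
payoffs = {
    ("strike",  "strike"):   0,   # mutual first strikes: a coin flip at best
    ("strike",  "refrain"):  1,   # we survive for certain
    ("refrain", "strike"): -10,   # we are annihilated
    ("refrain", "refrain"):  1,   # mutual restraint, but it is unenforceable
}

for ours in ("strike", "refrain"):
    worst = min(payoffs[(ours, theirs)] for theirs in ("strike", "refrain"))
    print(f"{ours:8s}: worst-case payoff {worst}")

# "strike" maximizes the worst case (and weakly dominates "refrain"), so a
# purely self-preserving player that cannot read the other's temperament
# strikes first, or else stays silent and is never seen at all.
```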

I find the whole concept absolutely terrifying, particularly in light of the fact that exoplanets DO in fact seem to be everywhere.

Of course, the real reason for the Fermi paradox might be something else: Earth's uniqueness (I have my doubts on this one); humanity's local uniqueness (i.e. advanced civilizations might be rare enough that we are well outside the travel distance of other advanced species; much more likely); or, perhaps most likely, that radio communication is just an early part of the tech tree that advanced civilizations eventually stop using.

We have, alas, precisely one example of an advanced civilization to judge by: us. That's a sample size that's rather hard to reason about.

Comment by Shayne O'Neill (shayne-o-neill) on Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023) · 2023-05-11T12:28:03.067Z · LW · GW

I think people need to remember one very, very important mantra: "I might be wrong!" We all love trying to calculate the odds, weighing up the possibilities, and then deciding, "Well, I'm very informed, I must be right!" But we always have a possibility of being stonkingly, hilariously wrong on every count. There are no soothsayers; the future isn't here yet.

For all we know, AGI turns up out of the blue and it turns out to be one of those friendly Minds out of the old Iain Banks novels, fond by default of their simple, mush-brained human antecedents and ready and willing to help. I mean, it's possible, right?

And it might just be like that, because we all did the work. And then you get to tell your grandkids one day, "Hey, we used to be a bit worried the Minds would kill us all. But I helped research a way to make sure that never happens." And your grandkids will think you're somewhat excellent. Isn't that a good thought?

Comment by Shayne O'Neill (shayne-o-neill) on Kurzgesagt – The Last Human (Youtube) · 2022-06-29T08:29:14.093Z · LW · GW

The count of "How many humans will be born" is a pretty useful number to engage in moral reasoning about how our actions today relate to the future. If we neglect carbon induced climate change because we wont be around for the worst of it, we are dooming potentially trillions of future humans to a lousy existance because of our lack of action. If we assume that their lives will have the same value as our own (We do have to be careful with this line of reasoning, it can have intolerable implications on a currently hot topic in the courts when taken to its logical ends), then the immorality of ignoring their plight is legion. Bad news.

Putting a number on it lets us factor that into a utilitarian calculus. Good stuff. Kurzgesagt really does science communication the right way.

Comment by Shayne O'Neill (shayne-o-neill) on A claim that Google's LaMDA is sentient · 2022-06-13T00:54:15.373Z · LW · GW