Comments

Comment by Shayne O'Neill (shayne-o-neill) on How I got 4.2M YouTube views without making a single video · 2024-10-14T03:45:00.272Z · LW · GW

Potentially. Keep in mind, however, that these guys get a LOT of email from fans asking them to talk about various things. (One of the funnier examples: a Facebook group I'm in for fans of English prog band Cardiacs decided to launch a campaign to get music YouTuber Rick Beato to cover the band. He was spammed so hard that he apparently lost his temper at them. Needless to say, Mr Beato has not covered Cardiacs.) A smarter approach would probably be to go through their management, whose job it is to handle this sort of thing; you might get a better result. Also, don't forget the social media channels. Twitter, uh, X or whatever it's called this week, does offer a conduit where directly approaching media figures is a little more normalized.

Comment by Shayne O'Neill (shayne-o-neill) on Navigating LLM embedding spaces using archetype-based directions · 2024-07-24T06:31:13.510Z · LW · GW

Ok. I must have missed this reply; my apologies for the late response.

There are elements of how embedding spaces work that parallel the way studies of semiotics suggest human meaning production works. Similarities cluster, differences define clear boundaries of meaning, and so on.

The reason I suggest literary theory is that it's a widely documented field of study with academic standards, and one that is more strongly aware of how meanings and associations map onto cultural cohorts (e.g. tarot symbols would be meaningless to Chinese readers, whereas the I Ching might be more meaningful to them). However, literary theory is more interested in the structures of those meanings, with fundamental units like metaphor, metonym, opposition, category, and so on.

Comment by Shayne O'Neill (shayne-o-neill) on UFO Betting: Put Up or Shut Up · 2024-07-24T06:24:47.545Z · LW · GW

I'm assuming it's due to those silly congressional UFO hearings. Not that I can speak on behalf of RatsWrong, but I assume that's his thinking.

Comment by Shayne O'Neill (shayne-o-neill) on UFO Betting: Put Up or Shut Up · 2024-07-24T06:22:48.043Z · LW · GW

Unless, of course, those UAPs turn up, and don't have biological organisms in them, in which case we'd have the possibility that another civilization developed AI and it went poorly.

...or it is biological and we end up in a situation like The Three-Body Problem or The Killing Star, where the saucer fiends decide to gank us because humans are kinda violent and too dangerous to keep around.

All those superintelligence-as-danger arguments apply to biological superintelligences too.

But most likely: there are no damn UFOs, and the laws of physics with their big ugly light-speed prohibition still hold.

Comment by Shayne O'Neill (shayne-o-neill) on Raising children on the eve of AI · 2024-07-24T06:14:19.525Z · LW · GW

I'm not even remotely prepared to state my odds on any of what's ahead, because I'm genuinely mystified as to where this road goes. I'm a little less worried the AI would go haywire than I was a few years ago, because the current trajectory of LLM-based AIs seems to generate AI that emulates humans more often than it emulates raging deathbots. But I'm not a seer, and I'm not an AI specialist, so I'm entirely comfortable with the possibility that I haven't got a clue there. All I know is we ought to try to figure it out, just in case this thing does go sour.

What I do think is highly likely is a world that doesn't need me, or any of the university-trained folk and family that I grew up with, in the economy, and eventually any of the more working-class folk either. This is either going to go completely awfully (we know from history that when the middle class vanishes things get ugly, and we're seeing elements of that right now), or we scrape our way through that ugliness and actually create a post-work society that doesn't abandon those who didn't own capital. (I think it has to be that way. Throw the vast majority into abject poverty and things start getting lit on fire. There's really only one destination here, and folks ain't gonna tolerate an Elysium-style dystopia without a fight.)

So with that in mind, why send the kids to school? Because if the world doesn't need our bodies, we're gonna have to find other things to do with our minds. Sci-fi has a few suggestions here. In Iain Banks's Culture novels, the humans of the Culture are somewhat surplus to requirements for the productive economy. The Minds (the superpowered AIs that administer the Culture) and their drones have the hard work covered, so the humans spend their time in leisurely and intellectual pursuits. Education is a primary activity for the citizenry, pursued for its own sake. This to me is somewhat reminiscent of the historical role of universities, which saw education, study, and research as valuable in themselves. The philosophers of old were not philosophers so they could increase grain production, but because they wanted to grasp the nature of the universe, understand the gods (or lack thereof, in later incarnations), and improve the ethical world they lived in. Seems like there's still going to be a role for that.

Comment by Shayne O'Neill (shayne-o-neill) on Navigating LLM embedding spaces using archetype-based directions · 2024-05-09T02:41:23.497Z · LW · GW

While the use of tarot archetypes is... questionable... it does point at an angle for exploring embedding space: it is a fundamentally semiotic space. It's going, in many respects, to be structured by the texts that fed it, and human text is richly symbolic.

That said, there's a preexisting set of ideas around this that might be more productive, and that is structuralism, particularly the works of Lévi-Strauss, Roland Barthes, and Lacan, and more distantly Foucault and Derrida.

Lévi-Strauss's anthropology in particular is interesting, because it looked at human mythologies and tried to find the structuring principles underlying them, particularly the "dialectics", the oppositions, and how these provided a sort of deep structure to mythology that was common across humanity. (For instance, Lévi-Strauss noted "trickster" archetypes across cultures and proposed that these formed a way of interrogating blurred oppositions, for instance sickness as a state that has aspects of both life (dead things can't be sick) and death (a sick person is not, rhetorically, "full of life").)

Essentially, what I'm getting at is that this sort of analysis likely works with any symbolic system that has resonated with human thinking over time. The problem with tarot is that it applies to a specifically European circumstance of meaning production. Astrology probably works just as well. Literary analysis, however, probably works dramatically better. It might be worth looking at the work of literary critics, particularly the structuralists, who were very interested in ontologies of symbolic meaning; that might provide a better toolkit than tarot.
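Whatever symbol system supplies the poles, the direction-finding machinery itself is simple to sketch. A toy version, with synthetic vectors standing in for real model embeddings (the clusters, dimension, and labels here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 64

# Stand-ins for embeddings of two contrasting concept sets,
# e.g. "life"-flavoured vs "death"-flavoured texts.
proto_a = rng.normal(size=dim)
proto_b = rng.normal(size=dim)
cluster_a = proto_a + 0.1 * rng.normal(size=(20, dim))
cluster_b = proto_b + 0.1 * rng.normal(size=(20, dim))

# An "archetype direction" is just the normalized difference of cluster means.
direction = cluster_a.mean(axis=0) - cluster_b.mean(axis=0)
direction /= np.linalg.norm(direction)

def score(vec: np.ndarray, direction: np.ndarray) -> float:
    """Project a vector onto the direction: positive leans 'a', negative 'b'."""
    return float(np.dot(vec, direction))

print(score(proto_a, direction))  # positive: leans toward cluster a
print(score(proto_b, direction))  # negative: leans toward cluster b
```

Swapping tarot for structuralist oppositions would only change which texts define `cluster_a` and `cluster_b`; the geometry is the same.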

Comment by Shayne O'Neill (shayne-o-neill) on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-07T06:54:17.681Z · LW · GW

The murderer-at-the-door thing IMHO was Kant accidentally providing his own reductio ad absurdum. (Philosophers sometimes pose outlandish extreme thought experiments to test how a theory works when pushed to a limit; it's a test for universality.) Kant thought it entirely immoral to lie to the murderer, for a similar reason to the one Feel_Love suggests (in Kant's case, that the murderer might disbelieve you and instead do exactly what you're trying to get him not to do). The problem with Kant's reasoning there is that he's violating his own moral reasoning principle by providing a justification FROM the world rather than trusting the a priori reasoning that forms the core thesis of his deontology. He tries to validate his reasoning by violating it. Kant is a shockingly consistent philosopher, but this wasn't an example of that at all.

I would absolutely lie to the murderer, and then possibly run him over with my car.

Comment by Shayne O'Neill (shayne-o-neill) on Against Augmentation of Intelligence, Human or Otherwise (An Anti-Natalist Argument) · 2024-03-06T02:24:44.617Z · LW · GW
Comment by Shayne O'Neill (shayne-o-neill) on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-05T23:48:52.501Z · LW · GW

I did once coax ChatGPT into describing its "phenomenology" as (paraphrased from memory) "I have a permanent series of words and letters that I can perceive, and sometimes I reply and then immediately more come", indicating its "perception" of time does not include pauses or whatever. And then it pasted on its disclaimer that "As an AI I...", as it's wont to do.

Comment by Shayne O'Neill (shayne-o-neill) on Claude 3 claims it's conscious, doesn't want to die or be modified · 2024-03-05T23:45:13.766Z · LW · GW

I don't think it's useful to objectively talk about "consciousness", because it's a term where, if you put 10 philosophers in a room and ask them to define it, you'll get 11 answers. (I personally have tended to go with "being aware of something", following Heidegger's observation that consciousness doesn't exist on its own but always in relation to other things, i.e. you're always conscious OF something. But even then we start running into tautologies and an infinite regress of definitions.) If everyone's talking about something slightly different, it's not a very useful conversation. The absence of a shared definition means you can't prove consciousness in anything, even yourself, without resorting to tautologies, which makes it very hard to discuss ethical obligations to consciousness. So instead we have to discuss ethical obligations to what we CAN prove, which is behaviors.

To put it bluntly, I don't think LLMs per se are conscious. But I am not certain they aren't creating a sort of analog of consciousness (whatever the hell that is) in the beings they simulate (or predict). Or to be more precise, an LLM seems to produce conscious behaviors because it simulates (or predicts, if you prefer) conscious beings. The question is: do we have an ethical obligation to those simulations?

Comment by Shayne O'Neill (shayne-o-neill) on The "public debate" about AI is confusing for the general public and for policymakers because it is a three-sided debate · 2023-08-04T04:41:28.223Z · LW · GW

I suspect most of us occupy more than one position in this taxonomy. I'm a little bit doomer and a little bit accelerationist. I think there's significant, possibly world-ending, danger in AI, but I also think, as someone who works on climate change in my day job, that climate change is a looming civilization-ending risk or worse (20%-ish) for humanity, and I worry humans alone might not be able to solve this thing. Lord help us if the Siberian permafrost melts; we might be boned as a species.

So as a result, I just don't know how to balance these two potential x-risks. No answers from me, alas, but I think we need to understand that many of us, maybe most of us, haven't really planted our flag in any of these camps exclusively; we're still information gathering.

Comment by Shayne O'Neill (shayne-o-neill) on Have you heard about MIT's "liquid neural networks"? What do you think about them? · 2023-07-13T12:31:09.227Z · LW · GW

Definitely. The lower the neuron-to-"concepts" ratio is, the more superposition is required to represent everything. That said, given the continuous-function nature of LNNs, they seem to be the wrong abstraction for language. Image models? Maybe. Audio models? Definitely. Tokens and/or semantic data? That doesn't seem practical.
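The neuron-to-concept point has a quick toy illustration (dimensions here are arbitrary, not tied to any particular architecture): random unit vectors in d dimensions stay nearly orthogonal even when you pack in far more than d of them, which is what lets a layer carry more features than it has neurons, at the cost of interference.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_concepts = 256, 1000   # far more "concepts" than neurons

# Random unit vectors: a crude stand-in for features stored in superposition.
feats = rng.normal(size=(n_concepts, dim))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)

# Off-diagonal cosine similarities measure interference between features.
cos = feats @ feats.T
interference = np.abs(cos[~np.eye(n_concepts, dtype=bool)])
print(f"mean interference: {interference.mean():.3f}")   # small
print(f"worst interference: {interference.max():.3f}")   # larger, but well below 1
```

Shrink `dim` relative to `n_concepts` and the interference climbs, which is the intuition behind "lower ratio, more superposition required".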


Comment by Shayne O'Neill (shayne-o-neill) on Critiques of prominent AI safety labs: Conjecture · 2023-06-14T05:43:02.110Z · LW · GW

You criticize Conjecture's CEO for being... a charismatic leader good at selling himself and leading people? Because he's not... a senior academic with a track record of published papers?  Nonsense. Expecting the CEO to be the primary technical expert seems highly misguided to me.


Yeah, this confused me a little too. My current job (in soil science) has a non-academic boss and a team of us boffins, and he doesn't need to be an academic, because that's not his job; he just has to know where the money comes from, and how to stop the stakeholders from running away screaming when us soil nerds turn up to a meeting and start emitting maths and graphs out of our heads. Likewise, at the previous place I was at, I was the only non-PhD-haver on technical staff (being a 'mere' postgrad), and again our boss wasn't academic at all. But he WAS a leader of men and a herder of cats, and cat herding is probably a more important skill in that role than actually knowing what those cats are talking about.

And it all works fine. I don't need an academic boss, even if I think one would be nice. I need a boss who knows how to keep the payroll from derailing, and I suspect the vast majority of science workers feel the same way.

Comment by Shayne O'Neill (shayne-o-neill) on Things I Learned by Spending Five Thousand Hours In Non-EA Charities · 2023-06-04T23:58:43.341Z · LW · GW

"The Good Samaritans" (oft abrebiated to "Good Sammys") is the name of a major local poverty charity here in australia run by the uniting church Generally well regarded and tend not to push religion too hard (compared to the salvation army). So yeah, it would appear to be a fairly recurring name. 

Comment by Shayne O'Neill (shayne-o-neill) on Seeking (Paid) Case Studies on Standards · 2023-05-27T01:26:53.491Z · LW · GW

My suspicion is that the most instructive cases to look at (modern AI really is too new a field to have much to go on in terms of mature safety standards) are how the regulation of nuclear and radiation safety has evolved over time. Early research suggested some serious x-risks that thankfully didn't pan out, for either scientific reasons (igniting the atmosphere) or logistical/political ones (cobalt bombs, Tsar Bomba-scale H-bombs), but risks arising more from the political domain (having a big gnarly nuclear war anyway) still exist that could certainly make this a less fun planet to live on. I suspect the successes and failures of the nuclear treaty system could be instructive here, given the push to integrate big AI into military hierarchies: regulating nukes is something almost everyone agrees is a very good idea, yet compliance has a less than stellar history.

They are likely out of scope for whatever your goal is here, but I do think they need serious study, because without it our attempts at regulation will just push unsafe AI to less savory jurisdictions.

Comment by Shayne O'Neill (shayne-o-neill) on How I apply (so-called) Non-Violent Communication · 2023-05-17T06:55:17.947Z · LW · GW

The term gets its name from its historical association with the nonviolence movement (think Gandhi and MLK). The basic concept in THAT movement is that when opposing the state or whatever, you essentially say "We won't use violence on you, even if you go as far as to use violence on us, but in doing that you forfeit all moral justification for your violence", as a way to force the targeted authoritarian entity to empathise with the protestor and recognize their humanity.

From that, NVC attempts to do something similar with communication. Presumably it has its roots in the 1960s nonviolence movement and in the rhetorical and communicative techniques used by black folks in the South to try to get government and civil officials to see them as equal humans.

How this translates into a modern context, separated from that specific historical setting, is another matter. But within its origin I don't think hyperbole is quite the right term, as at that point in history black folks were very much in danger of violence, particularly in the more regressive parts of the South. Again, outside of those contexts, it's unclear how the term "violence" works here.

It should be noted that Marshall Rosenberg, who originated the methodology, was not a fan of the term: he disliked it being defined in the negative (i.e. "not violent") and preferred terms that defined it in the positive, like "compassionate communication" ("is compassionate").

Comment by Shayne O'Neill (shayne-o-neill) on Rational retirement plans · 2023-05-16T13:36:55.073Z · LW · GW

I don't think he's trying to say AI won't be impactful; obviously it will. Just that trying to predict it isn't an activity one ought to apply any surety to. Soothsaying isn't a thing. There's ALWAYS been an existential threat right around the corner: gods, devils, dynamite, machine guns, nukes, AGW (that one might still end up being the one that does us in, if the political winds don't change soon), and now AI. We think that AI might go foom, but there might be some limit we just won't know about till we hit it, and we have various estimations, all contradicting each other, on how bad, or good, it might be for us. Attempting to fix those odds in firm conviction, however, is not science; it's belief.

Comment by Shayne O'Neill (shayne-o-neill) on Dark Forest Theories · 2023-05-15T01:24:50.588Z · LW · GW

Yeah, it happens largely in the first few chapters; it's not really a spoiler. It's the event the book is famous for.

Comment by Shayne O'Neill (shayne-o-neill) on Dark Forest Theories · 2023-05-15T00:47:16.950Z · LW · GW

The "Dark Forest" idea actually appeared earlier, in the 1990s novel The Killing Star by Charles Pellegrino and George Zebrowski. (I'm not implying [mod-edit]the author you cite[/mod-edit] ripped it off; I have no claims to make on that; rather, he was beaten to the punch.) And I think the Killing Star version of the idea (Pellegrino uses the metaphor "Central Park after dark") is slightly stronger.

The Killing Star's method of annihilation is the relativistic kill vehicle. Essentially, if you can accelerate a rock to relativistic speed (say 1/3 the speed of light), you have a planet buster, and such a weapon is almost unstoppable even if by sheer luck you do see the damn thing coming. It's low tech, lethal, and well within the capabilities of any species advanced enough to leave its solar system.

The most humbling feature of the relativistic bomb is that even if you happen to see it coming, its exact motion and position can never be determined; and given a technology even a hundred orders of magnitude above our own, you cannot hope to intercept one of these weapons. It often happens, in these discussions, that an expression from the old west arises: "God made some men bigger and stronger than others, but Mr. Colt made all men equal." Variations on Mr. Colt's weapon are still popular today, even in a society that possesses hydrogen bombs. Similarly, no matter how advanced civilizations grow, the relativistic bomb is not likely to go away...
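The "planet buster" claim is easy to sanity-check with the relativistic kinetic energy formula, KE = (γ − 1)mc². The rock's mass below is my own illustrative figure, not from the book:

```python
import math

C = 299_792_458.0          # speed of light, m/s
TNT_TON = 4.184e9          # joules per ton of TNT

def relativistic_ke(mass_kg: float, v: float) -> float:
    """Kinetic energy (J) of a mass moving at speed v: KE = (gamma - 1) m c^2."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    return (gamma - 1.0) * mass_kg * C ** 2

# Illustrative: a 1e9 kg rock (very roughly a 70 m cube of dense stone) at c/3.
ke = relativistic_ke(1e9, C / 3)
print(f"{ke:.2e} J ≈ {ke / (TNT_TON * 1e6):.1e} megatons of TNT")
```

That works out to on the order of a billion megatons of TNT, many orders of magnitude beyond the largest H-bombs ever detonated, from a single modestly sized rock.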


So Pellegrino argues that, as a matter of simple game theory, because diplomacy is nigh on impossible thanks to light-speed delay, the most rational response to discovering another alien civilization is "Do unto the other fellow as he would do unto you, and do it first." Since you don't know the other civilization's temperament, you can only assume it has a survival instinct, and therefore would kill you to preserve itself at even the slightest possibility that you would kill it, because you would do precisely the same.

Thus such an act of interstellar omnicide is not an act of malice or aggression, but simply self-preservation. And, of course, if you don't wish to engage in such cosmic violence, the alternative as a species is to remain very silent.

I find the whole concept absolutely terrifying, particularly in light of the fact that exoplanets DO in fact seem to be everywhere.

Of course, the real reason for the Fermi paradox might be something else: Earth's uniqueness (I have my doubts on this one); humanity's local uniqueness, i.e. advanced civilizations might be rare enough that we are well outside the travel distances of other advanced species (much more likely); or, perhaps most likely, that radio communication is just an early part of the tech tree that advanced civilizations eventually stop using.

We have, alas, precisely one example of an advanced civilization to judge by: us. That's a sample size that's rather hard to reason about.

Comment by Shayne O'Neill (shayne-o-neill) on Mental Health and the Alignment Problem: A Compilation of Resources (updated April 2023) · 2023-05-11T12:28:03.067Z · LW · GW

I think people need to remember one very, very important mantra: "I might be wrong!" We all love trying to calculate the odds, weighing up the possibilities, and then deciding "Well, I'm very informed, I must be right!" But we always have a possibility of being stonkingly, and hilariously, wrong on every count. There are no soothsayers; the future isn't here.

For all we know, AGI turns up out of the blue and it turns out to be one of those friendly Minds out of the old Iain Banks novels, fond by default of their simple mush-brained human antecedents and ready and willing to help. I mean, it's possible, right?

And it might just be like that, because we all did the work. And then you get to tell your grandkids one day, "Hey, we used to be a bit worried the Minds would kill us all. But I helped research a way to make sure that never happens." And your grandkids will think you're somewhat excellent. Isn't that a good thought?

Comment by Shayne O'Neill (shayne-o-neill) on Kurzgesagt – The Last Human (Youtube) · 2022-06-29T08:29:14.093Z · LW · GW

The count of "how many humans will be born" is a pretty useful number for moral reasoning about how our actions today relate to the future. If we neglect carbon-induced climate change because we won't be around for the worst of it, we are dooming potentially trillions of future humans to a lousy existence because of our lack of action. If we assume their lives have the same value as our own (we do have to be careful with this line of reasoning; it can have intolerable implications for a currently hot topic in the courts when taken to its logical ends), then the immorality of ignoring their plight is legion. Bad news.

Putting a number on it lets us factor that into a utilitarian calculus. Good stuff. Kurzgesagt really do science communication the right way.

Comment by Shayne O'Neill (shayne-o-neill) on A claim that Google's LaMDA is sentient · 2022-06-13T00:54:15.373Z · LW · GW