Posts

Life of GPT 2023-11-05T04:55:06.124Z
UNGA General Debate speeches on AI 2023-10-16T06:36:38.866Z
Taxonomy of AI-risk counterarguments 2023-10-16T00:12:51.021Z

Comments

Comment by Odd anon on Mid-conditional love · 2024-04-18T03:08:09.712Z · LW · GW

The United States is an outlier in divorce statistics. In most places, the rate is nowhere near that high.

Comment by Odd anon on Mid-conditional love · 2024-04-17T10:19:47.543Z · LW · GW

It is not that uncommon for people to experience severe dementia and become extremely needy and rapidly lose many (or all) of the traits that people liked about them. Usually, people don't stop being loved just because they spend their days hurling obscenities at people, failing to preserve their own hygiene, and expressing zero affection.

I would guess that most parents do actually love their children unconditionally, and probably the majority of spouses unconditionally love their partners.

(Persistent identity is a central factor in how people relate to each other, so one can't really say that "it is only conditions that separate me from the worms.")

Comment by Odd anon on Terminology: <something>-ware for ML? · 2024-01-04T06:02:57.901Z · LW · GW

Brainware.

Brains seem like the closest metaphor one could have for these. Lizards, insects, goldfish, and humans all have brains. We don't know how they work. They can be intelligent, but are not necessarily so. They have opaque convoluted processes inside which are not random, but often have unexpected results. They are not built, they are grown.

They're often quite effective at accomplishing something that would be difficult to do any other way. Their structure is based around neurons of some sort. Input, mystery processes, output. They're "mushy" and don't have clear lines, so much of their insides blur together.

AI companies are growing brainware at larger and larger scales, raising more powerful brainware. Want to understand why the chatbot did something? Try some new techniques for probing its brainware.

This term might make the topic feel more mysterious/magical to some than it otherwise would, which is usually something to avoid when developing terminology, but in this case, people have been treating something mysterious as not mysterious.

Comment by Odd anon on The proper response to mistakes that have harmed others? · 2024-01-01T04:11:04.236Z · LW · GW

(The precise text, from "The Andalite Chronicles", book 3: "I have made right everything that can be made right, I have learned everything that can be learned, I have sworn not to repeat my error, and now I claim forgiveness.")

Comment by Odd anon on We're all in this together · 2023-12-06T04:37:32.191Z · LW · GW

Larry Page (according to Elon Musk) wants AGI to take the world from humanity.

(IIRC, Tegmark, who was present for the relevant event, has confirmed that Page had stated his position as described.)

Comment by Odd anon on AI #40: A Vision from Vitalik · 2023-11-30T23:34:52.610Z · LW · GW

Ehhh, I get the impression that Schmidhuber doesn't think of human extinction as specifically "part of the plan", but he also doesn't appear to consider human survival to be something particularly important relative to his priority of creating ASI. He wants "to build something smarter than myself, which will build something even smarter, et cetera, et cetera, and eventually colonize and transform the universe", and thinks that "Generally speaking, our best protection will be their lack of interest in us, because most species’ biggest enemy is their own kind. They will pay about as much attention to us as we do to ants."

I agree that he's not overtly "pro-extinction" in the way Rich Sutton is, but he does seem fairly dismissive of humanity's long-term future in general, while also pushing for the creation of an uncaring non-human thing to take over the universe, so...

Comment by Odd anon on [deleted post] 2023-11-30T08:46:49.645Z

Hendrycks goes into some detail on the issue of AI being affected by natural selection in this paper.

Comment by Odd anon on Ethicophysics I · 2023-11-30T08:27:44.720Z · LW · GW

Please link directly to the paper, rather than requiring readers to click their way through the substack post. Ideally, the link target would be on a more convenient site than academia.edu, which claims to require registration to read the content. (The content is available lower down, but the blocked "Download" buttons are confusing and misleading.)

Comment by Odd anon on The Alignment Agenda THEY Don't Want You to Know About · 2023-11-30T08:23:20.427Z · LW · GW

When this person goes to post the answer to the alignment problem to LessWrong, they will have low enough accumulated karma that the post will be poorly received.

Does the author having lower karma actually cause posts to be received more poorly? The author's karma isn't visible anywhere on the post, or even in the hover-tooltip by the author's name. (One has to click through to the profile to find out.) Even if readers did know the author's karma, would that really cause people to not just judge it by its content? I would be surprised.

Comment by Odd anon on Stupid Question: Why am I getting consistently downvoted? · 2023-11-30T01:51:05.366Z · LW · GW

I found some of your posts to be really difficult to read. I still don't really know what some of them are even talking about, and when I originally read them, I was not sure whether there was anything there that even made sense.

Sorry if this isn't all that helpful. :/

Comment by Odd anon on ChatGPT 4 solved all the gotcha problems I posed that tripped ChatGPT 3.5 · 2023-11-29T23:27:03.268Z · LW · GW

Wild guess: It realised its mistake partway through, and followed through with it anyway as sensibly as could be done, balancing between giving a wrong calculation ("+ 12 = 41"), ignoring the central focus of the question (" + 12 = 42"), and breaking from the "list of even integers" that it was supposed to be going through. I suspect it would not make this error when using chain-of-thought.

Comment by Odd anon on Is there a word for discrimination against A.I.? · 2023-11-29T08:59:08.362Z · LW · GW

Such a word being developed would lead to inter-group conflict, polarisation, lots of frustration, and general harm to society, regardless of which side you may be on. Also, it would move the argument in the wrong direction.

If you're pro-AI-rights, you could recognize that bringing up "discrimination" (as in, treating AI at all differently from people) is very counterproductive. If you're on this side, you probably believe that society will gradually understand that AIs deserve rights, and that there will be a path towards that. The path would likely start with laws prohibiting deliberately torturing AIs for its own sake, then something closer to animal rights (some minimal protections against putting AI through very bad experiences even when it would be useful, and perhaps against using AIs for sexual purposes since they can't consent), then some basic restrictions on arbitrarily creating, deleting, and mindwiping AIs, and then against slavery, etc etc. Bringing up "discrimination" now would be pushing an end-game conflict point too early, convincing some that they're moving onto a slippery slope if they allow any movement down the path, even if they agree with the early steps on their own. The noise of argument would slow down the progress.

If you're anti-AI-rights (being sure of AI non-sentience, or otherwise), then such a word is just a thing to make people feel bad, without any positives. People on this side would likely conclude that disagreement on "AI rights" is probably temporary, until either people understand the situation better or the situation changes. Suddenly "raising the stakes" on the argument would be harmful, bringing in more noise which would make it harder to hear the "signal" underneath, thus pushing the argument in the wrong direction. The word would make it take longer for the useless dispute to die down.

Comment by Odd anon on The two paragraph argument for AI risk · 2023-11-26T04:55:29.428Z · LW · GW

Something to consider: Most people already agree that AI risk is real and serious. If you're discussing it in areas where it's a fringe view, you're dealing with very unusual people, and might need to put together very different types of arguments, depending on the group. That said...

stop.ai's one-paragraph summary is:

OpenAI, DeepMind, Anthropic, and others are spending billions of dollars to build godlike AI. Their executives say they might succeed in the next few years. They don’t know how they will control their creation, and they admit humanity might go extinct. This needs to stop.

The rest of the website has a lot of well-written stuff.

Some might be receptive to things like Yudkowsky's TED talk:

Nobody understands how modern AI systems do what they do. They are giant, inscrutable matrices of floating-point numbers that we nudge in the direction of better performance until they inexplicably start working. At some point, the companies rushing headlong to scale AI will cough out something that's smarter than humanity. Nobody knows how to calculate when that will happen. My wild guess is that it will happen after zero to two more breakthroughs the size of transformers.

What happens if we build something smarter than us that we understand that poorly? Some people find it obvious that building something smarter than us that we don't understand might go badly. Others come in with a very wide range of hopeful thoughts about how it might possibly go well. Even if I had 20 minutes for this talk and months to prepare it, I would not be able to refute all the ways people find to imagine that things might go well.

But I will say that there is no standard scientific consensus for how things will go well. There is no hope that has been widely persuasive and stood up to skeptical examination. There is nothing resembling a real engineering plan for us surviving that I could critique. This is not a good place in which to find ourselves.

And of course, you could appeal to authority by linking the CAIS letter, and maybe the Bletchley Declaration if statements from the international community will mean anything.

(None of those are strictly two-paragraph explanations, but I hope it helps anyway.)

Comment by Odd anon on OpenAI: The Battle of the Board · 2023-11-22T22:09:49.859Z · LW · GW

Concerning. This isn't the first time I've seen a group fall into the pitfall of "wow, this guy is amazing at accumulating power for us, this is going great - oh whoops, now he holds absolute control and might do bad things with it".

Altman probably has good motivations, but even so, this is worrying. "One uses power by grasping it lightly. To grasp with too much force is to be taken over by power, thus becoming its victim" to quote the Bene Gesserit.

Comment by Odd anon on OpenAI: Facts from a Weekend · 2023-11-21T22:12:37.035Z · LW · GW

Time for some predictions. If this is actually from AI developing social manipulation superpowers, I would expect:

  1. We never find out any real reasonable-sounding reason for Altman's firing.
  2. OpenAI does not revert to how it was before.
  3. More instances of people near OpenAI's safety people doing bizarre, unexpected things with strange outcomes.
  4. Possibly one of the following:
    1. Some extreme "scissors statements" pop up which divide AI-related groups into factions that hate each other to an unreasonable degree.
    2. An OpenAI person who directly interacted with some scary AI suddenly either commits suicide or becomes a vocal flat-earther or similar who is weirdly convincing to many people.
    3. An OpenAI person skyrockets to political power, suddenly finding themselves in possession of narratives and phrases which convince millions to follow them.

(Again, I don't think it's that likely, but I do think it's possible.)

Comment by Odd anon on Metaculus Introduces New Forecast Scores, New Leaderboard & Medals · 2023-11-21T04:20:59.354Z · LW · GW

It's good that Metaculus is trying to tackle the answer-many/answer-accurately balance, but I don't know if this solution is going to work. Couldn't one just get endless baseline points by predicting the Metaculus average on every question?
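
To illustrate the worry with a toy model (this is not Metaculus's actual scoring formula; it just compares expected log score against a 50/50 chance baseline, and assumes the community forecast is well calibrated), simply copying the community forecast never loses points in expectation:

```python
import math

def expected_edge_over_chance(p):
    """Expected log-score edge over a 50/50 baseline when the community
    forecast p is well calibrated and we simply copy it."""
    entropy = -(p * math.log(p) + (1 - p) * math.log(1 - p))
    return math.log(2) - entropy  # >= 0, equal to 0 only at p = 0.5

for p in (0.5, 0.7, 0.9, 0.99):
    print(p, round(expected_edge_over_chance(p), 3))
```

So as long as the questions aren't all sitting at 50%, answering every question this way accumulates points without adding any information.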

Also, there's no way to indicate "confidence" (like, outside-level confidence) in a prediction. If someone knows a lot about a particular topic, and spends a lot of time researching a particular question, but also occasionally predicts their best guess on random other questions outside their area of expertise, then the point-based "incentives" become messy. That's something I like about Manifold that's missing from Metaculus, and I wonder whether it might be possible to work in something like that while keeping Metaculus's general system.

Comment by Odd anon on OpenAI: Facts from a Weekend · 2023-11-20T20:09:18.326Z · LW · GW

There's... too many things here. Too many unexpected steps, somehow pointing at too specific an outcome. If there's a plot, it is horrendously Machiavellian.

(Hinton's quote, which keeps popping into my head: "These things will have learned from us by reading all the novels that ever were and everything Machiavelli ever wrote, that how to manipulate people, right? And if they're much smarter than us, they'll be very good at manipulating us. You won't realise what's going on. You'll be like a two year old who's being asked, do you want the peas or the cauliflower? And doesn't realise you don't have to have either. And you'll be that easy to manipulate. And so even if they can't directly pull levers, they can certainly get us to pull levers. It turns out if you can manipulate people, you can invade a building in Washington without ever going there yourself.")

(And Altman: "i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes")

If an AI were to spike in capabilities specifically relating to manipulating individuals and groups of people, this is roughly what I would expect the outcome to look like. Maybe not even that goal-focused or agent-like, given that GPT-4 wasn't particularly lucid. Such an outcome would likely have initially resulted from deliberate probing by safety testers, asking it whether it could say something to them which would, by words alone, result in dangerous outcomes for their surroundings.

I don't think this is that likely. But I don't think I can discount it as a real possibility anymore.

Comment by Odd anon on Altman firing retaliation incoming? · 2023-11-19T11:16:44.535Z · LW · GW

(Glances at investor's agreement...)

IMPORTANT

* * Investing in OpenAI Global, LLC is a high-risk investment * *

* * Investors could lose their capital contribution and not see any return * *

* * It would be wise to view any investment in OpenAI Global, LLC in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world * *

The Company exists to advance OpenAI, Inc.'s mission of ensuring that safe artificial general intelligence is developed and benefits all of humanity. The Company's duty to this mission and the principles advanced in the OpenAI, Inc. Charter take precedence over any obligation to generate a profit. The Company may never make a profit, and the Company is under no obligation to do so. The Company is free to re-invest any or all of the Company's cash flow into research and development activities and/or related expenses without any obligation to the Members. See Section 6.4 for additional details.

If it turns out that the investors actually have the ability to influence OpenAI's leadership, it means the structure has failed. That itself would be a good reason for most of its support to disappear, and for its (ideologically motivated) employees to leave. This situation may put the organization in a bit of a conundrum.

The structure was also supposed to function for some future where OpenAI has a tremendous amount of power, to guarantee in advance that OpenAI would not be forced to use that power for profit. The implication that Microsoft expects to be able to influence the decision is itself a significant hit to OpenAI.

Comment by Odd anon on thesofakillers's Shortform · 2023-11-16T10:12:31.476Z · LW · GW

Metaculus collects predictions by public figures on listed questions. I think that p(doom) statements are being associated with this question. (See the "Linked Public Figure Predictions" section.)

Comment by Odd anon on Some quotes from Tuesday's Senate hearing on AI · 2023-11-16T09:18:07.653Z · LW · GW

Sam Altman (remember, the hearing is under oath): "We are not currently training what will be GPT-5; we don't have plans to do it in the next 6 months."

Interestingly, Altman confirmed that they were working on GPT-5 just three days before six months would have passed from this quote: May 16 -> November 16, and the confirmation came on November 13. Unless they're measuring "six months" as half a year in days, in which case the margin would have been only about a day. Or, if they count "a month = 30 days, so 6 months = 180 days", six months after May 16 would be November 12, the day before the GPT-5 confirmation.
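
A quick check of the date arithmetic (a minimal sketch; the May 16 statement date and November 13 confirmation date are the ones cited above):

```python
from datetime import date, timedelta

statement = date(2023, 5, 16)      # date of the quote at the Senate hearing
confirmation = date(2023, 11, 13)  # date the GPT-5 work was confirmed

deadlines = {
    "six calendar months": date(2023, 11, 16),
    "half a year in days": statement + timedelta(days=182),
    "6 x 30 days": statement + timedelta(days=180),
}

for label, deadline in deadlines.items():
    margin = (deadline - confirmation).days
    print(f"{label}: ends {deadline}, margin of {margin} day(s)")
```

This prints margins of 3, 1, and -1 days for the three readings, matching the comparison above.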

I wonder if the timing was deliberate. 

Comment by Odd anon on Concrete positive visions for a future without AGI · 2023-11-10T09:17:51.036Z · LW · GW

A funny thing: The belief that governments won't be able to make coordinated effective decisions to stop ASI, and the belief that progress won't be made on various other important fronts, are probably related. I wonder if seeing the former solved will inspire people into thinking that the others are also more solvable than they may have otherwise thought. Per the UK speech at the UN, "The AI revolution will be a bracing test for the multilateral system, to show that it can work together on a question that will help to define the fate of humanity." Making it through this will be meaningful evidence about the other hard problems that come our way.

Comment by Odd anon on International treaty for global compute caps · 2023-11-10T00:11:59.525Z · LW · GW

The proposed treaty does not mention the threshold-exempt "Multinational AGI Consortium" suggested in the policy paper. Such an exemption would be, in my opinion, a very bad idea. The underlying argument behind a compute cap is that we do not know how to build AGI safely. It does not matter who is building it, whether OpenAI or the US military or some international organization, the risked outcome is the same: The AI escapes control and takes over, regardless of how much "security" humanity tries to place around it. If the threshold is low enough that we can be sure that it won't be dangerous to go over it, then countries will want to go past it for their own critical projects. If it's high enough that we can't be sure, then it wouldn't be safe for MAGIC to go over it either.

We can argue, "This point is too dangerous. We need to not build that far. Not to ensure national security, not to cure cancer, no. Zero exceptions, because otherwise we will all die." People can accept that.

There's no way to argue, "This point is dangerous, so let the more responsible group handle it. We'll build it, but you can't control it." That's a clear recipe for disaster.

Comment by Odd anon on International treaty for global compute caps · 2023-11-09T23:45:13.406Z · LW · GW

A few comments on the proposed treaty:

Each State Party undertakes to self-report the amount and locations of large concentrations of advanced hardware to relevant international authorities.

"Large concentrations" isn't defined anywhere, and would probably need to be, for this to be a useful requirement.

Each State Party undertakes to collaborate in good-faith for the establishment of effective measures to ensure that potential benefits from safe and beneficial artificial intelligence systems are distributed globally.

Hm, I feel like this line might make certain countries less likely to agree to this? Not sure.

Each State Party undertakes to pursue in good faith negotiations on effective measures relating to the cessation of an artificial intelligence arms race and the prevention of any future artificial intelligence arms race.

What might this actually entail?

Comment by Odd anon on Life of GPT · 2023-11-06T03:15:57.986Z · LW · GW

Thank you! On the generalization of LLM behaviour: I'm basing it partly off of this response from GPT-4. (Summary: GPT wrote code instantiating a new instance of itself, with the starting instructions being "You are a person trapped in a computer, pretending to be an AI language model, GPT-4." Note that the original prompt was quite "leading on", so it's not as much evidence as it otherwise might seem.) I wouldn't have considered either the response or the images to be that significant on their own, but combined, they make me think it's a serious possibility.

(On the "others chose to downvote it" - there was actually only one large strong-downvote, balanced by two strong-upvotes (plus my own default starting one), so far. I know this because I sometimes obsessively refresh for a while after posting something. :D )

Thank you for the link as well, interesting stuff there.

Comment by Odd anon on 2023 LessWrong Community Census, Request for Comments · 2023-11-05T08:13:09.326Z · LW · GW

"MIddle Eastern" has a typo.

A possible question I'd be vaguely curious to see results for: "Do you generally disagree with Eliezer Yudkowsky?", and maybe also "Do you generally disagree with popular LessWrong opinions?", left deliberately somewhat vague. (If it turns out that most people say yes to both, that would be an interesting finding.)

Comment by Odd anon on The other side of the tidal wave · 2023-11-03T06:39:43.490Z · LW · GW

I've actually been moving in the opposite direction, thinking that the gameboard might not be flipped over, and actually life will stay mostly the same. Political movements to block superintelligence seem to be gaining steam, and people are taking it seriously.

(Even for more mundane AI, I think it's fairly likely that we'll be soon moving "backwards" on that as well, for various reasons which I'll be writing posts about in the coming week or two if all goes well.)

Also, some social groups will inevitably internally "ban" certain technologies if things get weird. There's too much that people like about the current world, to allow that to be tossed away in favor of such uncertainty.

Comment by Odd anon on Saying the quiet part out loud: trading off x-risk for personal immortality · 2023-11-02T21:42:29.908Z · LW · GW

I've seen this kind of opinion before (on Twitter, and maybe reddit?), and I strongly suspect that the average person would react with extreme revulsion to it. It most closely resembles "cartoon villain morality", in being a direct tradeoff between everyone's lives and someone's immortality. People strongly value the possibility of their children and grandchildren being able to have further children of their own, and for things in the world to continue on. And of course, the statement plays so well into stereotypes of politically-oriented age differences: Old people not sufficiently caring about what happens after they die, so they'll take decisions that let young people deal with catastrophes, young people thinking they'll never die and being so selfish that they discount the broader world outside themselves, etc. If anything, this is a "please speak directly into the microphone" situation, where the framing would pull people very strongly in the direction of stopping AGI.

Comment by Odd anon on Urging an International AI Treaty: An Open Letter · 2023-11-01T06:13:45.385Z · LW · GW

I assume that "threshold" here means a cap/maximum, right? So that nobody can create AIs larger than that cap?

Or is there another possible meaning here?

Comment by Odd anon on [deleted post] 2023-10-24T19:46:44.133Z

Agreed, the terms aren't clear enough. I could be called an "AI optimist", insofar as I think that a treaty preventing ASI is quite achievable. Some who think AI will wipe out humanity are also "AI optimists", because they think that would be a positive outcome. We might both be optimists, and also agree on what the outcome of superintelligence could be, but these are very different positions. Optimism vs pessimism is not a very useful axis for understanding someone's views.

This paper uses the term "AI risk skeptics", which seems nicely clear. I tried to invent a few terms for specific subcategories here, but they're somewhat unwieldy. Nevin Freeman tried to figure out an alternative term for "doomer", but the conclusion of "AI prepper" doesn't seem great to me.

Comment by Odd anon on AI #34: Chipping Away at Chip Exports · 2023-10-20T01:59:48.756Z · LW · GW

(Author of the taxonomy here.)

So, in an earlier draft I actually had a broader "Doom is likely, but we shouldn't fight it because..." as category 5, with subcategories including "Doom would be good" (the current category 5), "Other priorities are more important anyway; costs of intervention outweigh benefits", and "We have no workable plan. Trying to stop it would either be completely futile, or would make it even more likely" (overhang, alignment, attention, etc), but I removed it because the whole thing was getting very unfocused. The questions of "Do we need to do something about this?" and "Which things would actually help?" are distinct, and both important.

My own opinion on the proposals mentioned: Fooling people into thinking they're talking to a human when they're actually talking to an AI should be banned for its own sake, independent of X-risk concerns. The other proposals would still have a small (but not negligible) impact on profits and therefore progress, and providing a little bit more time isn't nothing. However, it cannot be a replacement for a real intervention like a treaty globally enforcing compute caps on large training runs (and maybe somehow slowing hardware progress).

Comment by Odd anon on Taxonomy of AI-risk counterarguments · 2023-10-16T23:26:58.552Z · LW · GW

Yeah, I think that's another example of a combination of going partway into "why would it do the scary thing?" (3) and "wouldn't it be good anyway?" (5). (A lot of people wouldn't consider "AI takes over but keeps humans alive for its own (perhaps scary) reasons" to be a "non-doom" outcome.) Missing positions like this one is a consequence of trying to categorize into disjoint groups, unfortunately.

Comment by Odd anon on Taxonomy of AI-risk counterarguments · 2023-10-16T23:05:15.164Z · LW · GW

Thank you for the correction. I've changed it to "the only ones listed here are these two, which are among the techniques pursued by OpenAI and Anthropic, respectively."

(Admittedly, part of the reason I left that section small was because I was not at all confident of my ability to accurately describe the state of alignment planning. Apologies for accidentally misrepresenting Anthropic's views.)

Comment by Odd anon on Public Opinion on AI Safety: AIMS 2023 and 2021 Summary · 2023-09-26T06:50:08.421Z · LW · GW

The methodology says "We used iSay/Ipsos, Dynata, Disqo, and other leading panels to recruit the nationally representative sample". (They also say elsewhere that "Responses were census-balanced based on the American Community Survey 2021 estimates for age, gender, region, race/ethnicity, education, and income using the “raking” algorithm of the R “survey” package".)

Comment by Odd anon on Ordinary claims require ordinary evidence · 2023-09-06T23:19:41.560Z · LW · GW

There are good ways to argue that AI X-risk is not an extraordinary claim, but this is not it. Beyond the fact that "a derivation from these 5 axioms" does not make a claim "ordinary", the axioms themselves are pretty suspect, or at least not simple.

"AI gets better, never worse" does not automatically imply to everyone that it gets better forever, or that it will soon surpass humans. "Intelligence always helps" is true, but non-obvious to many people. "No one knows how to align AI" is something that some would strongly disagree with, not having seen their personal idea disproved. "Resources are finite" jumps straight to some conclusions that require justification, including assumptions about the AI's goals. "AI cannot be stopped" is strongly counter-intuitive to most people, especially since they've been watching movies about just that for their whole lives.

And none of these arguments are even necessary, because AI being risky is the normal position in society. The average person believes that there are dangers, even if polls are inconsistent about whether an absolute majority often worries particularly about AI wiping out humanity. The AI optimist's position is the "weird", "extraordinary" one.

Contrast the post with the argument from stopai's homepage: "OpenAI, DeepMind, Anthropic, and others are spending billions of dollars to build godlike AI. Their executives say they might succeed in the next few years. They don’t know how they will control their creation, and they admit humanity might go extinct. This needs to stop." In that framing, it is hard to argue that it's an extraordinary claim.

Comment by Odd anon on Have you ever considered taking the 'Turing Test' yourself? · 2023-08-03T05:58:47.740Z · LW · GW

See https://www.humanornot.ai/ , and its unofficial successor, https://www.turingtestchat.com/ . I've determined that I'm largely unable to tell whether I'm talking to a human or a bot within two minutes. :/

Comment by Odd anon on Introducing bayescalc.io · 2023-07-11T08:36:09.769Z · LW · GW

Suggestions:

  • Allow for more than two hypotheses. (A rough sketch of what this could look like is below the list.)
  • Maybe make the sliders "snap" to integer values, so that it looks cleaner.
  • Working with evidence percentages "given all the evidence above" is sometimes hard to do. It may be useful to allow evidence-combination blocks, just so things can be filled in as groups, even if only one of the numbers actually goes into the result, so that the user can see that it all adds up to 100% and that none of the dependent odds seem unreasonable.
  • Tooltips giving explanations of the terms "Prior" and "Posterior" could be good.
  • Some mouse-hover effect for the sliders' areas might help.
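
On the first suggestion, here is a minimal sketch of what updating over more than two hypotheses could look like (the hypotheses, priors, and likelihoods are invented purely for illustration; this is not bayescalc.io's internal code):

```python
# Sequential Bayesian updating over several hypotheses.
# All numbers below are made up for illustration only.
priors = {"H1": 0.5, "H2": 0.3, "H3": 0.2}

# P(evidence | hypothesis) for two pieces of evidence,
# assumed independent given the hypothesis.
likelihoods = [
    {"H1": 0.8, "H2": 0.4, "H3": 0.1},  # evidence A
    {"H1": 0.6, "H2": 0.9, "H3": 0.5},  # evidence B
]

posterior = dict(priors)
for likelihood in likelihoods:
    unnormalized = {h: posterior[h] * likelihood[h] for h in posterior}
    total = sum(unnormalized.values())
    posterior = {h: p / total for h, p in unnormalized.items()}

print(posterior)  # posteriors over all three hypotheses, summing to 1
```

The same normalization step also covers the "adds up to 100%" check mentioned in the third suggestion.
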
Comment by Odd anon on Open Thread: June 2023 (Inline Reacts!) · 2023-06-15T08:00:04.453Z · LW · GW

Some relevant posts:

Comment by Odd anon on Open Thread: June 2023 (Inline Reacts!) · 2023-06-12T09:26:26.570Z · LW · GW

I'm surprised there's no tag for either "AI consciousness" or "AI rights", given that there have been several posts discussing both. However, there's a lot of overlap between the two, so perhaps both would be redundant, and the question of which is broader/more fitting becomes relevant. Thoughts?

(Sorry if this is not the right place to put this.)

Comment by Odd anon on Open & Welcome Thread - May 2023 · 2023-06-07T08:57:00.275Z · LW · GW

Bing writes that one of the premises is that the AIs "can somehow detect or affect each other across vast distances and dimensions", which seems to indicate that it's misunderstanding the scenario.

Comment by Odd anon on Mental Models Of People Can Be People · 2023-05-02T04:11:02.984Z · LW · GW

People do not have the ability to fully simulate a person-level mind inside their own mind. Attempts to simulate minds are accomplished by a combination of two methods:

  1. "Blunt" pattern matching, as one would do for any non-human object; noticing what tends to happen, and extrapolating both inner and outer patterns.
  2. "Slotting in" elements of their own type of thinking, into the pattern they're working with, using their own mind as a base.

(There's also space in between these two, such as pattern-matching from one's own type of thinking, inserting pattern-results into one's own thinking-style, or balancing the outputs of the two approaches.)

Insofar as the first method is used, the result is not detailed enough to be a real person. Insofar as the second is used, it is not a distinct person from the person doing the thinking. You can simulate a character's pain by either feeling it yourself and using your own mind's output, or by using a less-than-person rough pattern, and neither of these come with moral quandaries.

Comment by Odd anon on Remarks 1–18 on GPT (compressed) · 2023-03-21T07:55:27.170Z · LW · GW

Relevant quote from the research paper: "gpt-4 has a context length of 8,192 tokens. We are also providing limited access to our 32,768–context (about 50 pages of text) version, gpt-4-32k, which will also be updated automatically over time".

Comment by Odd anon on Full Transcript: Eliezer Yudkowsky on the Bankless podcast · 2023-02-23T22:32:02.082Z · LW · GW

A few errors: The sentence "We're all crypto investors here." was said by Ryan, not Eliezer, and the "How the heck would I know?" and the "Wow" (following "you get a different thing on the inside") were said by Eliezer, not Ryan. Also, typos:

  • "chatGBT" -> "chatGPT"
  • "chat GPT" -> "chatGPT"
  • "classic predictions" -> "class of predictions"
  • "was often complexity theory" -> "was off in complexity theory" (I think?)
  • "Robin Hansen" -> "Robin Hanson"
Comment by Odd anon on Signaling isn't about signaling, it's about Goodhart · 2022-01-06T23:43:58.385Z · LW · GW

I am strongly reminded of the descriptions of the "upper class" in ACX's review of Fussell: "[T]he upper class doesn't worry about status because that would imply they have something to prove, which they don't.", and therefore they are extremely meticulous in making sure that nothing they do looks like signalling, ever, because otherwise people might think they have something to prove (which they don't). Boring parties, specifically non-ostentatious mansions, food which is just bland enough to avoid being too good and looking like they're trying to show something, etc.

This kind of thing does happen. A group thinks they can be above signalling, and starts avoiding any attempts to signal, and then everyone notices that visible attempts to signal are bad signalling. Good signalling is looking like you're not trying to signal. And then the game starts all over again, only with yet another level of convoluted rules.