Posts

Towards Evaluating AI Systems for Moral Status Using Self-Reports 2023-11-16T20:18:51.730Z
80k podcast episode on sentience in AI systems 2023-03-15T20:19:09.251Z
What to think when a language model tells you it's sentient 2023-02-21T00:01:54.585Z
Key questions about artificial sentience: an opinionated guide 2022-04-25T12:09:39.322Z
Using Brain-Computer Interfaces to get more data for AI alignment 2021-11-07T00:00:28.072Z
Who has argued in detail that a current AI system is phenomenally conscious? 2021-05-14T22:03:12.882Z

Comments

Comment by Robbo on Universal Love Integration Test: Hitler · 2024-01-12T14:52:59.348Z · LW · GW

That poem was not written by Hitler.

According to this website and other reputable-seeming sources, the German poet Georg Runsky published that poem, "Habe Geduld", around 1906.

On 14 May 1938 a copy of this poem was printed in the Austrian weekly Agrarische Post, under the title 'Denke es'. It was then falsely attributed to Adolf Hitler.

In John Toland's 1976 Hitler biography, it appeared for the first time in English translation. Toland made the mistake of identifying it as a genuine Hitler poem, supposedly written in 1923.

Comment by Robbo on Book Review: Going Infinite · 2023-10-24T21:49:34.578Z · LW · GW

There was a rush to deontology that died away quickly, mostly retreating back into its special enclave of veganism.

Can you explain what you mean by the second half of that sentence?

Comment by Robbo on My Experience With Loving Kindness Meditation · 2023-03-01T01:51:36.790Z · LW · GW

https://hollyelmore.substack.com/p/i-believed-the-hype-and-did-mindfulness-meditation-for-dumb-reasons-now-im-trying-to-reverse-the-damage

and some links therein

Comment by Robbo on What to think when a language model tells you it's sentient · 2023-02-22T00:45:37.169Z · LW · GW

To clarify, which question were you thinking it is more interesting than? I see that as one of the questions raised in the post. But perhaps you are contrasting "realize it is conscious by itself" with the methods discussed in "Could we build language models whose reports about sentience we can trust?"

Comment by Robbo on What to think when a language model tells you it's sentient · 2023-02-21T04:05:41.647Z · LW · GW

I think I'd need to hear more about what you mean by sapience (the link didn't make it entirely clear to me) and why that would ground moral patienthood. In my opinion, there are indeed other plausible grounds for moral patienthood besides sentience (which, its ambiguity notwithstanding, I think can be used about as precisely as 'sapience'; see my note on usage), most notably desires, preferences, and goals. Perhaps those are part of what you mean by 'sapience'?

Comment by Robbo on Key questions about artificial sentience: an opinionated guide · 2022-04-29T14:26:26.224Z · LW · GW

Great, thanks for the explanation. Just curious to hear your framework, no need to reply:

- If you do have some notion of moral patienthood, what properties do you think are important for moral patienthood? Do you think we face uncertainty about whether animals or AIs have these properties?
- If you don't, are there questions in the vicinity of "which systems are moral patients" that you do recognize as meaningful?

Comment by Robbo on Key questions about artificial sentience: an opinionated guide · 2022-04-28T15:04:47.046Z · LW · GW

Very interesting! Thanks for your reply, and I like your distinction between questions:

Positive valence involves attention concentration whereas negative valence involves diffusion of attention / searching for ways to end this experience.

Can you elaborate on this? What do attention concentration vs. diffusion mean? Pain seems to draw attention to itself (and to motivate action to alleviate it), so on my normal understanding of "concentration", pain involves concentration. But I think I'm just unfamiliar with how you / 'the literature' use these terms.

Comment by Robbo on Key questions about artificial sentience: an opinionated guide · 2022-04-28T15:00:34.196Z · LW · GW

I'm trying to get a better idea of your position. Suppose that, as TAG also replied, "realism about phenomenal consciousness" does not imply that consciousness is somehow fundamentally different from other forms of organization of matter. Suppose I'm a physicalist and a functionalist, so I think phenomenal consciousness just is a certain organization of matter. Do we still then need to replace "theory" with "ideology" etc?

Comment by Robbo on Key questions about artificial sentience: an opinionated guide · 2022-04-28T14:56:21.900Z · LW · GW

to say that [consciousness] is the only way to process information

I don't think anyone was claiming that. My post certainly doesn't. If one thought consciousness were the only way to process information, wouldn't there not even be an open question about which (if any) information-processing systems can be conscious?

Comment by Robbo on Key questions about artificial sentience: an opinionated guide · 2022-04-28T14:51:19.630Z · LW · GW

A few questions:

  1. Can you elaborate on this?

Suffering seems to need a lot of complexity

and also seems deeply connected to biological systems.

I think I agree. Of course, all of the suffering that we know about so far is instantiated in biological systems. But it depends on what you mean by "deeply connected." Do you mean that the biological substrate is necessary, i.e. that you have a biological theory of consciousness?

AI/computers are just a "picture" of these biological systems.

What does this mean?

Now, we could someday crack consciousness in electronic systems, but I think it would be winning the lottery to get there not on purpose.

Can you elaborate? Are you saying that, unless we deliberately try to build in some complex stuff that is necessary for suffering, AI systems won't 'naturally' have the capacity for suffering? (i.e. you've ruled out the possibility that Steven Byrnes raised in his comment)

Comment by Robbo on Key questions about artificial sentience: an opinionated guide · 2022-04-26T12:53:13.419Z · LW · GW

Thanks for this great comment! Will reply to the substantive stuff later, but first - I hadn't heard of the Welfare Footprint Project! Super interesting and relevant; thanks for bringing it to my attention.

Comment by Robbo on Key questions about artificial sentience: an opinionated guide · 2022-04-25T20:04:18.693Z · LW · GW

A third (disconcerting) possibility is that the list of demands amounts to saying “don’t ever build AGIs”

That would indeed be disconcerting. I would hope that, in this world, it's possible and profitable to have AGIs that are sentient but which don't suffer in quite the same way, or as badly, as humans and animals do. It would be nice - but is by no means guaranteed - if the really bad mental states we can get sit at a kinda arbitrary and non-natural point in mind-space. This is all very hard to think about though, and I'm not sure what I think.

I’m hopeful (and hoping!) that one can soften the “we are rejecting strong illusionism” claim in #3 without everything else falling apart.

I hope so too. I was more optimistic about that until I read Kammerer's paper; then I found myself getting worried. I need to understand that paper more deeply and figure out what I think. One thing Kammerer worries about is that, on illusionism (or even just good old-fashioned materialism), "moral patienthood" will have vague boundaries. Fortunately, I'm not as worried about that, and I'm guessing you aren't either. So maybe if we're fine with fuzzy boundaries around moral patienthood, things aren't so bad.

But I think there's other more worrying stuff in that paper - I should write up a summary some time soon!

Comment by Robbo on Key questions about artificial sentience: an opinionated guide · 2022-04-25T19:55:03.016Z · LW · GW

Thanks, I'll check it out! I agree that the meta-problem is a super promising way forward

Comment by Robbo on Key questions about artificial sentience: an opinionated guide · 2022-04-25T19:54:25.431Z · LW · GW

The whole field seems like an extreme case of anthropomorphizing to me.

Which field? Some of these fields and findings are explicitly about humans; I take it you mean the field of AI sentience, such as it is?

Of course, we can't assume that what holds for us holds for animals and AIs, and we have to be wary of anthropomorphizing. That issue also comes up in studying, e.g., animal sentience and animal behavior. But what exactly were you thinking is anthropomorphizing? To be clear, I think we have to think carefully about what will and will not carry over from what we know about humans and animals.

The "valence" thing in humans is an artifact of evolution

I agree. Are you thinking that this means that valenced experiences couldn't happen in AI systems? Are unlikely to? Would be curious to hear why.

where most of the brain is not available to introspection because we used to be lizards and amoebas

I also agree with that. What was the upshot of this supposed to be?

That's not at all how the AI systems work

What's not how the AI systems work? (I'm guessing this will be covered by my other questions)

Comment by Robbo on There is essentially one best-validated theory of cognition. · 2021-12-10T21:10:15.201Z · LW · GW

would read a review!

Comment by Robbo on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-24T18:18:06.840Z · LW · GW

This author is: https://fantasticanachronism.com/2021/03/23/two-paths-to-the-future/

"I believe the best choice is cloning. More specifically, cloning John von Neumann one million times"

Comment by Robbo on Yudkowsky and Christiano discuss "Takeoff Speeds" · 2021-11-24T17:28:59.448Z · LW · GW

I guess even though I don't disagree that knowledge accumulation has been a bottleneck for humans dominating all other species, I don't see any strong reason to think that knowledge accumulation will be a bottleneck for an AGI dominating humans, since the limits to human knowledge accumulation seem mostly biological. Humans seem to get less plastic with age, mortality among other things forces us to specialize our labor, we have to sleep, we lack serial depth, we don't even approach the physical limits on speed, we can't run multiple instances of our own source, we have no previous example of an industrial civilization to observe, I could go on: a list of biological fetters that either wouldn't apply to an AGI or that an AGI could emulate inside of a single mind instead of across a civilization.

I agree with this, and I think you are hitting on a key reason that these debates don't hinge on what the true story of the human intelligence explosion ends up being. Whichever of these is closer to the truth:

a) the evolution of individually smarter humans using general reasoning ability was the key factor

b) the evolution of better social learners and the accumulation of cultural knowledge was the key factor

...either way, there's no reason to think that AGI has to follow the same kind of path that humans did. I found an earlier post on the Henrich model of the evolution of intelligence, Musings on Cumulative Cultural Evolution and AI. I agree with Rohin Shah's takeaway on that post:

I actually don't think that this suggests that AI development will need both social and asocial learning: it seems to me that in this model, the need for social learning arises because of the constraints on brain size and the limited lifetimes. Neither of these constraints apply to AI -- costs grow linearly with "brain size" (model capacity, maybe also training time) as opposed to superlinearly for human brains, and the AI need not age and die. So, with AI I expect that it would be better to optimize just for asocial learning, since you don't need to mimic the transmission across lifetimes that was needed for humans.

Comment by Robbo on Yudkowsky and Christiano discuss "Takeoff Speeds" · 2021-11-24T12:02:52.232Z · LW · GW

The core part of Ajeya's model is a probability distribution over how many OOMs of compute we'd need with today's ideas to get to TAI / AGI / APS-AI / AI-PONR / etc.

I didn't know the last two acronyms despite reading a decent amount of this literature, so thought I'd leave this note for other readers. Listing all of them for completeness (readers will of course know the first two):

TAI: transformative AI

AGI: artificial general intelligence

APS-AI: Advanced, Planning, Strategically aware AI [1]

AI-PONR: AI point of no return [2]

[1] from Carlsmith, which Daniel does link to

[2] from Daniel, which he also linked

Comment by Robbo on Yudkowsky and Christiano discuss "Takeoff Speeds" · 2021-11-23T18:11:10.424Z · LW · GW

In general, I don't yet see a strong reason to think that our general brain architecture is the sole, or potentially even primary reason why we've developed civilization, discontinuous with the rest of the animal kingdom. A strong requirement for civilization is the development of cultural accumulation via language, and more specifically, the ability to accumulate knowledge and technology over generations.

In The Secret of Our Success, Joe Henrich argues that without our stock of cultural knowledge, individual humans are not particularly more generally intelligent than apes. (Neanderthals may very well have been more generally intelligent than humans - and indeed, their brains were bigger than ours.)

And, he claims, to the extent that individual humans are now especially intelligent, this was because of culture-driven natural selection. For Henrich, the story of human uniqueness is a story of a feedback loop: increased cultural know-how, which drives genetic selection for bigger brains and better social learning, which leads to increased cultural know-how, which drives genetic selection for bigger brains….and so forth, until you have a very weird great ape that is weak, hairless, and has put a flag on the moon.

Note: this evolution + culture feedback loop is still a huge discontinuity that led to massive changes in relatively short evolutionary time!

Just having a generalist brain doesn't seem like enough; for example, could there have been a dolphin civilization?

Henrich speculates that a bunch of idiosyncratic features came together to launch us into the feedback loop that led to us being a cultural species. Most species, including dolphins, do not get onto this feedback loop because of a "startup" problem: bigger brains will give a fitness advantage only up to a certain point, because individual learning can only be so useful. For there to be further selection for bigger brains, you need a stock of cultural know-how (cooking, hunting, special tools) that makes individual learning very important for fitness. But, to have a stock of cultural know-how, you need big brains.

Henrich speculates that humans overcame the startup problem due to a variety of factors that came together when we descended from the trees and started living on the ground. The important consequences of a species being on the ground (as opposed to in the trees):

  1. It frees up your hands for tool use. Captive chimps, which are more “grounded” than wild chimps, make more tools.
  2. It’s easier for you to find tools left by other people.
  3. It’s easier for you to see what other people are doing and hang out with them. (“Hang out” being inapt, since that’s precisely not what you’re doing).
  4. You need to group up with people to survive, since there are terrifying predators on the ground. Larger groups offer protection; these larger groups will accelerate the process of people messing around with tools and imitating each other.

Larger groups also produce new forms of social organization. Apparently, in smaller groups of chimps, the reproductive strategy that every male tries to follow is “fight as many males as you can for mating opportunities.” But in a larger group, it becomes better for some males to try to pair bond – to get multiple reproductive opportunities with one female, by hanging around her and taking care of her.

Pair bonding in turn allows for more kinship relationships. Kinship relationships mean you grow up around more people; this accelerates learning. Kinship also allows for more genetic selection for big-brained, slow-developing learners: it becomes less prohibitively costly to give birth to big-brained, slow-growing children, because more people are around to help out and pool food resources.

This story is, by Henrich's own account, quite speculative. You can find it in Chapter 16 of the book.

Comment by Robbo on You are probably underestimating how good self-love can be · 2021-11-15T21:58:02.316Z · LW · GW

Thanks so much for this!

  1. Curious about

For example, I was aiming to pursue a PhD in machine learning, partly because I thought it would make me worthwhile. When I felt worthwhile I stopped that; I was able to think more freely about which strategy looked best according to my values.

If you have a chance I’d love to hear more about what this process looked like. What did thinking something would make you worthwhile feel like? Do you think that self-love helped you care less about the status of a PhD? Or was it some other mechanism? In general, how self-love stuff intersects with career planning sounds like an important sub-topic.

  2. Kendrick Lamar loves himself.

Comment by Robbo on Covid 11/11: Winter and Effective Treatments Are Coming · 2021-11-12T11:18:24.772Z · LW · GW

I'm curious about this passage:

Yes, yes, all of that is good, that is an excellent list of some of the downsides one should measure. It reminds me of nothing so much as an Effective Altruist trying to measure the advantages of an intervention to find things they can measure and put into a report. Yes, they will say, here is a thing called ‘development’ so it counts as something that can be put into the utility function and we can attach a number, excellent, very good. Then we can pretend that this is a full measure of how things actually work, and still imply that anyone who doesn’t give enough money is sort of kind of like Mega-Hitler, especially given all these matching funds.

This seems like it's alluding to a more detailed, strongly-held, and (if correct) damning assessment of (at least early-years) effective altruism. I'd like to understand that position more. Have you written about this elsewhere?

Comment by Robbo on Using Brain-Computer Interfaces to get more data for AI alignment · 2021-11-11T21:56:33.352Z · LW · GW

fixed the "Samberg" typo - thanks!

Comment by Robbo on Using Brain-Computer Interfaces to get more data for AI alignment · 2021-11-10T15:28:40.525Z · LW · GW

Thanks for your thoughts! I think I'm having a bit of trouble unpacking this. Can you help me unpack this sentence:

"But I our success rides on overcoming these arguments and designing AI where more is better."

What is "more"? And what are "these arguments"? And how does this sentence relate to the question of whether explain data makes us put place more or less weight on similar-to-introspection hypotheses?

Comment by Robbo on Rafael Harth's Shortform · 2021-11-09T19:04:48.569Z · LW · GW

You might be interested in this post by Harri Besceli, which argues that "the best and worst experiences you had last week probably happened when you were dreaming".

Eric Schwitzgebel has also written that philosophical hedonists, if consistent, would care more about the quality of dream experiences: https://schwitzsplinters.blogspot.com/2012/04/how-much-should-you-care-about-how-you.html

Comment by Robbo on Using Brain-Computer Interfaces to get more data for AI alignment · 2021-11-09T11:01:06.039Z · LW · GW

Even if we were able to get good readings from insula & cingulate cortex & amygdala et alia, do you have thoughts on how and whether we could "ground" these readings? Would we calibrate on someone's cringe signal, then their gross signal, then their funny signal - matching various readings to various stimuli and subjective reports?

Comment by Robbo on Using Brain-Computer Interfaces to get more data for AI alignment · 2021-11-08T11:01:31.116Z · LW · GW

Hi Steven, thanks!

  1. On terminology, I agree.

Wait But Why, which of course is not an authoritative neuroscience source, uses "scale" to mean "how many neurons can be simultaneously recorded". But then it says fMRI and EEG have "high scale" but "low spatial resolution" - somewhat confusing, since low spatial resolution means that fMRI and EEG don't record any individual neurons. So my gloss on "scale" is more like what WBW is actually talking about, and is probably better called "coverage". And then it's best to just talk about "number of simultaneously recorded [individual] neurons" without giving that a shorthand--and only talk about that when we really are recording individual neurons. That's what Stevenson and Kording (2011) do in "How advances in neural recording affect data analysis".

  2. Good call on Kernel, I'll edit to reflect that.

  3. Yep - invasive techniques are necessary - but not sufficient, as the case of ECoG shows.

Comment by Robbo on You can talk to EA Funds before applying · 2021-10-01T15:12:26.407Z · LW · GW

I scheduled a conversation with Evan based on this post and it was very helpful. If you're on the fence, do it! For me, it was helpful as a general career / EA strategy discussion, in addition to being useful for thinking about specifically Long-Term Future Fund concerns.

And I can corroborate that Evan is indeed not that intimidating.

Comment by Robbo on A review of Steven Pinker's new book on rationality · 2021-09-29T15:33:18.247Z · LW · GW

"I'm tempted to recommend this book to people who might otherwise be turned away by Rationality: From A to Z."

Within the category of "recent accessible introduction to rationality", would you recommend this Pinker book, or Julia Galef's "Scout Mindset"? Any thoughts on the pros and cons of each, or who would benefit more from each?

Comment by Robbo on Brain-Computer Interfaces and AI Alignment · 2021-09-27T13:10:49.088Z · LW · GW

Thanks for collecting these things! I have been looking into these arguments recently myself, and here are some more relevant things:

  1. EA forum post "A New X-Risk Factor: Brain-Computer Interfaces" (August 2020) argues for BCI as a risk factor for totalitarian lock-in.
  2. In a comment on that post, Kaj Sotala excerpts a section of Sotala and Yampolskiy (2015), "Responses to catastrophic AGI risk: a survey". This excerpt contains links to many other relevant discussions:
    1. "De Garis [82] argues that a computer could have far more processing power than a human brain, making it pointless to merge computers and humans. The biological component of the resulting hybrid would be insignificant compared to the electronic component, creating a mind that was negligibly different from a 'pure' AGI. Kurzweil [168] makes the same argument, saying that although he supports intelligence enhancement by directly connecting brains and computers, this would only keep pace with AGIs for a couple of additional decades.
    2. "The truth of this claim seems to depend on exactly how human brains are augmented. In principle, it seems possible to create a prosthetic extension of a human brain that uses the same basic architecture as the original brain and gradually integrates with it [254]. A human extending their intelligence using such a method might remain roughly human-like and maintain their original values. However, it could also be possible to connect brains with computer programs that are very unlike human brains and which would substantially change the way the original brain worked. Even smaller differences could conceivably lead to the adoption of 'cyborg values' distinct from ordinary human values [290].
    3. "Bostrom [49] speculates that humans might outsource many of their skills to non-conscious external modules and would cease to experience anything as a result. The value-altering modules would provide substantial advantages to their users, to the point that they could outcompete uploaded minds who did not adopt the modules. [...]
    4. "Moravec [194] notes that the human mind has evolved to function in an environment which is drastically different from a purely digital environment and that the only way to remain competitive with AGIs would be to transform into something that was very different from a human."
  3. The sources in question from the above are:
    1. de Garis H 2005 The Artilect War: Cosmists vs Terrans (Palm Springs, CA: ETC Publications)
    2. Kurzweil, R. (2001). Response to Stephen Hawking. Kurzweil Accelerating Intelligence. September, 5.
    3. Sotala K and Valpola H 2012 Coalescing minds Int. J. Machine Consciousness 4 293–312
    4. Warwick K 2003 Cyborg morals, cyborg values, cyborg ethics Ethics Inf. Technol. 5 131–7
    5. Bostrom N 2004 The future of human evolution ed C Tandy pp 339–71 Two Hundred Years After Kant, Fifty Years After Turing (Death and Anti-Death vol 2)
    6. Moravec H P 1992 Pigs in cyberspace www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1992/CyberPigs.html
  4. Here's a relevant comment on that post from Carl Shulman, who notes that FHI has periodically looked into BCI in unpublished work: "I agree the idea of creating aligned AGI through BCI is quite dubious (it basically requires having aligned AGI to link with, and so is superfluous; and could in any case be provided by the aligned AGI if desired long term)"

Comment by Robbo on Pleasure and Pain are Long-Tailed · 2021-09-09T12:37:36.697Z · LW · GW

Thank you for writing about this. It's a tremendously interesting issue. 

I feel qualitatively more conscious, which I mean in the "hard problem of consciousness" sense of the word. "Usually people say that high-dose psychedelic states are indescribably more real and vivid than normal everyday life." Zen practitioners are often uninterested in LSD because it's possible to reach states that are indescribably more real and vivid than (regular) real life without ever leaving real life. (Zen is based around being totally present for real life. A Zen master meditates eyes open.) It is not unusual for proficient meditators to describe mystical experiences as at least 100× more conscious than regular everyday experience.

I'm very curious about the issue of what it means to say that one creature is "more conscious" than another--or, that one person is more conscious while meditating than while surfing Reddit. Especially if this is meant in the sense of "more phenomenally conscious". (I take it that you do mean "more phenomenally conscious", and that's what you are saying by invoking the hard problem. But let me know if that's not right). Can you say more about what you mean? Some background:

Pautz (2019) has been influential on my thinking about this kind of talk of 'more conscious' or 'level of consciousness' or 'degree of consciousness'. Pautz distinguishes between many consciousness-related things that certainly do come in degrees.

On the one hand, we have certain features of the particular character of phenomenally conscious experiences:

  • Intensity level (193)
    • A whisper is less intense than a heavy metal concert; faint pink is less intense than bright red. And of course, certain pleasures and pains are more intense than others.
  • Complexity level
    • The whiff of mint is a 'simpler' experience than visual experience of a bustling London street
  • Determinacy level
    • A tomato in the center of vision is represented more determinately than a tomato in the periphery
  • Access level
    • If you think that phenomenally conscious experiences can be more or less 'accessed', then there might be some experiences that are not accessed at all, versus those that are fully accessed--e.g. something right in front of you that you are paying full attention to.

And then there is a 'global' feature of a creature's phenomenal consciousness:

  • Richness of experiential repertoire: the ‘number’ of distinct experiences (types and tokens) the creature has the capacity to have (194). Adult humans probably have a greater richness of experiential repertoire than a worm (if indeed worms are phenomenally conscious).

In light of this, my questions for you:

  1. Along which of these dimensions are you 'more' conscious when meditating? Would love to hear more. (I'm guessing: intensity, complexity, and access?)
  2. Do you think there is some further way in which you are 'more conscious', that is not cashed out in these terms? (Pautz does not, and he uses this to criticize Integrated Information Theory)

Finally: this post has inspired me to be more ambitious about exploring the broader regions of consciousness space for myself. ("Our normal waking consciousness, rational consciousness as we call it, is but one special type of consciousness, whilst all about it, parted from it by the filmiest of screens, there lie potential forms of consciousness entirely different." -William James). And for that, I am grateful.

Comment by Robbo on My Productivity Tips and Systems · 2021-08-11T18:18:31.906Z · LW · GW

Tons of handy stuff here, thanks!

I love the sound of Cold Turkey. I use Freedom for my computer, and I use it less than I otherwise would because of an anxious feeling - almost certainly exaggerated, but still with a basis in reality - that whenever I start a full block it is a Really Big Deal and I might accidentally screw myself over, for example if I suddenly remember I have to do something else. (Say, I'm looking for houses and it turns out I actually need to go look something up.) But with Cold Turkey, I'd just block stuff a lot more freely without the anxiety - I'd know that if I really need something I can unlock it, all while having the calm that comes from Twitter not being immediately accessible.

I also find the Freedom interface really terrible and that trivial inconvenience can keep me from starting blocks.

How often would you say you spend time-you-don't-endorse after unlocking something with the N random characters? Is it pretty effective at keeping you in line?

Comment by Robbo on Willa's Shortform · 2021-05-20T15:45:19.546Z · LW · GW

I enjoyed reading this and skimming through your other shortforms. I’m intrigued by this idea of using the short form as something like a journal (albeit a somewhat public facing one).

Any tips, if I might want to start doing this? How helpful have you found it? Any failure modes?

Comment by Robbo on Who has argued in detail that a current AI system is phenomenally conscious? · 2021-05-15T17:12:50.416Z · LW · GW

Jonathan Simon is working on such a project: "What is it like to be AlphaGo?"

Comment by Robbo on TurnTrout's shortform feed · 2021-05-15T03:47:17.630Z · LW · GW

[disclaimer: not an expert, possibly still confused about the Baldwin effect]

A bit of feedback on this explanation: as written, it didn’t make clear to me what makes it a special effect. “Evolution selects for genome-level hardcoding of extremely important learned lessons.” As a reader I was like, what makes this a special case? If it’s a useful lesson, then of course evolution would tend to select for knowing it innately - that does seem handy for an organism.

As I understand it, what is interesting about the Baldwin effect is that such hard coding is selected for more among creatures that can learn, and indeed because of learning. The learnability of the solution makes it even more important to be endowed with the solution. So individual learning, in this way, drives selection pressures. Dennett’s explanation emphasizes this - curious what you make of his?

https://ase.tufts.edu/cogstud/dennett/papers/baldwincranefin.htm

Comment by Robbo on Open and Welcome Thread - May 2021 · 2021-05-14T19:39:57.157Z · LW · GW

I'm very intrigued by "prosthetic human voice meant for animal use"! Not knowing much about animal communication or speech in general, I don't even know what this means. Could you say a bit more about what that would be?

Comment by Robbo on Open and Welcome Thread - May 2021 · 2021-05-14T19:37:23.740Z · LW · GW

Welcome, David! What sort of math are you looking to level up on? And do you know what AI safety/related topics you might explore? 

Comment by Robbo on Interview with Christine M. Korsgaard: Animal Ethics, Kantianism, Utilitarianism · 2021-05-08T17:32:18.634Z · LW · GW

Thanks for this! People interested in the claim (which Korsgaard takes to be a deficiency of utilitarianism) that for utilitarians "people and animals don’t really matter at all; they are just the place where the valuable things happen" might want to read Richard Yetter Chappell's [1] paper "Value Receptacles" (pdf). It's an exploration of what this claim could even mean, and a defense of utilitarianism in light of it.

[1] Not incidentally, a long-time effective altruist. Whose blog is great.

Comment by Robbo on MrGus99's Shortform · 2021-05-07T19:18:34.222Z · LW · GW

Interesting - what sort of thing do you use this for? What sort of thing have you done after rolling a 2?

I imagine these must be things that are in some sense 'optional', since (quite literally) the odds are you will not end up doing them.