Posts

Contra Infinite Ethics, Utility Functions 2022-02-02T22:32:24.897Z

Comments

Comment by kyleherndon on ARC is hiring theoretical researchers · 2023-06-13T07:44:33.294Z · LW · GW

I think the problem of actually specifying that an AI should do something physical, in reality, like "create a copy of a strawberry down to the cellular but not the molecular level", and not just manipulate its own sensors into believing it perceives itself achieving that (even if it accomplishes real things in the world to do so), is a problem that is very deeply related to physics, and is almost certainly dependent on the physical laws of the world more than on some abstract, disembodied notion of an agent.

Comment by kyleherndon on Shutting down AI is not enough. We need to destroy all technology. · 2023-04-02T00:51:38.150Z · LW · GW

You're thinking much too small; this only stops things occurring that are causally *downstream* of us. Things will still occur in other timelines, and we should prevent those things from happening too. I propose we create a "hyperintelligence" that acausally trades across timelines or invents time travel to prevent anything from happening in any other universe or timeline as well. Then we'll be safe from AI ruin.

Comment by kyleherndon on GPT-4 · 2023-03-15T02:30:20.390Z · LW · GW

Thanks for the great link. Fine-tuning leading to mode collapse wasn't the core issue underlying my main concern/confusion (intuitively, that makes sense). paulfchristiano's reply leaves me now mostly unconfused, especially with the additional clarification from you. That said, I am still concerned; this makes RLHF seem very 'flimsy' to me.

Comment by kyleherndon on GPT-4 · 2023-03-15T02:25:30.574Z · LW · GW

I was also thinking the same thing as you, but after reading paulfchristiano's reply, I now think the claim is that you can use the model to generate probabilities of next tokens, and that those next tokens are correct about as often as those probabilities indicate. This is to say it's not referring to the main way of interfacing with GPT-n (wherein a temperature schedule determines how often it picks something other than the option with the highest assigned probability), and it's not asking the model "in words" for its predicted probabilities.
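For concreteness, here is a minimal sketch of the distinction I mean, using made-up toy logits rather than any real GPT API: reading the model's next-token distribution directly versus sampling from it with a temperature.

```python
import numpy as np

def next_token_distribution(logits):
    """Softmax over raw logits: the probabilities the model assigns to each candidate next token."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

def sample_with_temperature(logits, temperature=1.0, rng=np.random.default_rng()):
    """The usual interface: sample a token, where temperature controls how often
    something other than the highest-probability option gets picked."""
    probs = next_token_distribution(logits / max(temperature, 1e-8))
    return rng.choice(len(probs), p=probs)

logits = np.array([2.0, 1.0, 0.1])           # toy logits for three candidate tokens
print(next_token_distribution(logits))       # calibrated probabilities, read off directly
print(sample_with_temperature(logits, 0.7))  # index of one sampled token
```

The calibration claim, as I understand it, is about the first kind of number, not about anything you get by asking the sampled model "in words."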

Comment by kyleherndon on GPT-4 · 2023-03-14T17:50:00.569Z · LW · GW

GPT-4 can also be confidently wrong in its predictions, not taking care to double-check work when it’s likely to make a mistake. Interestingly, the base pre-trained model is highly calibrated (its predicted confidence in an answer generally matches the probability of being correct). However, through our current post-training process, the calibration is reduced.

What??? This is so weird and concerning.

Comment by kyleherndon on Why and How to Graduate Early [U.S.] · 2023-01-29T09:46:43.134Z · LW · GW

I graduated college in four years with two bachelor's degrees and a master's. Some additions:

AP Tests:

You don't need to take the AP course to take the test at all. This is NOT a requirement. If your high school doesn't offer the test, you may need to take it at another school, though. Also unfortunately, if it is the same as when I did this, your school probably only gets test fees waived for students who took the course, so you may need to pay for the test yourself. https://apstudents.collegeboard.org/faqs/can-i-register-ap-exam-if-my-school-doesnt-offer-ap-courses-or-administer-ap-exams

Proficiency Tests:

The college I went to offered "Proficiency Tests" for many courses (mostly targeted at freshmen), which were effectively final exams you could take, and if you passed with a sufficient grade you got credit for the course. If you are good at studying on your own, this will probably be significantly less work than taking the course, and it is especially effective for courses that you are not interested in.

Taking More Classes:

I literally planned my entire course load for all four years well before I got on campus (with built-in flexibility for when courses were full, or if I wanted to leave a couple of wildcards in for fun or whatever). This matters because, if you're planning something like what I was doing, you need to avoid having all your hard classes in the same semester and then burning out.

Comment by kyleherndon on Have we really forsaken natural selection? · 2023-01-12T00:33:26.916Z · LW · GW

The big accusation, I think, is of sub-maximal procreation. If we cared at all about the genetic proliferation that natural selection wanted for us, then this time of riches would be a time of fifty-child families, not one of coddled dogs and state-of-the-art sitting rooms.

 

Natural selection, in its broadest, truest, (most idiolectic?) sense, doesn’t care about genes. 

 

So what did natural selection want for us? What were we selected for? Existence.

I think there might be a meaningful way to salvage the colloquial concept of "humans have overthrown natural selection."

Let [natural selection] refer to the concept of trying to maximize genetic fitness, specifically maximizing the spread of genes. Let [evolution] refer to the concept of trying to maximize 'existence' or persistence. There's sort of a hierarchy of optimizers, [evolution] > [natural selection] > humanity, where you could make the claim that humanity has "overthrown our boss and taken their position", such that humanity reports directly to [evolution] now instead of having [natural selection] as our middle-manager boss. One can make the argument that ideas in brains are the preferred substrate over DNA now, as an example of this model.

This description also makes the warning with respect to AI a little more clear: any box or "boss" is at risk of being overthrown.

Comment by kyleherndon on What's the deal with AI consciousness? · 2023-01-11T21:07:26.789Z · LW · GW

(This critique contains not only my own critiques, but also critiques I would expect others on this site to have)

First, I don't think you've added anything new to the conversation. Second, I don't think what you have mentioned even provides a useful summary of the current state of the conversation: it is neither comprehensive nor the strongest version of the various arguments already made. Also, I would prefer to see less of this sort of content on LessWrong. Part of that might be because it is written for a general audience, and LessWrong is not much like a general audience.

This is an example of something that seems to push the conversation forward slightly, by collecting all the evidence for a particular argument and by reframing the problem as different, specific, answerable questions. While I don't think this actually "solves" the hard problem of consciousness, as Halberstadt notes in the comments, I think it could help clear up some confusions for you. Namely, I think it is most meaningful to start from a vaguely panpsychist model of "everything is conscious," where what we mean by consciousness is "the feeling of what it is like to be," and then move on to talk about what sorts of consciousness we care about: namely, consciousness that looks remotely similar to ours. In this framework, AI is already conscious, but I don't think there's any reason to care about that.

More specifics:

Consciousness is not, contrary to the popular imagination, the same thing as intelligence.

I don't think that's a popular opinion here. And while I think some people might just have a cluster of "brain/thinky" words in their head when they don't think about the meaning of things closely, I don't think this is a popular opinion of people in general unless they're really not thinking about it.

But there’s nothing that it’s like to be a rock

Citation needed.

But that could be very bad, because it would mean we wouldn’t be able to tell whether or not the system deserves any kind of moral concern.

Assuming we make an AI conscious, and that consciousness is actually something like what we mean by it more colloquially (human-like, not just panpsychistly), it isn't clear that this makes it a moral concern. 

There should be significantly more research on the nature of consciousness.

I think there shouldn't. At least not yet. The average intelligent person thrown at this problem produces effectively nothing useful, in my opinion. Meanwhile, I feel like there is a lot of lower hanging fruit in neuroscience that would also help solve this problem more easily later in addition to actually being useful now.

In my opinion, you choose to push for more research when you have questions you want answered. I do not consider humanity to have actually phrased the hard problem of consciousness as a question, nor do I think we currently have the tools to notice an answer if we were given one. I think there is potentially useful philosophy to do around, but not on, the hard problem of consciousness, in terms of actually asking a question or learning how we could recognize an answer.

Researchers should not create conscious AI systems until we fully understand what giving those systems rights would mean for us.

They cannot choose not to create conscious AI systems, because they don't know what consciousness is, so this is unactionable and useless advice.

AI companies should wait to proliferate AI systems that have a substantial chance of being conscious until they have more information about whether they are or not.

Same thing as above. Also, the prevailing view here is that the much more important issue is that AI will kill us, and if we're theoretically spending (social) capital to make these people care about things, the not-killing-us part is astronomically more important.

AI researchers should continue to build connections with philosophers and cognitive scientists to better understand the nature of consciousness

I don't think you've made strong enough arguments to support this claim given the opportunity costs. I don't have an opinion on whether or not you are right here.

Philosophers and cognitive scientists who study consciousness should make more of their work accessible to the public

Same thing as above.

Nitpick: there's something weird going on with your formatting because some of your recommendations show up on the table of contents and I don't think that's intended.

Comment by kyleherndon on Towards Hodge-podge Alignment · 2022-12-20T03:16:41.888Z · LW · GW

I haven't quite developed an opinion on the viability of this strategy yet, but I appreciate that you produced a plausible-sounding scheme that I, a software engineer rather than a mathematician, feel like I could actually probably contribute to. I would like to request that people come up with MORE proposals along this dimension, and/or that readers of this comment point me to other such plausible proposals. I think I've seen some people consider potential ways for non-technical people to help, but I feel like I've seen disproportionately few ways for the technically competent but not theoretically/mathematically minded to help.

Comment by kyleherndon on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-11T23:30:40.730Z · LW · GW

If I discover something first, our current culture doesn't assign much value to the second person finding it; that's why I mentioned exploration as not positive-sum. Avoiding death literally requires free energy, a limited resource, but I realize that's an oversimplification at the scale we're talking about.

Comment by kyleherndon on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-10T21:09:19.984Z · LW · GW

I see. I feel like honor/idealism/order/control/independence don't cleanly decompose to these four even with a layer of abstraction, but your list was more plausible than I was expecting.

That said, I think an arbitrary inter-person interaction with respect to these desires is pretty much guaranteed to be zero or negative sum, as they all depend on limited resources. So I'm not sure what aligning on the values would mean in terms of helping cooperation.

Comment by kyleherndon on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-10T00:40:04.074Z · LW · GW

I don't think most people are consciously aware, but I think most people are unconsciously aware, that "it is merely their priorities that are different, rather than their fundamental desires and values." Furthermore, our society largely looks structured such that only the priorities are different, but the priorities differ significantly enough because of the human-sparseness of value-space.

Comment by kyleherndon on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-10T00:27:52.363Z · LW · GW

I am skeptical of psychology research in general, but my cursory exploration has suggested to me that it is potentially reasonable to think there are 16. My best estimate is probably that there literally are 100 or more, but that most of those dimensions largely don't have big variance/recognizable gradations/are lost in noise. I think humans are reasonably good at detecting 1 part in 20, and that the 16 estimate above is a reasonable ballpark, meaning I believe that 20^16 = 6.5E20 is a good approximation of the number of states in the discretized value space. With fewer than 1E10 humans, this would predict very few exact collisions.
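As a quick sanity check of that estimate (using only the assumed numbers above, nothing measured):

```python
# 16 value dimensions, each distinguishable at roughly 1 part in 20.
dimensions = 16
levels_per_dimension = 20
states = levels_per_dimension ** dimensions          # ~6.55e20 discretized value states
humans = 1e10                                        # generous bound on the population

print(f"{states:.2e} states, {humans / states:.1e} people per state on average")
# Birthday-problem estimate of exact collisions: humans^2 / (2 * states) ~ 0.08,
# i.e. most likely zero or a handful of exact value-space collisions worldwide.
```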

I would be really dubious of any model that suggests there are fewer than 5. Do you have any candidates for systems of 3 or 4 fundamental desires?

Comment by kyleherndon on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-10T00:17:06.908Z · LW · GW

 I'm not sure why adjacency has to be "proper"; I'm just talking about social networks, where people can be part of multiple groups and transmit ideas and opinions between them.  

I approximately mean something as follows:

Take the vector-value model I described previously. Consider some distance metric (such as the L2 norm), D(a, b), where a and b are humans/points in value-space (or mind-space, where a mind can "reject" an idea by having it be insufficiently compatible). Let k be some threshold for communicability of a particular idea. Assume that once an idea is communicated, it is communicated in full fidelity (you can replace this with a probabilistic or imperfect communication model, but it's not necessary to illustrate my point). If you create the graph amongst all humans in value-space, where an edge exists between a and b iff D(a,b) < k, it's not clear to me that this graph is connected, or even has many edges at all. If the graph is disconnected (or nearly edgeless) for a particular idea/k pair, then the idea is unlikely to undergo an information cascade, because additional effort is needed in many locations to cross the inferential gaps.
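Here's a toy sketch of that graph model, with all numbers made up purely for illustration (assumes numpy and scipy):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
people = rng.uniform(-1, 1, size=(1000, 16))  # 1000 people as points in a 16-d value-space

def cascade_structure(points, k):
    """Edges exist between people closer than k (L2 norm); return (#components, #edges)."""
    distances = squareform(pdist(points))          # pairwise Euclidean distances
    adjacency = ((distances < k) & (distances > 0)).astype(int)
    n_components, _ = connected_components(adjacency, directed=False)
    return n_components, int(adjacency.sum() // 2)

for k in (1.5, 2.5, 3.5):
    components, edges = cascade_structure(people, k)
    print(f"k={k}: {edges} edges, {components} connected components")
# For small k the graph fragments into many components, so an idea below that
# communicability threshold cannot cascade through the whole population.
```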

As you say, the ability to coordinate large-scale action by decree requires a high place in a hierarchy.  With the internet, though, it doesn't take authority just to spread an idea, as long it's one that people find valuable or otherwise really like. 

Somewhat related, somewhat tangentially: I think the internet itself is organized hierarchically as nested "echo chambers" or something similar, where the smallest ones are what we currently call echo chambers. This means you can place any idea/concept somewhere on the hierarchy of internet communities, and only ideas high on the hierarchy can effectively spread messages/information cascades widely.

Is there anywhere you can concretely point to in my model(s) you would disagree with?

if there's anyone in this community who recognizes the potential of facilitating communication.  

I agree this is (potentially) high leverage. My strategy has generally been that expressing ideas with greater precision more greatly aids communication. An arbitrary conversation is unlikely to transmit the full precision of your idea, but it becomes less likely that you transmit something you don't mean, and that makes a huge difference. The domain of politics seems mostly littered with extremely low-precision communication and, in particular, often deceptively precise communication, wherein wording is chosen between two concepts to allow any error correction on behalf of a listener to be in favor of the communicator. Is there any reason why you want to specifically target politics instead of generally trying to make the human race more sane, such as what Yudkowsky did with the Sequences?

Comment by kyleherndon on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-06T21:19:32.149Z · LW · GW

I was thinking of issues like the economy, healthcare, education, and the environment.

I disagree, and will call any national or global political issues high-hanging fruit. I believe there is low-hanging fruit at the local level, but coordination problems of a million or more people are hard.

They can influence the people ideologically adjacent to them, who can influence the people adjacent to them, et cetera.  

In my experience, it's not clear that there is really much "proper adjacency." Sufficiently high-dimensional spaces make any sort of clustering ambiguous and messy, if even possible. Even more specifically, I haven't seen many ideas in politics that spread quickly that weren't also coordinated from (near) the top, suggesting to me that information cascades in this domain are impractical.

I think that's largely what is even meant by hierarchical structures. Small/low elements have potentially rich, complicated inner lives, but have very little signal they can send upwards/outwards. High/large structures have potentially bureaucratically or legally constrained action spaces, but their actions have wide and potentially large influences.

So far as I can tell, the tools I've accumulated for this endeavor appear to be helping the people around me a great deal.

Great. Keep on doing it, then.

My message for everyone else.

 It starts with expressing as simply as possible what matters most.  It turns out there is a finite number of concepts that describe what people care about.  

Say there are 100 fundamental desires, and all desires stem from these 100 fundamental desires. Each can still take on any number from -1 to 1, allowing a person to care about each of these things in different proportions. Even if we restrict the values to 0 to 1, you still get conflict because what is most important to one person is not what's most important to another, causing real value divergences.

Is there another approach to making the world a better place without changing how humans think, that I'm unaware of?  

I can think of some that you didn't explicitly mention.

  • You can make the world just a slightly better place by normal means, trying to be kind, etc.
  • You can have kids, and teach them a better way to think while they're still especially pliable, and ignore trying to teach old dogs new tricks
    • Maximize your inclusive genetic fitness, live a long life, make sure your ideas are such good ones that your kids will teach their kids and eventually outlive and out-compete inferior ideas
  • You can change how humans think, but do it in some domain other than politics

For what it's worth, I also largely agree with the things you said and your original post. At the point where the Wanderer contributed, I guessed both how the story would end and the worse compromise the Wanderer mentioned. I guess I especially agree with your target. It's not clear to me that I agree with your methods, after having spent a fair deal of time on this sort of problem myself. That said, it's extremely likely that you have real skill advantages in this domain over me. Still, I think any premise that begins with "the economy, healthcare, education, and the environment are low-hanging fruit in politics" is one where you get burned and waste time.

Comment by kyleherndon on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-04T08:05:53.510Z · LW · GW

I don't think there are many potential negative consequences to trying. My response wasn't a joke so much as taking issue with

It is apparent to me that making human politics more constructive is a low-hanging fruit

I think it really, really is not low-hanging fruit. The rights-and-personhood line seems like quite a reasonable course of discussion to go down, but you're frequently talking to people who don't want to apply reason, at least not at the level of conversation.

Religion is a "reasonable choice" in that you buy a package that's pretty solid and defended by a conglomerate, with the intent that you defend it and get some defense back. I don't think you're going to get far without dismantling institutions such as religions, and I don't think your process is sufficient to dismantle those institutions.

Many people have effectively made the decision "you are not in my tribe, so I will not engage with you in a productive way, because I need to assume you are deceiving me." I think amongst any parties that aren't pre-opposed to one another, looking for win-wins is the default, sane thing that basically everyone does all the time. The problem is that all coordination problems are downstream of effective communication, and there are many people with whom you will not communicate effectively.

The most likely negative consequence is that you waste your time, and frankly, I don't think you'll be the one to solve this, because I don't think there are win-wins on this subject, or on a good number of other subjects in politics.

Comment by kyleherndon on The Village and the River Monsters... Or: Less Fighting, More Brainstorming · 2022-10-03T23:22:16.650Z · LW · GW

Great, now solve pro-choice vs pro-life.

Comment by kyleherndon on Why do People Think Intelligence Will be "Easy"? · 2022-09-12T18:19:12.528Z · LW · GW

I think most people would agree that at some point there are likely to be diminishing returns. My view, and I think the prevailing view on LessWrong, is that the biological constraints you mentioned are actually huge constraints that silicon-based intelligence won't/doesn't have, and that the lack of these constraints will push the point of diminishing returns far past humans.

Comment by kyleherndon on Let's See You Write That Corrigibility Tag · 2022-09-10T01:33:28.842Z · LW · GW

You can find it here. https://www.glowfic.com/replies/1824457#reply-1824457

I would describe it as extremely minimal spoilers as long as you read only the particular post and not preceding or later ones. The majority of the spoilerability is knowing that the content of the story is even partially related, which you would already learn by reading this post. The remainder of the spoilers is some minor characterization.

Comment by kyleherndon on Let's See You Write That Corrigibility Tag · 2022-09-10T01:23:41.501Z · LW · GW
Comment by kyleherndon on What are the Limits on Computability? · 2022-08-20T23:02:54.911Z · LW · GW

https://en.wikipedia.org/wiki/Limits_of_computation

Great relevant wikipedia page

Comment by kyleherndon on How do you get a job as a software developer? · 2022-08-15T21:37:09.156Z · LW · GW

At the same time, it's basically the only filtering criterion provided besides "software developer job." Having worked a few different SWE jobs, I know that some company cultures which people love are cultures I hate, and vice versa. I would point someone in completely different directions based on their response. Not because I think it's likely they'd get their multidimensional culture preferences communicated exactly and perfectly, but because the search space is so huge that it's good to at least have an estimator for how to order what things to look into.

Comment by kyleherndon on How do you get a job as a software developer? · 2022-08-15T18:42:59.907Z · LW · GW

I don't have strong preferences about what the company does. I mostly care about working with a team that has a good culture.

This is pretty subjective, and I would find it helpful to know what sort of culture you're looking for.

Comment by kyleherndon on Anti-squatted AI x-risk domains index · 2022-08-12T16:51:01.524Z · LW · GW

so I have forwarded all of these domains to my home page

On my end this does not appear to be working.

Also, nice work.

Comment by kyleherndon on The Burden of Worldbuilding · 2022-06-04T11:37:41.227Z · LW · GW

Disambiguation is a great feature of language, but we can also opt instead to make things maximally ambiguous with my favorite unit system: CCC. All measurements expressed with only the letter C.

Comment by kyleherndon on We're already in AI takeoff · 2022-03-10T20:28:43.641Z · LW · GW

A sketch of a solution that doesn't involve (traditional) world leaders could look like: "Software engineers get together, agree that the field is super fucked, and start imposing stronger regulations and guidelines on software, like traditional engineering disciplines use." This is a way of lowering the alignment tax in the sense that, if software engineers all have a security mindset, or have to go through a security review, there is more process and knowledge related to potential problems and a way of executing a technical solution at the last moment. However, this description is itself entirely political, not technical, yet it easily could fail to reach the awareness of world leaders or the general populace.

Comment by kyleherndon on We're already in AI takeoff · 2022-03-09T22:58:44.147Z · LW · GW

My conclusion: Let's start the meme that Alignment (the technical problem) is fundamentally impossible (maybe it is? why think you can control something supposedly smarter than you?) and that you will definitely kill yourself if you get to the point where finding a solution to Alignment is what could keep you alive. Pull a Warhammer 40k: start banning machine learning, and for that matter, maybe computers (above some level of performance) and software. This would put more humans in the loop for the same tasks we have now, which offers more opportunities to find problems with the process than the status quo, where a human can program 30 lines of C++, have it LGTM'd by one other person at Google, and then have those lines of code be used billions of times, per the input of two humans, ever.

(This meme hasn't undergone sufficient evolution, feel free to attack with countermemes and supporting memes until it evolves into one powerful enough to take over, and delay the death of the world)

"MIRI walked down this road, a faithful scout, trying to solve future problems before they're relevant. They're smart, they're resourceful, they made noise to get other people to look at the problem. They don't see a solution in sight. If we don't move now, the train will run us over. There is no technical solution to alignment, just political solutionsjust like there's no technical solution to nuclear war, just treaties and individuals like Petrov doing their best to avoid total annihilation."

Comment by kyleherndon on We're already in AI takeoff · 2022-03-09T22:07:43.589Z · LW · GW

I think there's an important distinction Valentine tries to make with respect to your fourth bullet (and if not, I will make it). You perhaps describe the right idea, but the wrong shape. The problem is more like "China and the US both have incentives to bring about AGI and don't have incentives towards safety." Yes, deflecting at the last second with some formula for safe AI will save you, but that's as stupid as jumping away from a train at the last second. Move off the track hours ahead of time, and just broker a peace between countries to not make AGI.

Comment by kyleherndon on Covid 3/3/2022: Move Along · 2022-03-03T17:52:41.988Z · LW · GW

Yes, there are those who are so terrified of Covid that they would advise practicing social distancing in the wake of nuclear Armageddon. This is an insight into that type of thinking. I do think keeping your mask on would be wise, but for obvious other reasons.

 

I saw this too and was very put off to find social distancing being mentioned in a nuclear explosion survival guide; glad I'm not the only one who noticed. I doubt many would survive (myself included) without the aid of other humans in such an apocalyptic situation, you know, like a crowded bus out of the fallout zone that I would have to turn down to follow social distancing.

Comment by kyleherndon on Shah and Yudkowsky on alignment failures · 2022-03-01T01:22:26.544Z · LW · GW

Ah, I forgot to emphasize that these were things to look into to get better. I don't claim to know EY's lineage. That said, how many people do you think are well versed in cryptography? If someone said, "I am one of very few people who is well versed in cryptography," that doesn't sound particularly wrong to me (if they are indeed well versed). I guess I don't know exactly how many people EY thinks are in this category with him, but the number of people versed enough in cryptography to, say, make their own novel and robust scheme is probably on the order of 1,000-10,000 worldwide. His phrasing would make sense to me for any fraction of the population lower than 1 in 1,000, and I think he's probably referring to a category at the size of or less than 1 in 10,000. That said, I would like to emphasize that I don't think cryptography is especially useful to this end; rather, the reason it was mentioned above was to bring up the security mindset.

Zen/mindfulness meditation generally has an emphasis on noticing concrete sensations. In particular, it might help you interject your attention at the proper level of abstraction to reroute concrete observations and sensations into your language. Also, with all of these examples, I do not claim that any individual one will be enough, but I do believe that experience with these things can help.

One fun way to learn concreteness is something I tried to exercise in this reply: use actual numbers. Fermi estimation is a skill that's relatively easy to pick up, and it makes you exercise your ability to think concretely about actual numbers that you are aware of in order to predict numbers that you are not. The process of actually referencing concrete observations to make a concrete prediction is a pattern that I have found to produce concrete thoughts which get verbalized in concrete language. :)
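As a tiny worked instance, here's the cryptography estimate from above written out with actual numbers (all of which are rough assumptions, not data):

```python
# Fermi estimate: how rare is "versed enough in cryptography to build a novel, robust scheme"?
world_population = 8e9
capable_estimates = (1e3, 1e4)   # the 1,000-10,000 range guessed above

for capable in capable_estimates:
    print(f"{capable:.0e} capable people -> about 1 in {world_population / capable:,.0f}")
# Both ends land far below 1 in 10,000, so the "one of very few people" phrasing checks out.
```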

Comment by kyleherndon on Shah and Yudkowsky on alignment failures · 2022-02-28T22:57:25.659Z · LW · GW

Cryptography was mentioned in this post in a relevant manner, though I don't have enough experience with it to advocate for it with certainty. Some lineages of physics (EY points to Feynman) try to evoke this, though its pervasiveness has decreased. You may have some luck with Zen. Generally speaking, I think if you look at the Sequences, the themes of physics, security mindset, and Zen are invoked for a reason.

Comment by kyleherndon on The metaphor you want is "color blindness," not "blind spot." · 2022-02-16T09:31:54.501Z · LW · GW

Color blindness is a blind spot in color space.

Comment by kyleherndon on Covid 2/10/22: Happy Birthday · 2022-02-10T20:12:15.526Z · LW · GW

I think you forgot to insert "Vaccinations graphs"

Comment by kyleherndon on Contra Infinite Ethics, Utility Functions · 2022-02-05T10:26:13.283Z · LW · GW

It is almost a fully general counterargument. It argues against all knowledge, but to different degrees. You can at least compare the references of symbols to finite calculations that you have already done within your own head, and then use Occam's Razor.

Comment by kyleherndon on Contra Infinite Ethics, Utility Functions · 2022-02-04T18:09:00.199Z · LW · GW

I don't accept "math" as a proper counterexample. Humans doing math aren't always correct; how do you reason about when math is correct?

My argument is less about "finite humans cannot think about infinities perfectly accurately" and more, "your belief that humans can think about infinities at all is predicated upon the assumption (which can only be taken on faith) that the symbol you manipulate relates to reality and its infinities at all."

Comment by kyleherndon on Contra Infinite Ethics, Utility Functions · 2022-02-03T01:57:55.325Z · LW · GW

By what means are you coming to your reasoning about infinite quantities? How do you know the quantities you are operating on are infinite at all?

Comment by kyleherndon on On infinite ethics · 2022-01-31T10:10:00.259Z · LW · GW

I am confused how you got to the point of writing such a thoroughly detailed analysis of the application of the math of infinities to ethics while (from my perspective) strawmanning finitism by addressing only ultrafinitism. “Infinities aren’t a thing” is only a "dicey game" if the probability of finitism is less than 100% :). In particular, there's an important distinction between being able to reference the "largest number + 1" and write it down versus referencing it as a symbol as we do, because in our referencing of it as a symbol, in the original frame, it can be encoded as a small number.

Another easy way to simply dismiss the question of infinite ethics, which I feel you overlooked, is that you can assign zero probability to our choice of mathematical axioms being exactly correct about the nature of infinities (or even probabilities).

You'll notice that both of these examples correspond to absolute certainty, and one may object that I am "not being open-minded" or something like that for having infinitely trapped priors. However, I would remind readers that you cannot choose all of your beliefs, and that, practically, understanding your own beliefs can be more important than changing (or being able to change) them. We can play word games regarding infinities, but will you put your life at stake? Or will your body reject the attempts of your confused mind when the situation and threats at hand are visible?

I would also like to directly claim, regardless of the truth of the aforementioned claims, that entities and actions beyond the cosmic horizon of our observable universe are forfeit for consideration in ethics (and only once they are past that horizon). In particular, I dislike that your argument relies on the notion that cosmologists believe the universe is infinite, while cosmologists will also definitely tell you that things beyond the cosmological horizon are outside of causal influence. Your appealing to logos only to later reject it in your favor is inconsistent and unpalatable to me.

I am also generally under the impression that a post like this should be classified as a cognitohazard, as I am under the impression that the post will cause net harm because it attempts to update people in the direction of susceptibility to arguments of the nature of Pascal's Wager.

I'm sorry if I'm coming off as harsh. In particular, I know from reading your posts that you generally contribute positively, and I have enjoyed much of your content. However, I am under the impression that this post is likely a net negative, directly conflicting with the proposition that we "help our species make it to a wise and empowered future," because I think it contributes towards misleading our species. I have found, though obviously others may find otherwise, that as far as I can tell something ingrained in my experience of consciousness itself assigns zero probability to our choice of axioms being literally entirely correct (the map is not the territory). I also claim that, regardless of the supposed "actual truth" of the existence of infinities in ethics, a practical standpoint suggests that you should definitely reject the idea, as I believe holding any modicum of belief in it is more likely to lead you astray, and likely to perform worse even in the exceptional case that our range of causal influence is "actually infinite", though clearly this is not something I can prove.

Comment by kyleherndon on The Liar and the Scold · 2022-01-21T07:36:27.674Z · LW · GW

Typo in this sentence: "And probably I we would have had I not started working on The Machine."

Comment by kyleherndon on Getting diagnosed for ADHD if I don't plan on taking meds? · 2021-12-17T20:03:44.515Z · LW · GW

I was in a similar position, but I am now at a point where I believe ADHD is negatively affecting my life in a way that has overturned my desire to not take medication. It's hard to predict the future, but if you have a cheap or free way to get a diagnosis, I would recommend doing so for your own knowledge, and to maybe make getting prescriptions in the future a smidge easier. I think it's really believable that in your current context there are no, or nearly no, negative repercussions to your ADHD if you have it, but it's hard to be certain of your future contexts, and even to know what aspects of your context would have to change for your symptoms to act (sufficiently) negatively.

Comment by kyleherndon on Where can one learn deep intuitions about information theory? · 2021-12-16T20:54:03.951Z · LW · GW

To start, I propose a different frame to help you. Ask yourself not "How do I get intuition about information theory?" but instead "How is information theory informing my intuitions?"

It looks to me like it's more central than is Bayes' Theorem, and that it provides essential context for why and how that theorem is relevant for rationality.

You've already noticed that this is "deep" and "widely applicable." Another way of saying these things is "abstract," and abstraction reflects generalizations over some domain of experience. These generalizations are the exact sort of things which form heuristics and intuitions to apply to more specific cases.

To the meat of the question:

Step 1) grok the core technology (which you seem to have already started)

Step 2) ask yourself the aforementioned question.

Step 3) try to apply it to as many domains as possible

Step 4) as you come into trouble with 3), restart from 1).

When you find yourself looking for more of 1) from where you are now, I recommend at least checking out Shannon's original paper on information. I find his writing style to be much more approachable than average for what is a highly technical paper. Be careful when reading though, because his writing is very dense with each sentence carrying a lot of information ;)
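As one concrete example of step 1, here is a minimal sketch (with made-up toy strings) of the quantity at the heart of that paper, the Shannon entropy of a source:

```python
import math
from collections import Counter

def entropy_bits(message: str) -> float:
    """Shannon entropy of the empirical symbol distribution, in bits per symbol."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A skewed source carries fewer bits per symbol than a uniform one.
print(entropy_bits("aaaaaaab"))   # ~0.54 bits/symbol
print(entropy_bits("abcdefgh"))   # 3.0 bits/symbol
```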

Comment by kyleherndon on Is "gears-level" just a synonym for "mechanistic"? · 2021-12-13T07:07:56.191Z · LW · GW

There's a tag for gears-level, and in the original post it looks like everyone in the comments was confused even then about what gears-level meant; in particular, a lot of non-overlapping definitions were given. The author, Valentine, also expresses confusion.

The definition given, however, is:

1. Does the model pay rent? If it does, and if it were falsified, how much (and how precisely) could you infer other things from the falsification?

 2. How incoherent is it to imagine that the model is accurate but that a given variable could be different?

 3. If you knew the model were accurate but you were to forget the value of one variable, could you rederive it?

I'm not convinced that's how people have been using it in more recent posts, though. I think the one upside is that "gears-level" is probably easier to teach than "reductionist," but contingent on someone knowing the word "reductionism," it is clearly simpler to just use that word. In the history of the tag, there was also previously a "See also: Reductionism" with a link.

In the original post, I think Valentine was trying to get at something complex/not fully encapsulated by an existing word or short phrase, but it's not clear to me that it was well communicated to others. I would be down for tabooing "gears-level" as a (general) term on lesswrong. I can't think of an instance after the original where someone used the term "gears-level" to not mean something more specific, like "mechanistic" or "reductionist."

That said, given that I don't think I really understand what was meant by "gears-level" in the original, even where there are suitable replacements, I would ideally like to hear from someone who thinks they do understand it; in particular, someone like Valentine or brook. If there were no objections, maybe clean up the tag by removing it and/or linking to other related terms.

Comment by kyleherndon on Taking Clones Seriously · 2021-12-02T05:29:40.136Z · LW · GW

Early twin studies of adult individuals have found a heritability of IQ between 57% and 73%,[6] with the most recent studies showing heritability for IQ as high as 80%.[7] 

Source

Comment by kyleherndon on P₂B: Plan to P₂B Better · 2021-10-26T00:10:40.437Z · LW · GW

I enjoyed this post. Were you inspired by HCH at all? Both occupy the same mental space for me.

Comment by kyleherndon on Perceptual dexterity: a case study · 2021-10-08T20:32:48.512Z · LW · GW

I really enjoyed the post, but something that maybe wasn't the focus of it really stuck out to me.

i think i felt a little bit of it with Collin when i was trying to help him find a way to exercise regularly. the memory is very hazy, but i think the feeling was focused on the very long list of physical activities that were ruled out; it seemed the solution could not involve Collin having to tolerate discomfort. much like with Gloria and the "bees", i experienced some kind of emotional impulse to be separate from him, to push him away, to judge him to be inadequate or unworthy. (it wasn't super strong in that case, and i did actually succeed in helping him find an exercise routine that he stuck with for years.)

I would find it really useful if you wrote an explanation of how you achieved this in particular, as exercising regularly is one of those 'canonically difficult' things to do.

Comment by kyleherndon on Reachability Debates (Are Often Invisible) · 2021-09-28T17:48:06.477Z · LW · GW

I appreciate this post a lot. In particular, I think it's cool that you establish a meta-frame, or at least a class of frames. Also, I've had debates that definitely have had reachability mismatches in the past, and I hope that I'll be able to just link to this post in the future.

The most frequent debate mismatch I have is on a subject you mention: climate change. I generally take the stance of Clara: the way I view it, it's a coordination problem, and individual action, no matter how reachable, I model as having a completely insubstantial effect. In some sense, one could claim that all arguments should only be about the nature of actions that either individual involved in the conversation could actually take. On the other hand (and this is the stance I take in this scenario), communication can be used as a signal to establish consensus on the actions that others should take. I expect that this sort of difference could be the cause of reachability mismatches in general. One participant can prioritize the personal relevance of the conversation, while another could prioritize arguments for actions that have the most effect, whether or not anyone present can actually make them happen. Another way to view this problem is "working backwards" from the problem versus "working forwards" from the actions we can take.

Comment by kyleherndon on Re: Competent Elites · 2021-07-16T09:25:35.509Z · LW · GW

To offer a deeper explanation, I personally view the piece as doing the following things:

  1. Explain some aspects of intelligence that people don't normally like to hear about (really just some basic expounding of the themes in Competent Elites)
  2. Make an interesting observation about how individuals can evaluate intelligence of others (specifically, evaluate them when they are younger than yourself)
  3. Give advice on how to find intellectual partners, if you find yourself starved for them.

I don't see any mention of confidence in the article, so I'm having trouble seeing how the Dunning-Kruger effect is related.

More importantly for me, I would like to take for granted what you believe the piece to be about so that we can focus on a specific question: Isusr is focusing on their own intelligence in this post; why do you find that problematic?

Comment by kyleherndon on Re: Competent Elites · 2021-07-16T09:09:11.349Z · LW · GW

When someone is smarter than you, you cannot tell if they're one level above you or fifty because you literally cannot comprehend their reasoning.

I take issue with this claim, as I believe it to be vastly oversimplified. You can often, if not always, still comprehend their reasoning with additional effort on your behalf. By analogy, a device capable of performing 10 FLOPS can check the calculation of a device that can perform 10 GFLOPS by taking an additional factor of 10^9 in time. Even in cases of extreme differences in ability, I think there can be simple methodologies for evaluating at levels above your own, though admittedly it can quickly become infeasible for sufficiently large differences. That said, in my experience I think that I've been able to evaluate up to probably 2-3 standard deviations of g above my own. Admittedly, though, I haven't taken on the effort/social cost of asking these individuals their IQ as a proxy to semi-reliably validate my predictions.
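To make the analogy's arithmetic explicit (assumed numbers only):

```python
# A 10 FLOPS device re-checking work done by a 10 GFLOPS device.
fast_rate = 10e9                      # 10 GFLOPS
slow_rate = 10.0                      # 10 FLOPS
work = fast_rate * 1.0                # one second of the fast device's output, in FLOPs

print(work / fast_rate)               # 1.0 second to produce
print(work / slow_rate)               # 1e9 seconds (~32 years) to verify step by step
print((work / slow_rate) / (work / fast_rate))  # the 10^9 slowdown factor
```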

Comment by kyleherndon on Re: Competent Elites · 2021-07-16T08:43:33.688Z · LW · GW

What is the correct amount of self-praise? Do you have reasons to believe Isusr has made an incorrect evaluation regarding their aptitude? Do you believe that, even if the evaluation is correct, the post is still harmful?

I find it quite reasonable that the LessWrong community could benefit from more praise, self or otherwise. I don't have strong signals as to the aptitude of Isusr other than having read some fraction of their posts.

I worry your response comes as an automatic social defense mechanism, as opposed to reflecting "real" beliefs, and I would like to understand what the many upvoters find the issue to be.