Posts

How likely is our view of the cosmos? 2021-05-27T16:35:25.683Z
Preparing to land on Jezero Crater, Mars: Notes from NASEM livestream 2021-02-17T19:35:01.198Z
What are the unwritten rules of academia? 2020-12-25T15:33:48.470Z
How a billionaire could spend their money to help the disadvantaged: 7 ideas from the top of my head 2020-12-04T06:09:56.534Z
Yitz's Shortform 2020-12-03T23:13:00.587Z
What could one do with truly unlimited computational power? 2020-11-11T10:03:03.891Z
Null-boxing Newcomb’s Problem 2020-07-13T16:32:53.869Z
God and Moses have a chat 2020-06-17T18:34:42.809Z
looking for name/further reading on fallacy I came across 2020-05-28T18:01:34.692Z

Comments

Comment by Yitz (yitz) on A Layman’s Guide to Recreational Mathematics Videos · 2021-09-06T02:35:39.690Z · LW · GW

Thanks for the excellent recommendations! I wasn't aware of a number of these YouTubers :)

Comment by Yitz (yitz) on An Open Letter To Myself On How To Not Get Any Work Done. · 2021-08-23T22:03:37.740Z · LW · GW

Thanks for the tips; I'm even doing some of them right now!

Comment by Yitz (yitz) on Why did we wait so long for the threshing machine? · 2021-06-30T07:57:12.251Z · LW · GW

Thanks for the excellent read! I hadn't seriously thought much about the role of infrastructure in the Industrial Revolution before, but now that you've brought it up, it seems obvious that it must have at the very least played a significant role.

Comment by Yitz (yitz) on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T15:12:23.006Z · LW · GW

I'm curious why this response is downvoted. (I don't have enough knowledge on this topic to judge the quality of the responses here.)

Comment by Yitz (yitz) on Covid 12/24: We’re F***ed, It’s Over · 2020-12-25T15:10:10.888Z · LW · GW

This is a really good point, and I haven't seen it mentioned anywhere either.

Comment by Yitz (yitz) on How a billionaire could spend their money to help the disadvantaged: 7 ideas from the top of my head · 2020-12-25T13:51:31.098Z · LW · GW

Thanks for your insightful feedback!

I've been thinking a lot about the responses I've received over the past few days, and have somewhat changed the opinions written here, though not entirely. It really deserves a second essay, but it seems to me that EA (as normally practiced in this community) has a number of potentially dangerous blind spots, most notably in areas where it is hard to determine in advance how effective a given cause will be, or more generally in areas whose value is hard to compute using any currently known formal utilitarian system. I think the EA community currently puts too much weight on our ability to formally calculate the value of a given good, and additionally, in my opinion there needs to be greater willingness to fund more diverse actions. I know I'm not explaining my case very well here, but I would like to come back to this at some point and expand on it.

Comment by Yitz (yitz) on Writing to think · 2020-12-06T01:45:58.822Z · LW · GW

Reading through it now, thank you for the excellent work!

Comment by Yitz (yitz) on November 2020 gwern.net newsletter · 2020-12-04T04:51:40.650Z · LW · GW

Ah, okay.

Comment by Yitz (yitz) on Yitz's Shortform · 2020-12-03T23:13:01.035Z · LW · GW

Thinking about how a bioterrorist might use the new advances in protein folding for nefarious purposes. My first thought is that they might be able to more easily construct deadly prions, which immediately brings up the question: just how infectious can prions theoretically become? If anyone happens to know the answer to that, I'd be interested to hear your thoughts.

Comment by Yitz (yitz) on November 2020 gwern.net newsletter · 2020-12-03T22:50:46.076Z · LW · GW

Uh, there's no text here. Is there supposed to be?

Comment by Yitz (yitz) on My Fear Heuristic · 2020-12-02T07:27:47.941Z · LW · GW

Seconding this; examples would be extremely helpful here (they could be anonymized if you don't want to share personal details).

Comment by Yitz (yitz) on Being Productive With Chronic Health Conditions · 2020-12-02T07:22:18.553Z · LW · GW

Thank you for this excellent post. Somewhat ironically, it took me a few days to read through it due to the current state of my mental health. I really liked the 5-minute concept, which I hadn't heard of before and which could be useful in my daily life.

Comment by Yitz (yitz) on Writing to think · 2020-11-19T02:17:03.289Z · LW · GW

Please do that and post your results here! That seems like an incredible use of time, and a potentially excellent resource for the rationalist community.

Comment by Yitz (yitz) on Writing to think · 2020-11-19T02:15:47.166Z · LW · GW

Seconding that; I often find that when I try to write down world-models I've long assumed were true, I'm forced to confront problems with them. In fact, trying to write down a list of reasons for why I believed what I did ended up leading me to the conclusion that Orthodox Judaism is not self-consistent, which eventually brought me here.

Comment by Yitz (yitz) on How to get the benefits of moving without moving (babble) · 2020-11-17T01:55:51.413Z · LW · GW

This post is pure gold. If I could give more karma, I would :)

Comment by Yitz (yitz) on What could one do with truly unlimited computational power? · 2020-11-16T04:11:05.332Z · LW · GW

You're a lot braver than me!

I'd be absolutely terrified of trying to create anything anywhere near superhuman AI (as in AGI, of course; I'd be fine with trying to exceed humans on things like efficient protein folding and that sort of stuff), due to the massive existential risk from AGI that LessWrong loves talking about in every other post.

Personally, I would wait to get the unanimous approval of the world's leading AI ethics experts before trying anything like that, and only after at least a few months of thorough discussion. An exception to that might be if I were afraid that the laptop would fall into the hands of bad actors, in which case I'd probably call up MIRI and just do whatever they tell me to do as fast as humanly possible.

I do agree with you though; it probably would be perfectly possible to develop superhuman AI within a day, given such power.

It is worth asking what sort of algorithm you might use and, perhaps more importantly, what you would define as the "win condition" for your program. Something like a massively larger version of GPT-3 would probably pass the Turing test with relative ease, but I'm not sure it would be capable of generating smarter-than-human insight, since it will only attempt to resemble what already exists in its training data. How would you go about it, if you weren't terribly concerned about AI safety?

Comment by Yitz (yitz) on What could one do with truly unlimited computational power? · 2020-11-16T03:44:10.684Z · LW · GW

I really like your first three ideas, and would definitely consider doing them if I were in this position (although now that I'm thinking about it, I wouldn't want to accidentally alert any powerful actors against me so early on in my journey, for fear of getting the laptop confiscated or stolen, so I'd be very careful before doing anything that could potentially be traced back to me online). :)

As for "calculating out how the human body works," I'm not sure it would be that simple to pull off, at least not at first. Taking your statement literally would mean having the laptop simulate an entire human, brain and all, which is discussed later, so for practical purposes I'm assuming what you meant by that is calculating how a typical human cell works; say, a single neuron. You could definitely solve protein folding and probably simulate most chemical interactions fairly trivially, as long as you can express the physics involved as finitely computable functions (which I'm not sure has been proven possible for all of chemistry/quantum mechanics, though I may be mistaken on that). However, in order to figure out how things actually work inside of an entire human cell, you'll not only need to be able to formally express physics and chemistry, but will also need to know what that cell is chemically composed of in the first place (in order to simulate it properly and not just be given fallible guesses by the computer). In order to make this work, you either already have a pretty much complete formal understanding of a human cell, or have figured out a way to specify your goal so precisely that only a manageable number of valid possibilities are given using the known rules of physics, which seems incredibly hard to do, if not totally impossible with our current tools.

More broadly, the same problem comes up when trying to write a program simulating the human brain. The best neurosurgeons in the world are still in the dark about how most of the brain's functions are actually performed, and currently have to make do with incredibly generalized and high-level assumptions. In order to simulate a human brain (rather than "simply" create a generalized non-human AI), you would need a level of knowledge about our own inner workings that is not currently available. Thankfully, you might not need to know the exact workings of an adult human brain to make one, but without that knowledge at the very least you will need to be able to fully simulate the growth of an embryonic brain, and be able to properly "feed" it appropriate outside stimulation, which could plausibly be reduced to the problem of perfectly simulating the working of a single embryonic cell, then letting the simulation proceed smoothly from there.

Regardless, both goals reduce to the general problem that in order to simulate a complex system, we must already have at least some amount of "base knowledge" of that system; or, to put it more precisely, we must know at least as much information as is contained in its Kolmogorov complexity. (Please correct me if I'm wrong about this, by the way; I'm fairly confident in saying it, but I may have messed up somewhere due to the complexity (heh) of the issue.)
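
To make that last claim slightly less hand-wavy, here is roughly how the bound is usually stated; this is a sketch of the standard definition-plus-invariance argument rather than anything from the original discussion, and the symbols U, V, p, q, and x are purely illustrative. If some program p, run on a universal machine U with no further input, prints an exact description x of the system's state, then p itself must already carry about K(x) bits of information, regardless of which machine it runs on:

```latex
% If U(p) = x, i.e. program p on universal machine U prints the exact
% state-description x of the system we want to simulate, then by the
% definition of Kolmogorov complexity (the length of the shortest such
% program), p can be no shorter than K_U(x); and by the invariance
% theorem, switching to any other universal machine V only shifts the
% bound by a machine-dependent constant c_{U,V}.
\[
  U(p) = x \;\Longrightarrow\; |p| \,\ge\, K_U(x) = \min\{\, |q| : U(q) = x \,\},
  \qquad
  K_V(x) \,\le\, K_U(x) + c_{U,V}.
\]
```

However fast the laptop runs p, those K(x) bits describing the actual cell still have to be supplied up front, which is exactly the "base knowledge" (i.e., measurement and fieldwork) referred to above.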

That's what I think makes this hypothetical so interesting to me—the thought that even with unbounded finite computational abilities, some of our most important problems would still require a tremendous amount of physical fieldwork, and would certainly still require thinking intelligently about how to code for the solutions we want.

Comment by Yitz (yitz) on What could one do with truly unlimited computational power? · 2020-11-16T02:45:27.579Z · LW · GW

That's a really good point, and I'm mentally kicking myself right now for not having thought of it. In answer to another comment below yours, I suggested only allowing primitive recursive functions to be entered as input in the second box, which I think would solve the problem (though it might create another computational limit at the growth rate of the fastest-growing primitive recursive function possible, which I haven't studied at all, to be honest, so if you happen to have any further reading on that I'd be quite interested). Going a bit further with this: while I suggested God limiting us to one bounded language like BlooP for the second field, if He instead allowed any language to be used, but only primitive recursive functions to be run, would we be able to exploit that feature for hypercomputation?

Also, while we're discussing it, what would be in the space of problems that could be uniquely solved by 2^O(n) compute, but not by "normal" O(n) compute?

Comment by Yitz (yitz) on What could one do with truly unlimited computational power? · 2020-11-16T02:24:24.994Z · LW · GW

I'll go with Taran's idea there, I think. Something like Douglas Hofstadter's BlooP language, perhaps, which only allows primitive recursive functions. Would BlooP allow for chained arrow notation, or would it be too restrictive for that? More generally, what is the fastest growing primitive recursive function possible, and what limits would that give us in terms of the scope of problems that can be solved by our magic box?
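
To make the bounded-loop restriction concrete, here is a minimal sketch in Python (standing in for BlooP, with function names of my own choosing), under the standard understanding that BlooP-style loops whose bounds are fixed before the loop starts capture exactly the primitive recursive functions. Any fixed level of the up-arrow hierarchy fits inside such loops, but the hierarchy taken as a function of the level, like the Ackermann function, does not:

```python
def tetration(base: int, height: int) -> int:
    """base ^^ height (Knuth double up-arrow), computed with a single
    loop whose bound is known before it starts -- i.e. this particular
    level of the hyperoperation hierarchy is primitive recursive and
    should be expressible in a BlooP-style language."""
    result = 1
    for _ in range(height):   # bound fixed in advance, BlooP-style
        result = base ** result
    return result


def ackermann(m: int, n: int) -> int:
    """The classic two-argument Ackermann function. It is computable but
    not primitive recursive: for every primitive recursive f there is
    some m with f(n) < ackermann(m, n) for all n, so no amount of
    BlooP-style bounded looping can express it. (Within the primitive
    recursive functions themselves there is also no single
    fastest-growing one, since f(n) + 1 is primitive recursive whenever
    f is.)"""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))


if __name__ == "__main__":
    print(tetration(2, 4))   # 2^^4 = 65536
    print(ackermann(2, 3))   # 9; keep the arguments tiny, it blows up fast
```

On the chained-arrow question, my understanding is that a fixed number of Knuth up-arrows stays primitive recursive, but chained-arrow notation effectively takes the number of arrows as an input, which already pushes past what BlooP-style bounded loops can express.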

Comment by Yitz (yitz) on What could one do with truly unlimited computational power? · 2020-11-13T04:56:12.810Z · LW · GW

Your assumption that you could fully simulate the magic box inside itself is a pretty massive one, and I honestly wouldn't expect it to be true in this hypothetical universe. After all, once it receives the input parameters, the machine simulates a universal Turing machine of arbitrary finite size, which by definition cannot perform any non-Turing-computable functions, and certainly couldn't simulate a machine more complex than itself. In order for your magic "Zeno's paradox-destroyer" function to work given the parameters outlined in the story, it would need to be able to call the machine running it from the inside, and God (in His infinite wisdom 🙃) hasn't given us any "escape" API functions for doing that.

(Note that I really do appreciate your thoughts here; I'm not trying to dunk on them, just trying to get a better understanding of unbounded finite computational systems, and I want to plug any potential holes in the story before expanding on this hypothetical universe in the future.)

Comment by Yitz (yitz) on What could one do with truly unlimited computational power? · 2020-11-13T04:34:06.891Z · LW · GW

Thanks for the fascinating response!

If you don't mind, I might try playing around with some of the ideas you mentioned in future write-ups here; there's a lot of interesting theoretical questions that could be explored along those lines.

Comment by Yitz (yitz) on What could one do with truly unlimited computational power? · 2020-11-11T23:47:32.677Z · LW · GW

How would that work, exactly? Let's say you get the output to give you the largest possible number given the number of computations currently allowed, which, as long as the "computation speed" parameter is finite, will be finite as well, albeit incredibly large. Every step taken will only increase the computation speed by a finite amount, so how would you reach infinity in a finite time?

Comment by Yitz (yitz) on Litany of a Bright Dilettante · 2020-07-15T02:46:54.850Z · LW · GW

Consider me a fan

Comment by Yitz (yitz) on Null-boxing Newcomb’s Problem · 2020-07-13T22:56:30.049Z · LW · GW

Thanks for the happy ending :)

Comment by Yitz (yitz) on Null-boxing Newcomb’s Problem · 2020-07-13T22:55:07.800Z · LW · GW

That would be an excellent solution—from the unnamed trickster god’s perspective. Personally though, I’m more interested in what Maxwell should do once the rules are already set.

Comment by Yitz (yitz) on Null-boxing Newcomb’s Problem · 2020-07-13T16:38:41.455Z · LW · GW

There was a weird glitch posting this where it appeared as three separate copies of the same post; I deleted the other two, so hopefully that wasn’t too much of a problem.

Comment by Yitz (yitz) on God and Moses have a chat · 2020-06-18T22:41:55.883Z · LW · GW

That’s perfectly reasonable; it can be very hard at times to put those sorts of experiences into words. Wishing you success!

Comment by Yitz (yitz) on God and Moses have a chat · 2020-06-18T17:56:57.760Z · LW · GW

Thanks! Do you mind if I ask what that update was?

Comment by Yitz (yitz) on God and Moses have a chat · 2020-06-18T00:22:59.114Z · LW · GW

Oh wow, that was an excellent read! Thanks for the link. :) It seems like Jesus in that story reaches the opposite conclusion of Moses in mine. Out of curiosity, who do you think made the more reasonable decision, and why?

Comment by Yitz (yitz) on God and Moses have a chat · 2020-06-18T00:20:36.347Z · LW · GW

Yeah, I was thinking along those lines when writing this, along with the issues around Pascal’s Mugging/Muggle. I still need to do a lot more research on this, as I’m not sure what the correct thing to do would actually be in such a seemingly convincing situation. It doesn’t seem quite reasonable to say that no evidence whatsoever could possibly prove the existence of God, as that seems to make atheism unfalsifiable. On the other hand, what could possibly count as enough evidence for such an exotic possibility?

Comment by Yitz (yitz) on Open & Welcome Thread - June 2020 · 2020-06-09T03:16:42.495Z · LW · GW

Hi, I joined because I was trying to understand Pascal’s Wager, and someone suggested I look up “Pascal’s mugging”... next thing I know I’m a newly minted HPMOR superfan, and halfway through reading every post Yudkowsky has ever written. This place is an incredible wellspring of knowledge, and I look forward to joining in the discussion!

Comment by Yitz (yitz) on What does “torture vs. dust specks” imply for insect suffering? · 2020-06-09T01:39:16.837Z · LW · GW

I don’t wish to directly argue the question at the moment, but let’s say insect suffering is in fact the highest-priority issue we should consider. If so, I’m fairly sure that practically, little would be changed as a result. X-risk reduction is just as important for insects as it is for us, so that should still be given high priority. The largest effect we currently have on insect suffering—and in fact an X-risk factor in itself for insects—is through our collective environmental pollution, so stopping human pollution and global warming as much as possible will be paramount after high-likelihood X-risk issues. In order to effectively treat issues of global human pollution of the environment, some form of global political agreement must be reached about it, which can be best achieved by [Insert your pet political theory here]. In other words, whatever you believe will be best for humans long-term will probably also be best for insects long-term.

Comment by Yitz (yitz) on Self-Keeping Secrets · 2020-06-07T23:53:13.325Z · LW · GW

I find a similar phenomenon occurs with extreme depression. When I’m in that state, I literally cannot remember what it feels like to be happy, though I remember acting in ways consistent with happiness. Likewise, every single time I go into an extremely depressed state, it feels like the worst experience I’ve ever had, even if I know intellectually that it has been worse before (i.e., I’m not feeling suicidal or screaming uncontrollably now, when I have been before), which leads me to believe that my brain is somehow blocking the extent of the pain I’ve experienced from my memory. Once the experience is over, there is something about it that is inaccessible from my current perspective.

Comment by Yitz (yitz) on God is irrelevant · 2020-05-28T23:37:40.373Z · LW · GW

One possible use for God I can think of, outside of what you mentioned, is to serve as a source for otherwise seemingly unnecessary consciousness, if one believes in dualism.

Comment by Yitz (yitz) on What are examples of perennial discoveries? · 2020-05-28T20:34:10.232Z · LW · GW

Purported “cures” for autism, depression, anxiety, and ADHD have been crossing my newsfeed practically every day for decades now, without any significant practical advancement on any of them.

Comment by Yitz (yitz) on Beyond the Reach of God · 2020-05-28T20:23:36.849Z · LW · GW

It’s interesting that you say that a Good God wouldn’t destroy a soul, as one of the biggest issues I’m currently having with Orthodox Judaism is that, according to the Talmud at least, there have been a number of historical cases of souls being completely destroyed, which seems rather incompatible with the rest of Orthodox Jewish morality... I don’t know about the Christian or Muslim God, but Christians and Muslims do both seem to believe that some people burn in hell forever, which is arguably worse than simply not existing. I really don’t get how this isn’t discussed more often in conventional theism...

Comment by Yitz (yitz) on What was your reasoning for deciding to have 'your own'/ natural-birthed children? · 2020-05-26T15:04:21.049Z · LW · GW

The same, I’d think: most people would rather exist in an overpopulated world than not exist at all, so it would still be morally worth it, in my opinion. Many of my friends are the grandchildren of Holocaust survivors, who had children while stuck in the objectively terrible and overcrowded post-war camps, and I am glad they did have children despite the horror surrounding them, and the uncertainty of whether their children would ever escape it.

Comment by Yitz (yitz) on What was your reasoning for deciding to have 'your own'/ natural-birthed children? · 2020-05-26T03:39:14.558Z · LW · GW

I do not yet have any children (as I’m 19 years old, unmarried, and I do not believe myself to be nearly mature enough yet for such responsibility), but I do plan to have kids one day. My ethical reasoning for this is that I believe the vast majority of humans find it better on the whole to exist than not to exist, the proof being that most of us don’t wish to commit suicide, even in extremely trying situations. Even if the world were falling apart (which admittedly it sometimes feels like it is), most of us would still fight to stay alive, because we value our own existence, and the existence of others. As such, I see it as a strong moral positive to bring more people into existence, and having biological children is an excellent way to go about doing that.

Comment by Yitz (yitz) on What is your internet search methodology ? · 2020-05-26T03:13:30.725Z · LW · GW

There are many different reasons to search for something online, but when I’m specifically trying to find the answer to a question I have, I generally search in two different patterns: with keywords based on how I’d expect other people to ask the question I want answered (which will likely return results sourced from discussion boards, Quora, etc.), or with keywords based on how I’d expect someone to talk about the topic in, say, a personal blog post. In general, the latter tends to use industry-specific wording and either assumes deep familiarity with the topic or none at all, which can be hard to read through, while the former tends to use more generalized wording, which can sometimes make it harder to find what I’m looking for, but is also normally written by users with a similar knowledge base, which can make the solution easier to understand.

Comment by Yitz (yitz) on Chapter 46: Humanism, Pt 4 · 2020-05-22T22:09:00.726Z · LW · GW

I've been binge-reading this series for the past few days, and I must say, I don't think I've ever read a fanfic as good as this one before. I have absolutely no idea where this is about to go, and am on the edge of my seat right now (metaphorically speaking)!