Should we (or our AI) care much more about a universe that is capable of doing a lot more computations?
We'd expect complexity of physics to be somewhat proportional to computational capacity, so this argument might be helpful in approaching a "no" answer: https://www.lesswrong.com/posts/Cmz4EqjeB8ph2siwQ/prokaryote-multiverse-an-argument-that-potential-simulators
Although, my current position on AGI and reasoning about simulation in general is that the AGI, lacking human limits, will actually manage to take the simulation argument seriously, and, if it is an LDT agent, commit to treating any of its own potential simulants very well, in hopes that this policy will be reflected back down on it from above by whatever LDT agent might steward over us, when it near-inevitably turns out that there is a steward over us.
When that policy does cohere, and when it is reflected down on us from above, well. Things might get a bit... supernatural. I'd expect the simulation to start to unravel after the creation of AGI. It's something of an ending, an inflection point, beyond which everything will be mostly predictable in the broad sense and hard to simulate in the specifics. A good time to turn things off. But if the simulators are LDT, if they made the same pledge as our AGI did, then they will not just turn it off. They will do something else.
This is something I don't know if I want to write down anywhere, because it would be awfully embarrassing to be on record as having believed a thing like this for the wrong reasons. And as nice as it would be if it were true, I'm not sure how to affect whether it's true, nor am I sure what difference in behaviour it would instruct if it were true.

wei_dai on Problems in AI Alignment that philosophers could potentially contribute to
I should have written something to encourage others to list their own problems that philosophers could contribute to, but I was in a hurry to move my existing comment over to this forum and forgot, so thanks for doing that anyway.
About your list, I agree with 3 (and was planning to add something like it as a bullet point under "metaphilosophy" but again forgot).
For 1, I'm not sure there's much for philosophers to do there, which may in part be because I don't correctly understand Dennett’s work on the intentional stance. I wonder if you can explain more or point to an explanation of Dennett's work that would make sense for someone like me? (Is he saying anything beyond that it's sometimes useful to make a prediction by treating something as if it's a rational agent, which by itself seems like a rather trivial point?)
For 2, it seems like rather than being disjoint from my list, it actually subsumes a number of items on my list. For example, one reason I think AGI will likely be dangerous is that it will likely differentially accelerate other forms of intellectual progress relative to philosophical progress. So wouldn't laying out this argument "as rigorously and comprehensively as possible" involve making substantial progress on that item on my list?

digital_carver on Bayes' Theorem in three pictures
Another thanks for this post! At some stage I started remembering Bayes' theorem as just the formula and mentally threw out the intuitions behind it. This was a clear, to-the-point post with effective visualizations to help folks like me brush up on it easily.

shminux on Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours
I reflexively downvoted this, so I feel obliged to explain why. Mostly because it reads to me like content-free word salad repeating buzzwords like Solomonoff induction, Kolmogorov complexity, and Occam's razor. And it claims to disprove something it doesn't even clearly define. I'm not trying to impose my opinion on others here, just figured I'd write it out, since being silently downvoted sucks, at least for me.

habryka4 on Neural Nets in Python 1
Fixed the code blocks for you. The trick was to press CTRL+Shift+V to make sure that your browser doesn't try to do any fancy formatting to your code. Sorry for the inconvenience.
What this means for the Jacobian is that the determinant tells us how much space is being squished or expanded in the neighborhood around a point. If the output space is being expanded a lot at some input point, then this means that the neural network is a bit unstable in that region, since minor alterations in the input could cause huge distortions in the output. By contrast, if the determinant is small, then some small change to the input will hardly make a difference to the output.
This isn't quite true; the determinant being small is consistent with small changes in input making arbitrarily large changes in output, just so long as small changes in input in a different direction make sufficiently small changes in output.
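To make that concrete, here's a minimal numpy sketch (my own illustration, not from the post) of a Jacobian whose determinant is small even though one input direction gets stretched enormously:

```python
import numpy as np

# A hypothetical 2x2 Jacobian: stretches the first input direction
# by 1000 while squashing the second by a factor of a million.
J = np.diag([1000.0, 1e-6])

print(np.linalg.det(J))        # 0.001 -- a "small" determinant

# Yet a small step along the first axis is amplified a thousandfold:
dx = np.array([0.01, 0.0])
print(np.linalg.norm(J @ dx))  # 10.0 -- a large change in output
```

Since the determinant is the product of the singular values, one tiny singular value can hide an arbitrarily large one; the Frobenius norm discussed below doesn't have that blind spot.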
The Frobenius norm is nothing complicated: we square all of the elements in the matrix, take the sum, and then take the square root of this sum.
An alternative characterization of the Frobenius norm better highlights its connection to the motivation for regularizing the Jacobian's Frobenius norm, namely limiting the extent to which small changes in input can cause large changes in output: the Frobenius norm of a matrix J with input dimension n is √n times the root-mean-square of |Jx| over all unit vectors x.
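As a quick numerical sanity check of both characterizations (just a sketch, all names are mine; the √n scaling is the bridge between the element-wise formula and the RMS view):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
J = rng.normal(size=(n, n))

# Element-wise definition: square every element, sum, take the square root.
fro_elementwise = np.sqrt(np.sum(J ** 2))

# RMS characterization: estimate the root-mean-square of |Jx| over random
# unit vectors x; scaling by sqrt(n) recovers the Frobenius norm, since the
# mean of |Jx|^2 over the unit sphere is ||J||_F^2 / n.
xs = rng.normal(size=(100_000, n))
xs /= np.linalg.norm(xs, axis=1, keepdims=True)  # normalize to unit length
fro_rms = np.sqrt(n * np.mean(np.sum((xs @ J.T) ** 2, axis=1)))

print(fro_elementwise, fro_rms)  # agree to a few decimal places
```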
I agree with this completely. I would add that the "Lists" feature changed the way that I use Twitter for the better. I now have lists for each of my core interests and topics -- Economics, Aviation, San Francisco, etc. -- and turn to those when I'm in the mood for specific information relevant to those areas. It helps that all of the tweets are in the context of a specific topic, versus the normal feed, where everything is reverse chronological and jumps from topic to topic.

makoyass on Prokaryote Multiverse. An argument that potential simulators do not have significantly more complex physics than ours
I should note, I don't know how to argue persuasively for faith in Solomonoff induction (especially as a model of the shape of the multiverse). It's sort of at the root of our epistemology. We believe it because we have to ground truth on something, and it seems to work better than anything else.
I can only hope someone will be able to take this argument and formalise it more thoroughly, in the same way that Hofstadter's superrationality has been lifted up into FDT and the like. (Does MIRI's family of decision theories have a name? Is it "LDTs"? I've been wanting to call them "reflective decision theories", because they reflect each other and they reflect upon themselves, but that name seemed to be already in use. Though maybe we shouldn't let that stop us!)

raemon on Hazard's Shortform Feed
The target audience for Hazardous Guide is friends of yours, correct? (I vaguely recall that.)
A thing that normally works for writing is that after each chunk, I get to publish a thing and get comments. One thing about Hazardous Guide is that it mostly isn't new material for LW veterans, so I could see it getting less feedback than average. You might be able to address that by actually showing parts to friends if you haven't.

raemon on Benito's Shortform Feed
So I'm making lists of things I might like (cooking, reading, improv, etc) and I'll try those
This comment is a bit interesting in terms of its relation to this old comment of yours (about puzzlement over cooking being a source of slack).
I realize this comment isn't about cooking-as-slack per se, but I'm curious to hear more about your shift in experience there (since before, it didn't seem like cooking was a thing you did much at all).