This makes me think "tulpamancy-lite". Not that that's a bad thing - perhaps it's like a safer tulpamancy. Some thoughts:
> It's just such little mannerisms that allow a shoulder advisor to be "really real"—to bring it to life, give it a personality separate from, and not dependent on, your brain's main central personality. Again, I don't have a sound explanation of the mechanics, but it works.

Would it be useful to have a shoulder-advisor not constrained by having to relate to a real example? Or perhaps, without that link, it would just tend to become more and more like you, or otherwise drift into some territory outside reasonable personality-space. Then again, authors seem to be able to write stories just fine.
How hard would it be to create a shoulder-advisor from scratch? People can do that with tulpas, and shoulder-advisors definitely seem like less work than tulpas (no need to hallucinate them).
Do shoulder-advisors have moral value?
I find that I get self-conscious when in public without close friends, and I've wanted (but have had neither the time nor motivation) to create a tulpa, with the idea that it will make everything a lot less stressful. They wouldn't really have any actual input on anything, but just sort of raise my baseline positivity and self-esteem. If there were two of me, I'd be a lot less afraid of doing anything.
How many solutions do we overlook because they seem childish or "cringe"? Maybe I just notice that because I catch myself avoiding "cringe" things too much. Being averse to cringe isn't entirely bad, though, because it helps rule out solutions that probably wouldn't work.
Here's a link that seems to confirm what I wrote: https://chandra.harvard.edu/graphics/resources/handouts/lithos/bullet_lithos.pdf
I think what's happening is basically that the pink shows where the visible mass is (mostly hot plasma, seen in X-rays), while the purple shows where the mass is according to gravitational lensing. Dark matter should pass straight through the collision, and that is what we see in the lensing map; the pink lags behind because plasma can collide.
At least, I think that's what's happening... I myself am really confused and am pretty unconfident in that explanation.
I'm also confused about what modified gravity predicts, and how bullet clusters disprove it. I guess modified gravity would alter the gravity around the visible mass, not make it magically act as if the mass had passed straight through. I.e., a lot about gravity would have to change to produce such a drastic difference between the mass as seen through X-rays and the mass as seen through gravitational lensing.
This reminds me of lukeprog's post on motivation (I can't seem to find it, though; this should suffice). Your model and temporal motivation theory (TMT) describe roughly the same thing: you have more willpower for things that are high-value and high-expectancy. And the impulsiveness factor is similar to how there are different "kinds" of evidence: e.g. you are more "impulsive" about playing video games or not moving, even though logically those aren't high in value or expectancy.
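For reference, the TMT equation as lukeprog presented it, from memory (Steel's own papers write the denominator slightly differently, so treat the exact form as approximate):

```latex
\text{Motivation} = \frac{\text{Expectancy} \times \text{Value}}{\text{Impulsiveness} \times \text{Delay}}
```

High value and expectancy raise the numerator, while impulsiveness amplifies the penalty for delay, which is why a distant reward loses to video games right now.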
I see. Thanks.
What does he do, specifically? It's very unclear just from the Amazon description. Or is it an entire program? I'm skeptical: I have never heard of this anywhere else, so it seems like one of those $100-bill-on-the-subway-floor things.
> it needed me to also have an active project I was working on that I actually enjoyed. I think otherwise I would have found other ways to distract myself and eventually undermined it to the point that I gave up.
Same with me. Although it's still better than nothing: the usual distractions are more habit than actual fun, and I've found that I read more interesting things instead of mindlessly browsing social media.
I like to use the add-on LeechBlockNG (I don't know if you can use it on mobile). You can use it to outright block sites, but you can also add a delay before a site actually loads, and set time limits. The delay is a feature I haven't seen in other apps/add-ons, and it's kind of a killer feature, since it counters the "instant gratification" that makes social media (etc.) addicting.
I really agree with this. I've been thinking that we should "default to privacy": if we think we'll have to share a thought, social anxieties/pressures will change it. (It's similar to that experiment showing people reason better when they hold off on proposing solutions first; I just remember this from reading HP:MoR.) Only after we reach the answer, (socially) unbiased, can we decide whether to share it.
I don't think privacy means dishonesty. I personally really dislike lying, and I think it's because making someone act on false information sort of takes away their free will; more practically, it creates a lot of uncertainty. But you can be honest about withholding information, to an extent: instead of lying, you can just say "I won't tell you" or something like that. (I'm not sure how much of that is based on practicality and how much is a terminal value.)
I'm sort of confused by radical honesty. Is it really, truly, "radical"? Literally everyone has intrusive thoughts (I personally sometimes have intrusive thoughts about raping or killing or saying racial slurs), and presumably radical honesty doesn't require voicing those. I guess that's just a nitpick, because I can easily see how to be "maximally" honest compared to normal communication.
I feel something similar, but with regard to effective altruism and learning intellectual things. I sometimes ask myself, "Are my beliefs around EA and utilitarianism just 'signaling'?", especially since I'm only in high school and don't have any immediate plans. But I'm also not a very social person, and when I do talk to others I don't usually talk about EA.

I guess I'm not a very conscientious person: I like the idea of "maximizing utility" and learning cool things, but my day-to-day fun (outside of "addictions" like social media and games) is just reading books and essays and listening to music. It's as if I "want to like" EA and learning. I don't really see any point in being rich and famous unless you're going to do good with it, so just doing the minimum to enjoy my life and not be a net negative in utils seems fine. (That's usually a thought I have when I'm sad and unmotivated.)

Does it even make sense to say that EA/utilitarianism is just signaling? Is there any reason for me to bring my actions in line with my "true" preferences (i.e. egoism)? That is: if my altruism were just signaling, "should" I listen to my "true" preferences instead? grouchymusicologist seems to think not: whether or not we got those preferences from repetition and socialization, they're still our preferences.
Why don't you just use MathJax? Maybe this wasn't the case when you wrote this comment, but there should be a button that just applies the formatting, and Ankidroid can render it.
Really interesting post. To me, approaching information with mathematics seems like a black box - and in this post, it feels like magic.
I'm a little confused by the concept of cost: I understand that it takes more bits to represent a more complex model, and that the cost grows exponentially with those bits. But doesn't the more complex model still strictly fit the data better? Is it just going for a different goal than accuracy? I feel like I'm missing the entire point of the ending.
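To poke at my own confusion, here's a toy sketch (my own construction, not from the post; all the names and numbers are made up) of how a more complex model can fit the data strictly better while being a worse model of the process:

```python
import numpy as np

# Toy data: the "true" process is a simple line plus noise.
rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 10)
y_train = 2.0 * x_train + rng.normal(0.0, 0.2, size=10)
x_test = np.linspace(0.0, 1.0, 50)
y_test = 2.0 * x_test + rng.normal(0.0, 0.2, size=50)

def train_test_mse(degree):
    """Fit a polynomial of the given degree; return (train MSE, test MSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

train_simple, test_simple = train_test_mse(1)
train_complex, test_complex = train_test_mse(9)

# The degree-9 polynomial has 10 coefficients, so it interpolates
# all 10 training points: it always fits the training data at least
# as well as the line...
assert train_complex <= train_simple
# ...but it has fit the noise, so it typically generalizes worse.
print(train_simple, test_simple)
print(train_complex, test_complex)
```

So "fits the data strictly better" and "is the better model" come apart once you care about future data, which I suspect is where the cost is doing its work.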
> Even with quantum uncertainty, you could predict the result of a coin flip or die roll with high accuracy if you had precise enough measurements of the initial conditions.
I'm curious about how quantum uncertainty works exactly. You can make a prediction with models and measurements, but when you observe the final result, only one thing happens. And even if an agent is cut off from information (i.e., observation is physically impossible), prediction is still a matter of mapping out reality.
I don't know much about the specifics of quantum uncertainty, though.
Since national pride is decreasing, pride in scientific accomplishments seems to be mainly relegated to, well, the scientists themselves: geeks and nerds.
That reminds me of this Scott Aaronson post (https://www.scottaaronson.com/blog/?p=87). Unless the science "culture" changes or everyone else does, it seems like there will be a limit on the amount of people who would be willing to celebrate technical achievements.
It doesn't optimize for "you"; it optimizes for the gene that increases the chance of cheating. The "future" contains very little of "you".
This seems mainly to be about the importance of compromise: something is better than nothing. Refusing only makes sense when there are "multiple games", as in the iterated Prisoner's Dilemma; if you can't find an institution that is similar enough, then don't join it.
But I think there is some risk to joining a cause that "seems" worth it. (I can't find it, but) I remember an article on LessWrong about the dangers of signing petitions, which can influence your beliefs significantly despite the smallness of the action.
Reminded me of this blog post by Nicky Case, where they said "Trust, but verify". Emotions are often a good heuristic for truth: if we didn't feel pain, that would be bad.
I don't know anything about Go. But the fact that following it helps you reminds me of In praise of fake frameworks: while "good shape" isn't fully accurate at calculating the best move, it's more "computationally useful" for most situations (similar to calculating physics with Newton's laws vs general relativity and quantum mechanics). (The author also mentions "ki", which makes no sense from a physics perspective, to get better at aikido.)
I think it's just important to remember that the "model" is only a map for the "reality" (the rules of the game).
I don't really doubt that increasing intelligence while preserving values is nontrivial, but I wonder just how nontrivial it is: are the regions of the brain for intelligence and values separate? Actually, writing that out, I realize that (at least for me) values are a "subset" of intelligence: the "facts" we believe about science/math/logic/religion are generated in basically the same way as our moral values; the difference seems obvious to us humans, but it really is, well, nontrivial. The paper-clip-maximizing AI is a good example: even if it weren't about "moral values" (even if you just wanted to maximize something like paper clips), you'd still run into trouble.
You could make a habit of checking LW and EA at a certain time each day/week/etc. I don't know how easy that would be to maintain, or how practical it is depending on your situation.