As I said, having high status = people feel the same way they would feel if they owed you something in real life/you were giving them things in real life.
I don't think this is quite right. In my experience, the sensation that someone is higher status than me induces a desperate desire to be validated by them, abstractly. It's not the same as 'gratitude' or anything like that; it's the desire to associate with them in order to acquire a specific pleasurable sensation -- one of group membership, acceptance, and worth.
Just want to echo: thanks for doing this. This is awesome.
Your post got me thinking about some stuff I've been dealing with, and I think helped me make some progress on it almost instantly. I don't think the mechanisms are quite the same, but thinking about your experience induced me to have useful realizations about myself. I'll share in case it's useful to someone else:
It sounds like your self-concept issue was rooted in "having a negative model of yourself deeply ingrained", which induced neuroses in your thoughts/behaviors that attempted to compensate for it / search around for ways to convince yourself it wasn't true. And that the 'fix', sorta, was revisiting where that model came from and studying it and realizing that it wasn't fair or accurate and that the memories in which it was rooted could be reinterpreted or argued against.
I thought about this for a while, and couldn't quite fit my own issues into the model. So instead I zoomed out a bit and tried this approach: it seems like the sensation of shame, especially when no one else is around, must be rooted in something else, and when I feel shame I ought to look closer and figure out why, as it's a huge >>LOOK RIGHT HERE<< sign pointing at a destructive loop of thoughts (destructive because, well, if I'm feeling shame about the same thing for years on end, and yet it's not changing, clearly it's not helping me in any way to feel that way -- so I ought, for my health, to either fix it or stop feeling it).
[Aside: in my experience, the hallmark thought pattern of depression is loops: spirals of thoughts that are negative and cause anxiety / self-loathing, but don't provide a fix and have no mechanism for 'going away', so they just continue to spiral, negatively affecting mood and self-esteem etc, and probably causing destructive behaviors that give you more to feel anxious / hateful about. And I've observed that it's very hard to go through rational calculations with regard to yourself in private, for me at least, and so talking to people (therapists, friends, strangers, whatever) and being forced to vocalize and answer questions about your thought spirals can cause you to see logical ways to 'step out of them' that never seem clear when you're just thinking by yourself. Or whatever -- I could probably write about how this works for hours.]
So I looked closer at where my shame came from, and found that it wasn't that I had a negative self-concept on its own (something like "I am X", where X is negative), but rather that I was constantly seeing in my world reminders of someone I felt like I should have been, in a sense. I felt like I had been an extremely smart, high-potential kid growing up, but at some point, video game addiction + sleep deprivation + irresponsibility + depression had diverted me off that path, and ever since I have been constantly reminded of that fact and feel shame for not being that person. So I guess I had (have) a self-concept of 'being a failed version of who I could have been', or 'having never reached my potential'.
For some concrete examples:
- When I saw my reflection in things, I would criticize myself for seeming not-normal, goofy, or not.. like.. masculine enough? for a mid-20s male. Not that I wanted to be, like, buff, but I wanted to be a person who wouldn't strike others as goofy-looking. But I'd always see my bad posture from computer use and my haircut that I'm never happy with, and get stuck in loops looking at myself in the mirror and trying to figure out what I need to work on to fix it (work out this or that muscle, do yoga, figure out how to maintain a beard, whatever).
- A lot of times when I read really brilliant essays, on LW or other blogs, etc., about subjects I'm into, especially by autodidact/polymath types, I'd feel really bad because I felt like I could have been one of those people, but had failed to materialize. So I'd be reminded that I need to study more math, and write more, and read more books, and all these other things, in order to get there.
These are thoughts I have been having dozens of times a day.
The second big realization: that motivation borne out of shame is almost completely useless. Seeing your flaws and wanting to change them causes negative emotions in the moment, but it doesn't really lead to action, ever. A person who feels bad about being lanky doesn't often go to the gym, because that's not coming from a positive place and the whole action is closely coupled to negativity and self-loathing. And a person feeling bad for not being a clever polymath doesn't.. become one.. from negativity; that takes years of obsession and other behaviors that you can't curate through self-loathing.
(Well, it's possible that shame can induce motivation for immediate fixes, but I'm sure it doesn't cause long-term changes. I suspect that requires a desire to change that comes from a positive, empowered mindset.)
I'm not entirely sure what the 'permanent' fix for this is -- it doesn't seem to be as simple as redefining my self-concept to not want to be these people. But realizing this was going on in this way seemed like a huge eye-opening realization and almost immediately changed how I was looking at my neurotic behaviors / shames, and I think it's going to lead to progress. The next step, for now, I think, is focusing on mindfulness in an effort to become more able to control and ignore these neurotic shame feelings, now that I've convinced myself that I understand where they're coming from, and that they're unfair and irrational.
TL;DR:
- feelings of shame / neurotic spirals = places to look closely at in your psyche. They're probably directly related to self-concept issues.
- it's possible for negativity to come, rather than directly from your self-concept, from your concept of who you 'should have' or 'could have been'.
- shame-induced motivation is essentially useless. For me, at least. I've been trying to channel it into lifestyle changes for years, with essentially zero results.
Yeah, that's the exact same conclusion I'm pushing here. That and "you should feel equipped to come to this conclusion even if you're not an expert." I know.. several people, and have seen more online (including in this comment section) who seem okay with "yeah, it's negative one twelfth, isn't that crazy?" and I think that's really not ok.
My friend who's in a physics grad program promised me that it does eventually show up in QFT, and apparently also in nonlinear dynamics. Good enough for me, for now.
The assumed opinions I'm talking about are not the substance of your argument; they're things like "I think that most of these reactions are not only stupid, but they also show that American liberals inhabit a parallel universe", and what is implied in the use of phrases like 'completely hysterical', 'ridiculous', 'nonsensical', 'preposterous', 'deranged', 'which any moron could have done', 'basically a religion', 'disconnected from reality', 'save the pillar of their faith', etc. You're clearly not interested in discussion of your condemnation of liberals, and certainly not rational discussion. You take it as an obvious fact that they are all stupid, deranged morons.
So when you write "I’m also under no delusion that my post is going to have any effect on most of those who weren’t already convinced", I think you are confused. People who don't already agree with you won't be convinced because you obviously disdain them and are writing with obviously massive bias against them. Not because their opinions are "basically a religion, which no amount of evidence can possibly undermine."
I think your post would be much stronger if you just removed all your liberal-bashing entirely, quoted an example of someone saying hate crimes had gone up since Trump's election, and then did the research. I'm totally opposed to polemics because I think they have no good results. Especially the kind that is entirely pandering to one side and giving the finger to the other. (I also think you're wildly incorrect in your understanding of liberals, as revealed by some of your weird stereotypes, but this is not the place to try to convince you otherwise.) But I guess if that's the way people write in a certain community and you're writing for that community, you may as well join in. I prefer to categorically avoid communities that communicate like that - I've never found anything like rational discussion in one.
I also think such obvious bias makes your writing weaker even for people on your side. It's hard to take writing seriously that is clearly motivated by such an agenda and is clearly trying to get you to rally with it in your contempt for a common enemy.
It's true that politics is generally discouraged around here. But, also -- I'm the person who commented negatively on your post, and I want to point out that it wasn't going to be well-received even if politics were okay here. You wrote in a style that assumed, without justification, that your readers share a lot of your opinions, and that tends to alienate anyone who disagrees with you. Moreover, you write about those opinions as if they are not just true but obviously true, which tends to additionally infuriate anyone who disagrees with you. So I think your post's style was a specific example of the kind of 'mind-killing' that should be avoided.
I appreciate exhaustive research of any kind, and the body of your post was good for that. But the style of the frame around it made it clear that you had extremely low opinions of a large group of people and wanted to show it, and.. well, I personally don't think you should write that way ever, but especially not for this forum.
With an opening like
The idea that liberal elites are disconnected from reality has been a major theme of post-election reflections. Nowhere is this more obvious than in academia, where Trump’s victory resulted in completely hysterical reactions.
it's clear that this is written for people who already believe these things. The rest, unsurprisingly, confirms that. I thought LW tried to avoid politics? And, especially, pointless politically-motivated negativity. "Liberal-bashing" isn't very interesting, and I don't think there's a point in linking it on this site. Unfortunately downvoting is still disabled here.
I would like to remind people of some basic facts, which hopefully will bring them back to reality, although it probably won’t.
Either the author is writing for people who agree with them, in which case, petty jabs are just signaling, or they're trying to convince people to agree with them, in which case petty jabs make them less convincing, not more.
Also, the author should probably give a source for the claim that a wave of hate crimes was unleashed. I personally haven't heard that said anywhere, in real life or online. "almost everyone on Facebook was apparently convinced that buckets of mostly unverified anecdotes" is useless. Sure, it's okay to write about a phenomenon one personally observes but can't put numbers on -- but when the argument is "look how stupid these people are for a thing they did", it's important that everyone agrees they actually did it. Otherwise we can just invent actions for groups we don't like and then start taking shots at them.
It doesn't count in discussions of graph coloring, such as the four-color map theorem, and that's the kind of math this is most similar to. So you really need to specify.
Are you just wondering what 'pushing' means in this context? Or speculating about the existence of anti-gravity?
I'm pretty sure that this is just interpreting a region of low density as 'pushing' because it 'pulls less' than a region of average density would.
This is similar to how electron 'holes' in a material's band structure can be treated as positively charged particles.
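To put the "pulls less" point in symbols (a minimal sketch of the standard superposition argument, not anything from the linked discussion): a region whose density falls short of the background by $\delta\rho$ gravitates exactly like the uniform background plus a fictitious negative mass sitting in the void, and a negative mass repels test particles:

$$ \vec{g} = \vec{g}_{\text{uniform}} + \vec{g}_{\text{void}}, \qquad \vec{g}_{\text{void}} = +\frac{G\,\delta M}{r^{2}}\,\hat{r}, \qquad \delta M = \delta\rho\,V $$

where $\hat{r}$ points from the void toward the test particle. Nothing actually pushes; the "push" is just the missing share of the average pull.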
Don't you think there's some value in doing a more controlled study of it?
No, because it's not a possibility that when you thought you were doing math in the reals this whole time, you were actually doing math in the surreals. Using a system other than the normal one would need to be stated explicitly.
You had written
"I really want a group of people that I can trust to be truth seeking and also truth saying. LW had an emphasis for that and rationalists seem to be slipping away from it with "rationality is about winning"."
And I'm saying that LW is about rationality, and rationality is how you optimally do things, and truth-seeking is a side effect. The truth-seeking stuff you like in the rationality community exists because "a community about rationality" is naturally compelled to participate in truth-seeking, since it's useful and interesting to rationalists. But truth-seeking isn't inherently what rationality is.
Rationality is conceptually related to fitness. That is, "making optimal plays" should be equivalent to maximizing fitness within one's physical parameters. More rational creatures are going to be more fit than less rational ones, assuming no other tradeoffs.
It's irrelevant that creatures survive without being rational. Evolution is a statistical phenomenon and has nothing to do with it. If they were more rational, they'd survive better. Hence rationality is related to fitness with all physical variables kept the same. If it cost them resources to be more rational, maybe they wouldn't survive better, but that wouldn't be keeping the physical variables the same so it's not interesting to point that out.
If you took any organism on earth and replaced its brain with a perfectly rational circuit that used exactly the same resources, it would, I imagine, clobber other organisms of its type in 'fitness' by so incredibly much that it would dominate its carbon-brained equivalent to the point of extinction in two generations or less.
I didn't know what "shared art" meant in the initial post, and I still don't.
Interleaving isn't really the right way of getting consistent results for summations. Formal methods like Cesàro summation are the better way of doing things, and give the result 1/2 for that series. There's a pretty good overview in this wiki article about summing 1-2+3-4...
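For concreteness, here's a minimal Python sketch (my own illustration, assuming "that series" refers to Grandi's 1-1+1-1...): Cesàro summation just averages the partial sums, and for Grandi's series the running average converges to 1/2.

```python
def cesaro_means(terms):
    """Running averages of the partial sums of a series."""
    partial = 0.0
    total_of_partials = 0.0
    means = []
    for n, term in enumerate(terms, start=1):
        partial += term                      # n-th partial sum
        total_of_partials += partial
        means.append(total_of_partials / n)  # average of the first n partial sums
    return means

grandi = [(-1) ** n for n in range(10000)]  # 1, -1, 1, -1, ...
print(cesaro_means(grandi)[-1])  # 0.5, the Cesaro sum of Grandi's series
```

(1-2+3-4... is not summable by this plain averaging; its running means oscillate between 0 and 1/2. It takes iterated Cesàro averaging or Abel summation to assign it 1/4, as the linked article explains.)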
I know about Cesàro and Abel summation and vaguely understand analytic continuation and regularization techniques for deriving results from divergent series. And.. I strongly disagree with that last sentence. As, well, explained in this post, I think statements like "1+2+3+...=-1/12" are criminally deceptive.
Valid statements that eliminate the confusion are things like "1+2+3...=-1/12+O(infinity)", or "analytic_continuation(1+2+3+...)=-1/12", or "1#2#3...=-1/12", where # is a different operation that implies "addition with analytic continuation", or "1+2+3... # -1/12", where # is like = but implies analytic continuation. Or, for other series, "1-2+3-4... # 1/4", where # means "equality with Abel summation".
The massive abuse of notation in "1+2+3..=-1/12" combined with mathematicians telling the public "oh yeah isn't that crazy but it's totally true" basically amounts to gaslighting everyone about what arithmetic does and should be strongly discouraged.
Well. We should probably distinguish between what rationality is about and what LW/rationalist communities are about. Rationality-the-mental-art is, I think, about "making optimal plays" at whatever you're doing, which leads to winning (I prefer the former because it avoids the problem where you might only win probabilistically, which may mean you never actually win). But the community is definitely not based around "we're each trying to win on our own and maximize our own utility functions" or anything like that. The community is interested in truth seeking and exploring rationality and how to apply it and all of those things.
Evolution doesn't really apply. If some species could choose the way they want to evolve rationally over millions of years I expect they would clobber the competition at any goal they seek to achieve. Evolution is a big probabilistic lottery with no individuals playing it.
If you're trying to win at "achieve X", and lying is the best way to do that, then you can lie. If you're trying to win at "achieve X while remaining morally upright, including not lying", or whatever, then you don't lie. Choosing to lie or not parameterizes the game. In either game, there's a "best way to play", and rationality is the technique for finding it.
Of course it's true that model-building may not be the highest-return activity towards a particular goal. If you're trying to make as much money as possible, you'll probably benefit much more from starting a business and re-investing the profits asap and ignoring rationality entirely. But doing so with rational approaches will still beat doing so without rational approaches. If you don't have any particular goal, or you're just generally trying to learn how to win more at things, or be generally more efficacious, then learning rationality abstractly is a good way to proceed.
"Spending time to learn rationality" certainly isn't the best play towards most goals, but it appears to be a good one if you get high returns from it or if you have many long-term goals you don't know how to work on. (That's my feeling at least. I could be wrong, and someone who's better at finding good strategies will end up doing better than me.)
In summary, "rationality is about winning" means if you're put in situations where you have goals, rational approaches tend to win. Statistically. Like, it might take a long time before rational approaches win. There might not be enough time for it to happen. It's the 'asymptotic behavior".
An example: if everyone cared a lot about chess, and your goal was to be the best at chess, you could get a lot of the way by playing a lot of chess. But if someone comes along who also played a lot of games, they might start beating you. So you work to beat them, and they work to beat you, and you start training. Who eventually wins? Of course there are mental faculties, memory capabilities, maybe patience, emotional things at work. But the idea is, you'll become the best player you can become given enough time (theoretically) via being maximally rational. If there are other techniques that are better than rationality, well, rationality will eventually find them - the whole point is that finding the best techniques is precisely rational. It doesn't mean you will win; there are cosmic forces against you. It means you're optimizing your ability to win.
It's analogous to how, if a religion managed to find actually convincing proof of a divine force in the universe, that force would immediately be the domain of science. There are no observable phenomena that aren't the domain of science. So the only things that can be religious are things you can't possibly prove occur. Equivalently, it's always rational to use the best strategy. So if you found a new strategy, that would become the rationalists' choice too. So the rationalist will do at least as well as you, and if you're not jumping to better strategies when they come along, the rationalist will win. (On average, over all time.)
Interesting, I've never looked closely at these infinitely-long numbers before.
In the first example, it looks like you've described the infinite series 9(1+10+10^2+10^3...), which if you ignore radii of convergence is 9*1/(1-x) evaluated at x=10, giving 9/-9=-1. I assume without checking that this is what Cesàro or Abel summation of that series would give (which is the technical way to get to 1+2+3+4..=-1/12, though I still reject that that's a fair use of the symbols '+' and '=' without qualification).
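A quick arithmetic check of the "...999 = -1" reading (my own sketch): a string of k nines is 10^k - 1, so adding 1 to it leaves 0 modulo 10^k for every k, which is the precise sense in which an infinite string of nines behaves like -1.

```python
# k nines plus one is divisible by 10^k, for every k:
for k in range(1, 8):
    nines = 10**k - 1           # 9, 99, 999, ...
    print((nines + 1) % 10**k)  # always 0, i.e. "...999 + 1 = 0"
```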
Re the second part: interesting. Nothing is immediately coming to mind.
Fixed the typo. Also changed the argument there entirely: I think that the easy reason to assume we're talking about real numbers instead of rationals is just that that's the default when doing math, not because 0.999... looks like a real number due to the decimal representation. Skips the problem entirely.
Well - I'm still getting the impression that you're misunderstanding the point of the virtues, so I'm not sure I agree that we're talking past each other. The virtues, as I read them, are describing characteristics of rational thought. It is not required that rational thinkers appear to behave rationally to others, or act according to the virtues, at all. Lying very well may be a good, or the best, play in a social situation.
Appearing rational may be a good play. Demonstrating rationality can cause people to trust you and your ability to make good decisions not swayed by whim, bias, or influence. But there are other effective social strategies (politicians, for instance, tend to get by much more on rhetoric than reasoning).
So if you're talking about 'rationality as winning', these virtues are characteristics of the mental program you run to win better. They may or may not correlate with how rational you appear to others. If you're trying to find ways to appear more rational, then certainly, look at the virtues as a list of "things to display". But if you're trying to behave rationally, ignore the display part and focus on how you reason when confronted with optimization problems in your life (in which 'winning' is 'playing optimally', or more optimally than you otherwise would).
They're all sort of intrinsic to good reasoning, too, though in the Yudkowsky post this is heavily concealed under grandiloquent language.
I guess I'd put it like this: consider the simplistic model where human minds consist of:
- a bundle of imperfect computing machinery that makes errors, is swayed by biases and emotional response, etc, and
- a bundle of models for looking at the world that you get to make your decisions according to, of which several seem reasonable and all can change over time
And you're tasked with trying to optimize the way you play in real-world situations. Well, the best players are the ones who optimize their computing machinery, use their machinery to correctly parse and sift through models, and accurately update their models to reflect the way the world appears to (probably) work, because they will over time come to the best determinations of plays in reality and thus, probabilistically, 'win'.
So optimizing your imperfect computing machinery is "perfectionism", "precision", and "scholarship", and the unnamed one which I would dub "playing to win" (letting winning be your metric, not your own reasoning). Correctly managing your models (according to, like, information theory, which should be optimal) is "lightness", "relinquishment", "evenness", "empiricism", and "simplicity". And then, knowing that you don't have all the models and that your models may yet change, to play optimally anyway you compute that you need to actively seek new models ('curiosity'), play as though your data may still be wrong ('humility'), and let the world challenge your models and give you data that your observations do not ('argument').
And the idea is that these abilities are intrinsic to winning, if this is a good approximation of humans (which I think it is). So they describe virtues of how humans manage this game of imperfect machinery and models, which may only correlate with external behavior.
Ah, of course, my mistake. I was trying to hand-wave an argument that we should be looking at reals instead of rationals (which isn't inherently true once you already know that 0.999...=1, but seems like it should be before you've determined that). I foolishly didn't think twice about what I had written to see if it made sense.
I still think it's true that "0.999..." compels you to look at the definition of real numbers, not rationals. Just need to figure out a plausible sounding justification for that.
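(For reference, the standard evaluation is just a geometric series, and the limit it produces is itself rational, which is part of why the justification is slippery:

$$ 0.999\ldots = \sum_{k=1}^{\infty} \frac{9}{10^{k}} = 9 \cdot \frac{1/10}{1 - 1/10} = 1 $$

so nothing about the value itself forces you out of the rationals; it's the general theory of decimal expansions that lives most naturally in the reals.)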
This reminds me of an effect I've noticed a few times:
I observe that in debates, having two (or more) arguments for your case is usually less effective than having one.
For example, if you're trying to convince someone (for some reason) that "yes, global warming is real", you might have two arguments that seem good to you:
- scientists almost universally agree that it is real
- the graphs of global temperature show very clearly that it is real
But if you actually cite both of these arguments, you start to sound weaker than if you picked one and stuck with it.
With one argument your stance is "look, this is the argument. you either need to accept this argument or show why it doesn't work -- seriously, I'm not letting you get past this". And if they find a loophole in your argument (maybe they find a way to believe the data is totally wrong, or something), then you can bust out another argument.
But when you present two arguments at once, it sounds like you're just fishing for arguments. You're one of those people who's got a laundry list of reasons for their side, which is something that everyone on both sides always has (weirdly enough), and your stance has become "look how many arguments there are" instead of "look HOW CONVINCING these arguments are". So you become easier to disbelieve.
As it happens, there are many good arguments for the same point, in many cases. That's a common feature of Things That Are True -- their truth can be reached in many different ways. But as a person arguing with a human, in a social setting, you often get a lot more mileage out of insisting they fight against one good argument instead of just overwhelming them with how many arguments you've got.
The weak arguments mentioned in the linked article multiply this effect considerably. In my mind there's like, two obvious arguments against theism that you should sit on and not waver from: "What causes you to think this is correct (over anything else, or just over 'we don't know')?" and, if they cite their personal experience / mental phenomenon of religious feelings, "Why do you believe your mental feelings have weight when human minds are so notoriously unreliable?"
Arguments about Jesus' existence are totally counterproductive - they can only weaken your case, since, after all, who would be convinced by that who wasn't already convinced by one of the strong arguments?
Thanks! Validation really, really helps with making more. I hope to, though I'm not sure I can churn them out that quickly since I have to wait for an idea to come along.
That's a good approach for things where there's a 'real answer' out there somewhere. I think it's often the case that there's no good answer. There might be a group of people saying they found a solution, and since there are no other solutions, they think you should fully buy into theirs and accept whatever absurdities come packaged with it (for instance, consider how you'd approach the 1+2+3+4+5..=-1/12 proof if you were doing math before calculus existed). I think it's very important to reject seemingly good answers on their own merits even if there isn't a better answer around. Indeed, this is one of the processes that can lead to finding a better answer.
Well, Numberphile says they appear all over physics. That's not actually true. They appear in like two places in physics, both deep inside QFT, mentioned here.
QFT uses a concept called renormalization to drop infinities all over the place, but it's quite sketchy and will probably not appear in whatever final form physics takes when humanity figures it all out. It's advanced stuff and not, imo, worth trying to understand as a layperson (unless you already know quantum mechanics in which case knock yourself out).
If it helps -- I don't understand what the second half (from the part about Youtube videos onwards) has to do with fighting or optimizing styles.
I also didn't glean what an 'optimizing style' is, so I think the point is lost on me.
Regardless of your laundry list of reasons not to edit your post, you should read "I'm confused about what you wrote" comments, if you believe them to be legitimate criticisms, as a sign that your personal filter on your own writing is not catching certain problems, and you might benefit greatly from taking it as an opportunity to work on your filter so you can see what we see. Upgrading your filter on your own work leads to systematic improvement across all of your work instead of just improvements to the one piece we're talking about.
If you're worried about responsiveness, you might get further by just asking for more detail before making changes instead of explaining, approximately, "I don't feel like making changes because I'm not convinced that it'll be a good use of my time or that I'll get more responses to make it successful". (I won't fault you for lacking motivation, of course not, that's the battle we all fight -- but I also suspect that you'd profit considerably from finding that motivation, since it might lead to systematic improvement of your writing.)
tbh I haven't figured out how to use Arbital yet. I think it's lacking in the UX department. I wish the front page discriminated by categories or something, because I find myself not caring about anything I'm reading.
I think you've subtly misinterpreted each of the virtues (not that I think the twelve-virtue list is special; they're just twelve good aspects of rational thought).
The virtues apply to your mental process for parsing and making predictions about the world. They don't exactly match the real-world usages of these terms.
Consider these in the context of winning a game. Let's talk about a real-world game with social elements, to make it harder, rather than something like chess. How about "Suppose you're a small business owner. How do you beat the competition?"
1) Curiosity: refers to the fact that you should be willing to consider new theories, or theories at all instead of intuition. You're willing to consider, say, that "customers return more often if you make a point to be more polite". The arational business owner might lose because they think they treat people perfectly fine, and don't consider changing their behavior.
2-4) Relinquishment/lightness/evenness refers to letting your beliefs be swayed by the evidence, without personal bias. In your example: seeing a woman appear to be cut in half absolutely does not cause you to think she's actually cut in half. That theory remains highly unlikely. But it does mean that you have to reject theories that don't allow the appearance of that, and go looking for a more likely explanation. (If you inspect the whole system in detail and come up with nothing, maybe she was actually cut in half! But extraordinary claims require extraordinary evidence, so you better ask everyone you know, and leave some other very extreme theories (such as 'it's all CGI') as valid, as well.)
In my example, the rational business-owner acts more polite to see if it helps retain customers, and correctly (read: mathematically or pseudo-mathematically) interprets the results, being convinced only if they are truly convincing, and unconvinced if they are truly not. The arational business owner doesn't check, or does and massages the results to fit what they wanted to see, or ignores the results, or disbelieves the results because they don't match their expectations. And loses.
5) Argument - if you don't believe that changing your behavior retains customers, and your business partner or employee says they do, do you listen? What if they make a compelling case? The arational owner ignores them, still trusting their own intuition. The rational owner pays attention and is willing to be convinced - or convince them of the opposite, if there's evidence enough to do so. Argument is on the list because it's how two fallible but truth-seeking parties find common truth and check reasoning. Not because arguing is just Generally A Good Idea. It's often not.
6) Empiricism - this is about debating results, not words. It's not about collecting data. Collecting data might be a good play, or it might not. Depends on the situation. But it's still in the scope of rationalism to evaluate whether it is or not.
7) Simplicity - this doesn't mean "pick simple strategies in life". This means "prefer simple explanations over complex ones". If you lower your prices and it's a Monday and you get more sales, you prefer the lower prices explanation over "people buy more on Mondays" because it's simpler - it doesn't assume invisible, weird forces; it makes more sense without a more complex model of the world. But you can always pursue the conclusion further if you need to. It could still be wrong.
8) Humility - refers to being internally willing to be fallible. Not to the social trait of humility. Your rational decision making can be humble even if you come across, socially, as the least humble person anyone knows. The humble business owner realizes they've made a mistake with a new policy and reverses it because not doing so is a worse play. The arational business owner keeps going when the evidence is against them because they still trust their initial calculation when later evidence disagrees.
9-10) Perfectionism/Precision: if it is true that in social games you don't need to be perfect, just better than others, then "perfect play" is maximizing P(your score is better than others), not maximizing E(your score). You can always try to play better, but you have to play the right game. (A toy simulation of this distinction follows the list.)
And if committing N resources to something gets a good chance of winning, while committing N+1 gets a better chance but has negative effects on your life in other ways (say, your mental health), then it can be the right play to commit only N. Perfect and precise play is about the larger game of your life, not the current game. The best play in the current game might be imperfect and imprecise, and that's fine.
11) Scholarship - certainly it doesn't always make sense when weighed against other things. Until it does. The person on the poverty line who learns more when they have time gains powers the others don't have. It may unlock ways out that others can't access. As with everything else, it must be weighed against the other exigencies of their life.
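Here's that toy simulation of the P(win)-versus-E(score) distinction (my own hypothetical numbers, not anything from the virtues post): against an opponent who reliably scores 102, a risky strategy with a lower expected score beats a safe strategy with a higher one.

```python
import random

def p_win(mean, std, opponent=102.0, trials=100000):
    """Estimate P(our score beats the opponent's) by simulation."""
    wins = sum(random.gauss(mean, std) > opponent for _ in range(trials))
    return wins / trials

print(p_win(100, 1))   # ~0.02: higher expected score, almost never wins
print(p_win(95, 10))   # ~0.24: lower expected score, wins far more often
```

Maximizing E(score) picks the first strategy; maximizing P(win) picks the second. "Playing the right game" is knowing which one you're in.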
(Also, by the way, I'm not sure what your title means. Maybe rephrase it?)
I strongly encourage you to do it. I'm typing up a post right now specifically encouraging people to summarize fields in LW discussion threads as a useful way to contribute, and I think I'm just gonna use this as an example since it's on my mind..
This is helpful, thanks.
In the "Rationality is about winning" train of thought, I'd guess that anything materially different in post-rationality (tm) would be eventually subsumed into the 'rationality' umbrella if it works, since it would, well, win. The model of it as a social divide seems immediately appealing for making sense of the ecosystem.
Any chance you could be bothered to write a post explaining what you're talking about, at a survey/overview level?
I disagree. The point is that most comments are comments we want to have around, and so we should encourage them. I know that personally I'm unmotivated to comment, and especially to put more than a couple minutes of work into a comment, because I get the impression that no one cares if I do or not.
One general suggestion to everyone: upvote more.
It feels a lot more fun to be involved in this kind of community when participating is rewarded. I think we'd benefit by upvoting good posts and comments a lot more often (based on the "do I want this around?" metric, not the "do I agree with this poster" metric). I know that personally, if I got 10-20 upvotes on a decent post or comment, I'd be a lot more motivated to put more time in to make a good one.
I think the appropriate behavior is, when reading a comment thread, to upvote almost every comment unless you're not sure it's a net positive to keep around - then downvote if you're sure it's bad, or don't touch it if you're ambivalent. Or, alternatively: upvote comments you think someone else would be glad to have read (most of them), don't touch comments that are just "I agree" without meat, and downvote comments that don't belong or are poorly crafted.
This has the useful property of being an almost zero effort expenditure for the users that (I suspect) would have a larger effect if implemented collectively.
I only heard this phrase "postrationality" for the first time a few days ago, maybe because I don't keep up with the rationality-blog-metaverse that well, and I really don't understand it.
All the descriptions I come across when I look for them seem to describe "rationality, plus being willing to talk about human experience too", but I thought the LW-sphere was already into talking about human experience and whatnot. So is it just "we're not comfortable talking about human experience in the rationalist sphere so we made our own sphere"? That is, a cultural divide?
That first link writes "Postrationality recognizes that System 1 and System 2 (if they even exist) have different strengths and weaknesses, and what we need is an appropriate interplay between the two.". Yet I would imagine everyone on LW would be interested in talking about System 1 and how it works and anything interesting we can say about it. So what's the difference?
Why do you think there is nothing wrong with your delivery? Multiple people have told you that there was. Is that not evidence that there was? Especially because it's the community's opinions that count, not yours?
Rude refers to your method of communicating, not the content of what you said. "I mean that you do not know of the subject, and I do. I can explain it, and you might understand" is very rude, and pointlessly so.
Why do you think you know how much game theory I know?
edit: I edited out the "Is English your first language" bit. That was unnecessarily rude.
I'm not trying to welcome you, I'm trying to explain why your posts were moved to drafts against your will.
I'm not arguing with or talking about Nash's theory. I'm telling you that your posts are low quality and you need to fix that if you want a good response.
My point in the last paragraph is that you are treating everyone like dirt and coming across as repulsive and egotistical.
"You are incorrect" was referring to "No, you can't give me feedback.". Yes, we can. If you're not receptive to feedback, you should probably leave this site. You're also going to struggle to socialize with any human beings anywhere with that attitude. Everyone will dislike you.
Keep in mind that it's irrelevant how smart or right you are if no one wants to talk to you.
How could you possibly know what a random person knows of? Why are you so rude?
Re this post: http://lesswrong.com/lw/ogp/a_proposal_for_a_simpler_solution_to_all_these/
You wrote something provocative but provided no arguments or explanations or examples or anything. That's why it's low-quality. It doesn't matter how good your idea is if you don't bother to do any legwork to show anyone else. I for one have no idea why your idea would work, and I don't care to do the work to figure it out, because the only reason I have to do that work is that you said so.
Also, you might want to tackle something more concrete than "all these difficult observations and problems". First, it's definitely true that your 'solution' doesn't solve all the problems. Maybe it helps with some. So which ones? Talk about those.
Also, your writing is exhaustingly vague ("I also value compression and time in this sense, and so I think I can propose a subject that might serve as an "ideal introduction" (I have an accurate meaning for this phrase I won't introduce atm)."). This is really hard not to lose interest in while reading, and it's only two random sample sentences.
Re http://lesswrong.com/lw/ogt/do_we_share_a_definition_for_the_word_ideal/, you're going to have to do more work to make an interesting discussion. It's not like "Oh, Flinter, good point, you and (all of us) might have different meanings for 'ideal'!" is going to happen. It's on you to show why this is interesting. What made you think the meanings are different? What different results come from that? What's your definition? What do you think other peoples' are, and why are they worse?
I agree with Vaniver that those two posts in their current form should have been at least heavily downvoted. Though that doesn't happen much in practice here since traffic is low. I'm not sure what the removal policy is but I guess it probably applied.
Also, if you keep writing things like "No, you can't give me feedback. It's above you. I have come here to explain it to you. I made 3 threads, and they are equally important." you're going to be banned for being an ass, no question. You're also wildly incorrect, but that's another matter.
I'm not asking for people not to talk about problems they have. I'm just criticizing the specifically extra-insensitive way of doing it in the comment I replied to. There are nicer, less intentionally hurtful ways to say the exact same thing.
http://www.nature.com/news/google-reveals-secret-test-of-ai-bot-to-beat-top-go-players-1.21253
While I think it's fine to call someone out by name if nothing else is working, I think the way you're doing it is unnecessarily antagonistic and seemingly intentionally spiteful or at least utterly un-empathetic, and what you're doing can (and in my opinion ought to) be done empathetically, for cohesion and not hurting people excessively and whatnot.
Giving an excuse about why it's okay that you, specifically, are doing it, and declaring that you're "naming and shaming" on purpose, makes it worse. It's already shaming the person without saying that you're very aware that it is; you ought to be taking a "I'm sorry I have to do this" tone instead of a "I'm immune to repercussions, so I'm gonna make sure this stings extra!" tone.
At least, this is how it would work in the several relatively typical (American) social groups that I'm familiar with.
No, markets only work for services whose costs are high enough that participants care and model their behavior accordingly. In my observation, specifically, these people behave this way for reasons other than their personal comfort, and the costs aren't high enough (or they're not aware that they're high enough) to influence their behavior.
The 'reason to speculate' is that it's interesting to talk about it. That's all.
I think you get more of that in Texas and the southeast. It (by my observation - very much a stereotype) correlates with driving big trucks, eating big meals, liking steak dinners and soda and big desserts, obesity, not caring about the environment, and taking strong unwavering opinions on things. And with conservatism, but not exclusively.
I distinctly remember driving in my high school band director's car once, maybe a decade ago, and he was blasting the AC at max when it maybe needed to be on the lowest setting, tops -- it seemed to reflect a mindset that "I want to get cold NOW" when it's hot, to the point of overreaction. Maybe a mindset that - if the sun is bright and on my face, I need a lot of cold air, even if the rest of me doesn't need it? Or maybe, 'it feels hot in the world so I want a lot of cold air'. Certainly there was no realization that it was excessive, and he didn't seem bothered by the unnecessary use of resources. I've noticed this same mindset a lot ever since, and I still don't understand it.
Is there an index of everything I ought to read to be 'up-to-date' in the rationalist community? I keep finding new stuff: new ancient LW posts, new bloggers, etc. There's also this on the Wiki, which is useful (but is curiously not what you find when you click on 'all pages' on the wiki; that instead gets a page with 3 articles on it?). But I think that list is probably more than I want - a lot of it is filler/fluff (though I plan to at least skim everything, if I don't burn out).
I just want to be able to make sure, if I try to post something I think is new on here, that it hasn't already been talked to death.
Thanks, this is useful.
I've been thinking about doing this - I'm trying to learn math (real/complex analysis, abstract algebra) for 'long term retention' as I'm not really using it right now but want to get ahead of learning it later, and struggling with retention of concepts and core proofs.
Do you think it's going to be useful to share decks for this purpose? I feel like there are many benefits to making my own cards and adding them as I progress through the material, and being handed a deck for the whole subject at once will be overwhelming.
Here's an opinion on this that I haven't seen voiced yet:
I have trouble being excited about the 'rationalist community' because it turns out it's actually the "AI doomsday cult", and never seems to get very far away from that.
As a person who thinks we have far bigger fish to fry than impending existential AI risk - like problems with how irrational most people everywhere (including us) are, or how divorced rationality is from our political discussions / collective decision-making process, or how climate change or war might destroy our relatively-peaceful global state before AI even exists - I find that I have little desire to try to contribute here. Being a member of this community seems to require buying into the AI thing, and I don't, so I don't feel like a member.
(I'm not saying that AI stuff shouldn't be discussed. I'd like it to dominate the discussion a lot less.)
I think this community would have an easier time keeping members, not alienating potential members, and getting more useful discussion done, if the discussions were more located around rationality and effectiveness in general, instead of the esteemed founder's pet obsession.
This is interesting, but I don't understand your questions at end. What simulation theory are you talking about?
By the way, one of your links is broken and should be http://file.scirp.org/pdf/OPJ_2016063013301299.pdf .
Keep in mind that there is a significant seasonal variation in emissions from the sun, such as neutrinos, which can easily penetrate any experimental apparatus on earth. This is simple to rationalize: the sun emits massive numbers of neutrinos, and the Earth-Sun distance varies over the year, so the flux arriving at a detector varies with the inverse square of that distance.
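To put a number on it (standard orbital figures): the Earth-Sun distance runs from about 147.1 million km at perihelion in early January to about 152.1 million km at aphelion in early July, so the inverse-square flux swings by roughly 7% over the year:

$$ \frac{\Phi_{\text{Jan}}}{\Phi_{\text{Jul}}} = \left(\frac{152.1}{147.1}\right)^{2} \approx 1.07 $$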
By far the first thing to rule out would be neutrinos affecting nuclear decay, before we start wondering about dark matter or anything like that. Everyone in the business has thought of this, of course: http://physicsworld.com/cws/article/news/2008/oct/02/the-mystery-of-the-varying-nuclear-decay .
Occam's razor suggests you should be extremely skeptical of any claim that something besides neutrinos is responsible for this effect, even if the mechanism hasn't been figured out yet.
I double majored in physics and computer science as an undergrad at a pretty good school.
My observation is this:
The computer science students had a much easier time getting jobs, because getting a job with mediocre software engineering experience is pretty easy (in the US in today's market). I did this with undeservedly little effort.
The physics students were, in general, completely capable of putting in 6 months of work to become as employable as the computer science students. I have several friends who majored in things completely non-technical, but by spending a few months learning to program were able to get employed in the field. The physics students from my classes were easily smart enough to do this, though most did not.
To maximize the ease of getting a job while in physics, take a few programming courses on the side. If you apply yourself and are reasonably talented it should be doable.
I think the 'right' approach (for maximizing happiness and effectiveness) is to major in what you find the most enjoyable and do the due diligence to become employable on the side. And maximize any synergies between the two (do programming in physics internships, etc).
I watched #3 again and I'm pretty convinced you're right. It is strange, seeing it totally differently once I have a theory to match.
I strongly disagree with the approaches usually recommended online, which involve some mixture of sites like Codecademy, looking into open source projects, and lots of other hard-to-motivate things. Maybe my brain works differently, but those never appealed to me. I can't do book learning, and I can't make myself just up and dedicate to something I'm not already drawn to. If you're similar, try this instead:
- Pick a thing that you have no idea how to make.
- Try to make it.
Now, when I say "try"... new programmers often envision just sitting down and writing, but when they try it they realize they have no idea how to do anything. Their mistake is that, actually, sitting down and knowing what to do is just not what coding is like. I always surprise people who are learning to code with this fact: when I'm writing code in any language other than my main ones (Java, mostly..), I google something approximately once every two minutes. I spend most of my time searching for how to do even the most basic things. When it's time to actually make something work, it's usually just a few minutes of coding after much more time spent learning.
You should try to make the "minimum viable product" of whatever you want to make first.
If it's a game, get a screen showing - try to do it in less than an hour. Don't get sidetracked by anything else; get the screen up. Then get a character moving with arrow keys. Don't touch anything until you have a baseline you can iterate on, because every change you make should be immediately reflected in the product. Until you can see quick results from your hard work you're not going to get sucked in.
If it's a website or a product, get the server running in less than an hour. Pick a framework and a platform and go - don't get caught on the details. Setting up websites is secretly easy (python -m SimpleHTTPServer !) but if you've never done it you won't know that. If you need one, set up a database right after. Get started quickly. It's possible with almost every architecture if you just search for cheat sheets and quick-start guides and stuff. You can fix your mistakes later, or start again if something goes wrong.
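To show how little is hiding behind that one-liner (a minimal sketch; SimpleHTTPServer is the Python 2 module name, and `python3 -m http.server` is the modern spelling), the whole "web server" amounts to a few lines of Python 3:

```python
import http.server
import socketserver

PORT = 8000  # arbitrary port for this sketch

# Serve the current directory over HTTP, same as `python -m SimpleHTTPServer`.
handler = http.server.SimpleHTTPRequestHandler
with socketserver.TCPServer(("", PORT), handler) as httpd:
    print("Serving at http://localhost:%d" % PORT)
    httpd.serve_forever()
```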
If you do something tedious, automate it. I have a shell script that copies some Javascript libraries and HTML/JS templates into a new Dropbox folder and starts a server running there, so I can go from naming my project to having an iterable prototype with some common elements I always reuse in less than five minutes. That gets me off the ground much faster, and in less than 50 lines of script.
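For the curious, here's a hypothetical sketch of what such a bootstrap script can look like (mine is shell, but the same idea works in Python; every path and template name below is made up):

```python
import shutil
import subprocess
import sys
from pathlib import Path

TEMPLATES = Path.home() / "templates" / "webapp"  # hypothetical boilerplate folder

def new_project(name):
    """Copy boilerplate into a fresh folder and start serving it immediately."""
    dest = Path.home() / "Dropbox" / name
    shutil.copytree(str(TEMPLATES), str(dest))  # JS libraries, HTML/JS templates
    # Serve the new folder so there's an iterable prototype right away.
    subprocess.run([sys.executable, "-m", "http.server", "8000"], cwd=str(dest))

if __name__ == "__main__":
    new_project(sys.argv[1])
```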
If you like algorithms or math or whatever, sure, do Project Euler or join TopCoder - those are fun. The competition will inspire some people to be fantastic at coding, which is great. I never got sucked in for some reason, even though I'm really competitive.
If you use open source stuff, sure, take a look at that. I'm only motivated to fix things that I find lacking in tools that I use, which in practice has never led to my contributing to open source. Rather, I find myself making clones of closed software so I can add features to it..
Oh, and start using Git early on. It's pretty great. GitHub is neat too, and it basically acts as a resume if you go into programming. But remember - setting it up is secretly easy, even if you have no idea what you're doing. Somehow things you don't understand are off-putting until you look back and realize how simple they were.
Hmm, that's all that comes to mind for now. Hope it helps.
Yeah, it can definitely be done for cheaper. In my case, going through college and such, I got new frames every year or two (between breaking them and starting to hate the style..). The bigger expense was contacts, which we either didn't have insurance for or it didn't cover, coming out to $100-150/year depending on how often I lost or damaged them.