Posts

How do you identify complex systems? 2016-12-29T04:39:51.637Z · score: 4 (4 votes)
Positive vs. Normative Rationality 2016-12-16T06:50:36.270Z · score: 1 (2 votes)
Traditions and Rationality. 2016-12-10T05:08:09.506Z · score: 7 (7 votes)
"What is Wrong With our Thoughts" -David Stove (1991) 2016-12-08T01:27:35.931Z · score: 12 (9 votes)
Unfortunate Information 2016-12-06T05:31:07.099Z · score: 8 (9 votes)
Nassim Taleb on Election Forecasting 2016-11-26T19:06:05.308Z · score: 8 (8 votes)

Comments

Comment by natasharostova on What are you surprised people don't just buy? · 2017-02-13T23:48:53.138Z · score: 0 (0 votes) · LW · GW

This.

Comment by natasharostova on A question about the rules · 2017-02-04T01:13:28.039Z · score: 0 (0 votes) · LW · GW

Yeah that's totally cool. LW style is different though. Not necessarily in a good way, and this difference might even be why it's less popular than other sites these days. But it's different in that LW doesn't, as far as I have observed, want lots of people from different sides. It wants an almost algorithmic approach to reality, where more colorful language is viewed as disrupting the truth by inflaming tribal parts of your brain.

Everything you're saying is totally reasonable for someone who doesn't understand the very very specific thing LW is trying to accomplish, and the idiosyncratic rules of engagement for this site.

Personally, I don't come here as often as other sites, in part because these rules can, at times, feel stringent. Shrug.

Comment by natasharostova on A question about the rules · 2017-02-02T18:17:42.176Z · score: 2 (2 votes) · LW · GW

The style of your blog is very much at odds with the style of Less Wrong. I would never submit anything here that classified what 'liberals' think as one cohesive set and then attacked that classification. Writing here should map more to statistical estimation and modeling, where every word and claim is scrutinized, thoughtful, and attempts to avoid invoking needless emotion. That last point is harder to nail down. It is of course possible to have an excitable tone that runs orthogonal to the strict argument, but it's pretty hard to do right.

I do, sometimes, write more in line with the tone you choose on my personal blog, but I wouldn't ever submit it here. And as I write and think more, I've become increasingly convinced it's not the best way to think. Trying to be persuasive, methodical, and charitable is more fun, since it's much more challenging.

You'd have better luck in the reactosphere, which probably gets more readers than LW anyway.

Comment by natasharostova on Odds ratios and conditional risk ratios · 2017-01-25T05:07:54.867Z · score: 1 (1 votes) · LW · GW

I thought it was interesting -- and frustrating for you. I haven't invested the time into proving to myself you're right, but in the case that you are I hope you're able to get someone to verify and lend you their credibility.

Why do you think two senior biostats guys would disagree with you if it were obviously wrong? I have worked with enough academics to know they are far from infallible, but I'm curious about your analysis of this question.

Comment by natasharostova on Metrics to evaluate a Presidency · 2017-01-24T20:55:29.705Z · score: 0 (0 votes) · LW · GW

Coming up with criteria and metrics on the economy is pretty easy

I disagree. In fact, I think coming up with criteria and metrics on the economy is profoundly challenging within the US context. We know there are right-tail events (inflation, unemployment, etc.) that are very strong signals. But when those are all generally stable, or within the realm of stability, while the variation across demographics and geographies of the US is huge, the value of the metrics can start to dramatically collapse, IMO.

Comment by natasharostova on Metrics to evaluate a Presidency · 2017-01-24T20:53:51.034Z · score: 1 (1 votes) · LW · GW

It's hard to think of how one could do a lit review on that without, like, a thousand sources to try and characterize the general scope of the problem.

Comment by natasharostova on Could a Neuroscientist Understand a Microprocessor? · 2017-01-22T23:03:44.867Z · score: 1 (1 votes) · LW · GW

This is pretty cool. It reminds me of an article I read recently on brain surgery (https://www.nytimes.com/2016/01/03/magazine/karl-ove-knausgaard-on-the-terrible-beauty-of-brain-surgery.html), where the surgeon keeps the patient awake and zaps different parts of the brain to see what they map to. They don't even pretend to understand the system; they just try to map simple correlations.

Comment by natasharostova on 0.999...=1: Another Rationality Litmus Test · 2017-01-22T06:50:22.658Z · score: 1 (1 votes) · LW · GW

[Note, after rereading your post my comment is tangential]

I have always been sympathetic to the argument, from people first presented with this, that the two are different. Understanding how math deals with infinity basically requires already knowing the mathematical structure supporting it. I'm not particularly gifted at math, but the first four weeks of real analysis really changed the way I think, because it was basically a condensed, rapid upload of centuries of collaborative work from some of the smartest men to ever exist right into my brain.

Otherwise, at least in my experience, we operate in a discrete world that moves through time. So, what I predict is happening, is that when you ask that question to people their best approximation is a discrete world ticking through time.

Is 0.999...=1? Well, with each tick of time another 9 is appended; when the question is finally answered, time stops, and you're left with some finite remainder [0.0...01]. In their mind it's a discrete algorithm running through time.

The reality that it's a limit, operating outside of time, instantaneously, is hard to grasp, because it took brilliant men centuries to figure out this profoundly unintuitive result. We understand it because we learned it.
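Written out, the timeless view is just a limit of partial sums (the standard real-analysis statement, included here for concreteness):

```latex
% Partial sum with n nines, and its limit
0.\underbrace{99\ldots9}_{n\ \text{nines}}
  \;=\; \sum_{k=1}^{n} \frac{9}{10^{k}}
  \;=\; 1 - 10^{-n}
\qquad\Longrightarrow\qquad
0.999\ldots \;=\; \lim_{n\to\infty}\left(1 - 10^{-n}\right) \;=\; 1
```

Nothing "ticks": the limit is a single fact about the whole sequence, which is exactly the part the discrete-time intuition misses.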

Comment by natasharostova on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-20T18:27:31.839Z · score: 0 (0 votes) · LW · GW

Can we elect a dictator?

Comment by natasharostova on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-19T20:55:28.985Z · score: 1 (1 votes) · LW · GW

Sincere question: do you think the SSC comments section accomplishes politics while filtering out the foam, spittle, etc.? (Or perhaps a comments section is more robust because bad comments can simply be ignored, which isn't the same on a forum?)

Having no moderator experience, I guess there is probably a lot on that end that I don't know.

Comment by natasharostova on Thoughts on "Operation Make Less Wrong the single conversational locus", Month 1 · 2017-01-19T19:22:43.137Z · score: 4 (4 votes) · LW · GW

The things most people are interested in discussing are frowned upon/banned from discussion on LW. That's why they go to SSC. The world has changed in the past 10 years, and the conversational rules and restrictions of 2009 no longer make sense today.

The rationalsphere, if you expand it to include blogs like Marginal Revolution, is one of the few intellectual mechanisms left to disentangle complex information from the clusterf* of modern politics. Not talking about it here through a clear rationalist framework is a tragedy.

Comment by natasharostova on Marginal Revolution Thoughts on Black Lives Matter Movement · 2017-01-19T04:39:12.453Z · score: 1 (1 votes) · LW · GW

Well, different people understand it in different ways. Some are horrible people who understand it in the worst way. Others are great people who understand it in the best way. The entire group is willing to sacrifice clarity and a clear definition in favor of something sufficiently vague to band together, into collective action, people who overlap on certain dimensions.

I think for that reason though, trying to debate the definition or how it's understood is pointless. Sadly. I don't blame people who think it's a worthy cause anyway, maybe they are right. I personally can't stand associating with movements where the direction isn't clear, but that's just me.

Comment by natasharostova on Project Hufflepuff · 2017-01-18T22:43:02.420Z · score: 4 (4 votes) · LW · GW

Cool.

Comment by natasharostova on The substance of money · 2017-01-18T05:44:54.123Z · score: 9 (9 votes) · LW · GW

Hey,

I'm gonna give you sort of an unsatisfying answer. I had a similar interest, which resulted in me getting my MSc and working in research at the Fed for a few years, with the goal of sorting it out in my head (I ended up going private sector instead of getting a PhD). As far as I have surveyed, there are different models of money, but it's scientifically an unsolved problem. There seems to be a level of complexity that arises as you increase the number of people on a monetary system, increase industries, increase geographical scale, add new countries and exchanges, and add complex financial systems. As this grows, working out exactly what money is and how it interacts with these systems starts to get very messy.

As an example, during the financial crisis, trillions of dollars 'disappeared.' They disappeared because they only ever existed because we were borrowing from our future selves, then collectively lost faith in our future selves having that money, so the money ceased to exist today. Is that how a commodity behaves? Well, now we are trying to build classifications for what is and isn't a commodity. Of course, you could do the same thing on a gold standard if banks were allowed to issue demand deposits, which combined with fractional reserve banking leads to the same thing.
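A toy sketch of that point, in Python (my own simplification, not a model of actual Fed mechanics or of how money creation really works): under fractional-reserve relending, a base deposit supports a much larger stock of broad money, and when relending contracts, the difference "disappears" even though no notes were destroyed.

```python
# Toy fractional-reserve multiplier. A single base deposit supports a
# multiple of itself in broad money, because each loan is redeposited
# and partially relent. Illustrative only: real money creation is
# endogenous and far messier than this.

def broad_money(base_deposit, reserve_ratio, rounds=1000):
    """Total deposits outstanding after repeated lend-and-redeposit."""
    total, deposit = 0.0, base_deposit
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)  # fraction lent out and redeposited
    return total

base = 100.0
boom = broad_money(base, reserve_ratio=0.10)
print(round(boom))  # 1000: ten times the base

# If banks stop relending as freely (a collective loss of faith in
# future repayment), the same base supports far less broad money.
# The gap between the two totals is the money that 'disappeared.'
bust = broad_money(base, reserve_ratio=0.50)
print(round(bust))  # 200
```

The same arithmetic applies on a gold standard with demand deposits, which is the point in the paragraph above: the disappearance is a property of credit, not of fiat money specifically.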

Monetarism, I firmly believe, isn't something you can reason through intuitively at a casual level. I decided it wasn't something I wanted to devote my life to, and even though I spent a couple years working daily in the field, I don't know that I understand that much (although I do know what I don't know, which definitely counts as real knowledge).

I think monetary economics is sort of a mind-killer, since trying to intuitively reason through monetarism can take you down many very different paths, all of which seemingly arise from an incredibly reasonable set of axioms and inferences. If you ever listen to really clever Austrians or Keynesians discuss their views, they're incredibly compelling. That sets off alarms via my favorite heuristic, underdetermination: be suspicious when multiple models of the world fit the data equally well. It's super common for blogosphere denizens or naive rationalists to try their hand at monetary economics, convinced they've stumbled upon some key insight that means all econ professors are wrong.

I will say, while I didn't leave the Fed enamored or anything, a subset of those economists are brilliant and humble. I notice this flawed reasoning so often, where independent researchers, or researchers in another field, construct elaborate arguments against the most uncharitable readings of economists' arguments. Often they won't have ever spoken to a notable economist in person. They never have to present to peers, they never have to formalize their arguments mathematically, and they never bother engaging with the more advanced formulations of economic arguments that wind up in journals. Anyway, I'm getting off track here...

While as a rule I don't think mathematizing things necessarily makes them clearer, I am convinced it's the right way to proceed in monetary studies. It forces a strict structure, which prevents us from using words to overfit or get lost. The field is so complex, though, and so intertwined with historical narratives that aren't easily turned into data sets, that this can sometimes make it harder. The math often gets sort of complicated as well.

Of course, the actual monetary economy generates real data, most of which we can't collect. So theoretical models are our way of imagining what the structure would look like, even though they aren't empirical. This gets to another problem: how confident can one be in theoretical economics? Sometimes the assumptions are incredibly robust, but the systems are often very complex.

One place where I will say many economists act contrary to LW-style rationality is in choosing a side, rather than taking the view that there are many sides with plausible claims to truth, and that they should work together to expose what is correct. It has always struck me as mind-killed when people state, "Oh, I'm a neo-Keynesian so I believe XYZ; you're a non-Keynesian, so you reject ABC" (or whatever). I mean... maybe the Austrians are all right, and they have this unique perception of reality none of the neo-Keynesian scholars have, because they have some more profoundly true insight into the mesh of reality that is lost on the other econo-plebs... But that doesn't seem like the most likely scenario to me.

Or maybe Paul Krugman really is right about everything, but still... I doubt it. He was once a smart young man who had some crucial insights on the theoretical mathematical structure behind international trade, which earned him an econ Nobel. I don't think he's in tune with empirical realities, though. He's a genius at imagining some elegant mathematical structure that characterizes an economy which might or might not map to reality, and then convincing himself it's actually exactly how reality operates. That's the big mistake, I think.

If you want to take a look down the rabbit hole, I'd suggest reading Milton Friedman's books on monetary history. Even his detractors tend to agree his insight and clarity on money is absolutely incredible. He also is great at explaining things without too much math, but still using ratios and dataseries in his books when appropriate.

For shorter-term stuff, check out John Cochrane's work; he's my favorite social scientist (http://faculty.chicagobooth.edu/john.cochrane/research/papers/cochrane_policy.pdf, http://johnhcochrane.blogspot.com/search/label/Monetary%20Policy). His blog -- the second link -- is really great.

Comment by natasharostova on Welcome to Less Wrong! (11th thread, January 2017) (Thread B) · 2017-01-17T01:37:07.093Z · score: 2 (2 votes) · LW · GW

Good luck! I'm looking forward to reading your ebook on 5 easy tips on how to unlock my inner high-IQ potential.

Comment by natasharostova on Defusing Hate: A Strategic Communication Guide to Counteract Dangerous Speech · 2017-01-15T23:29:26.571Z · score: 4 (4 votes) · LW · GW

I think the most dangerous aspect of 'dangerous speech' is that it becomes a shared meme to disregard certain types of arguments offhand, regardless of how true or false they are. It becomes most dangerous when someone then, for some reason, decides to investigate further and realizes, "Hey, some of this stuff is true! And I can't trust anyone anymore."

Comment by NatashaRostova on [deleted post] 2017-01-13T22:39:25.912Z

As someone who wants more top-notch rationalist politics on LW, without moderators removing it, I think moderators should remove this.

Comment by natasharostova on Dominic Cummings: how the Brexit referendum was won · 2017-01-13T22:37:19.335Z · score: 0 (0 votes) · LW · GW

I think it's fair to argue that elections that are won by a slim margin don't say much of significance about discrete narrative changes in the weeks leading up to the election. That could be false though, if for example we view Trump winning the election as a 'treatment' effect, which gives him a new discrete ability to change the narrative.

But more generally, I think an election such as Brexit does give us a significant story, not necessarily for the week leading up to it, but for the changing preferences of a population in the year or two leading up to it and the invocation of the election itself.

Comment by natasharostova on Open thread, Jan. 09 - Jan. 15, 2017 · 2017-01-10T03:56:48.299Z · score: 3 (3 votes) · LW · GW

I think there are some serious issues with the methodology and instruments used to measure heuristics and biases, which researchers didn't fully understand even ten years ago.

Some cognitive biases are robust and well established, like the endowment effect. Then there are the weirder ones, like ego depletion. I think a fundamental challenge with biases is clever researchers first notice them by observing other humans, as well as observing the way that they think, and then they need to try and measure it formally. The endowment effect, or priming, maps pretty well to a lab. On the other hand, ego depletion is hard to measure in a lab (in any sufficiently extendable way).

I think a lot of people experience, or think they experience, something like ego depletion. Maybe it's insufficiently described, or a broad classification, or too hard to pin down. So the original researcher noticed it in their experience, and formed a contrived experiment to 'prove' it. Everyone agreed with it, not because the statistics were compelling or it was a great research design, but because they all experience, or think they experience, ego depletion.

Then someone replicates it, and it doesn't replicate, because it's really hard to measure robustly. I think ego depletion doesn't work well in a lab, or without some sort of control or intervention, but those are hard things to set up for such a broad and expansive argument. And I guess you could build a survey, but that sucks too.

On the fundamental attribution error, I think that meta-analysis is great, in that it shows these studies are statistically weak. They only work if you come to them with the strong prior that "Hey, this seems like something I do to other people, and in the fake examples of attribution error I can think of lots of scenarios where I have done that." Of course, our memory sucks, so that is a questionable prior, but how questionable? In the end I don't know if the effect is real, or only real for some people, or too generalized to be meaningful, or true in some situations but not others, or how other people's brains work. Probably the original thesis was too nice and tidy: here is a bias, here is the effect size. Maybe the reality is: here is a name for a ton of strange, correlated, tiny biases, which together we classify as 'fundamental attribution error,' but which is incredibly challenging to measure statistically over a sample population in a contrived setting, as the best information to support it seems inextricably tangled up in the recesses of our brains.

(also most heuristics and biases probably do suck, and lack of replication shows the authors were charlatans)

Comment by natasharostova on If I must eat meat, I eat pork · 2017-01-09T20:43:10.701Z · score: 2 (2 votes) · LW · GW

Once that happened, I’d no longer be able to eat chickens. I could apply the same process to all animals, and so by induction I would be unwilling to eat any animal.

This is an interesting way to look at using induction, but I see it more as a willing reprogramming of your brain. In your case, you were able to simulate a case where eating chicken would disgust you (eating a pet) and that gave you impetus to stop eating chicken.

I am a big meat eater. I predict there is a 30-60% chance I would drastically reduce my meat eating if I were forced to run a slaughterhouse for my food, and to see the suffering and kill the animals myself. Every time I wanted meat I'd need to take on the moral burden of killing an animal. If I may try my hand at some pop-historical analysis, I bet this is why past societies frequently held a reverence, often spiritual, around the killing of animals for food.

...And yet, I still eat lots of meat. Probably if someone took me on a tour of kids with malaria in Africa I'd donate more to those charities. Or if I was walked through a Russian sex trafficking brothel, I would support organizations to end those practices. Or if someone made a three hour movie on the tragedy of the homeless person who sleeps by my apartment, documenting their misfortune, I would go out and buy a coat and food and try to help them because it would unlock and develop emotions I don't currently have.

I sort of know if I went through these simulations it would change my outlook and behavior in life. These are also obviously topics I already am familiar with, but there are surely lots of topics I'm unfamiliar with that would change my view of the world. Of course I can't have all these experiences, and I'm not sure how I should try to adjust my behavior today on the expectations of how my behaviors would change if I were to have experiences that I'm not going to have, but plausibly could have.

Is it rational for me to eat less meat now, even though I enjoy it and don't feel guilty, because a plausible counter-factual me who had some experiences I don't have would tell me to? Or is it rational for me to eat meat because there is no counter-factual me who exists, and as it stands now I enjoy it and don't feel guilty?

Comment by NatashaRostova on [deleted post] 2017-01-09T01:49:09.675Z

I agree with this view. His abuse is more blasé; that's definitely true.

Brash man with a working-class NYC disposition: "Obama literally founded ISIS" or "Obama is secretly a Muslim"

Sensible people everywhere recoil and roll their eyes. Understanding why that's absurd is pretty easy. The people who make those arguments aren't exactly an intellectual class, and currently lack an intellectual 'ruling caste.'

Refined person with an articulate tone of voice, and an Ivy league law degree: "Women are oppressed everywhere, and currently make 70 cents on the dollar of what a man makes."

Not horrific, stated by a well-educated person. Sounds reasonable, based on 'real research.' Comes from a sense of seemingly genuine concern and outrage for an injustice.

I used to take the stance that the first was much worse, as it is more brash and shameless. I'm not sure anymore how to measure these two against each other. I have, absolutely without a doubt, been mind-killed on this specific topic, because I personally hate charlatan lawyers who think they have the right to tell me how to live my life.

Comment by natasharostova on Against Compromise, or, Deciding as a Team without Succombing to Entropy · 2017-01-08T23:46:10.891Z · score: 0 (0 votes) · LW · GW

I've had a similar experience at [large tech firm]. It was becoming clear that an intersecting project between two teams wasn't working. The challenge, though, was that it was stuck in a rotten equilibrium. Each team's true incentives were distinct from and contrary to the other's. Yet the mandate was 'thou shalt have the same incentives.' Everyone kept publicly claiming we had aligned incentives, which you shouldn't have to publicly explain if it's actually true.

A lot of social choice theory guys tried to explain this in the context of voting, and the stability of outcomes. Arrow's impossibility theorem can be resolved if you have a dictator. In the end a strong and smart leader can solve so many issues of indecisiveness, and can take ownership of directing the deliberation, and adjusting for variables no one else owns (e.g. the cost of time in making a decision).

Still, I remember thinking about this all while intersecting teams were making obviously bad choices, and thinking that the best way out would just be to make someone sovereign and let them choose.

Leadership gets weird though. Despite all this theory and analysis over group choice dynamics, there is this transformative property of a great leader that seems to inspire people to buy into their vision and work incredibly hard. I don't understand how that works though, other than some handwaving and 'psychology.'

Comment by natasharostova on Rationality Considered Harmful (In Politics) · 2017-01-08T23:16:15.500Z · score: 3 (3 votes) · LW · GW

I think that's what most people who were or want to be part of the rationalist community want to work on now. That's what Scott Alexander does full time with SSC and his comments. Even on LW, despite the weird and dated rules, everyone wants to discuss this stuff and work on slowly figuring it out. I don't think anyone really cares how a 22-year-old has reinterpreted EY's post on cognitive biases or some new version of AI risk (and I say that having put all my faith in 22-year-old engineering kids saving the world).

I'll probably just post on it more now here, and see what happens.

Comment by natasharostova on Rationality Considered Harmful (In Politics) · 2017-01-08T22:56:10.534Z · score: 3 (3 votes) · LW · GW

One problem I have with communicating this is that I was only able to pick up on it after lots of academic studying (degree doesn't matter so much as having read and understood the growth of Social Science knowledge and research), and reading blogs of academics who have run into trouble for years.

Whether it's InfoProc on genetic engineering, West Hunter on evolution, SSC on feminism, and so forth.

By the time you read all this stuff and it starts coming together in your head, you realize you can't rationally discuss it with other people. If I'm at a party and someone mentions they are a feminist, I'm definitely not going to mention I'm an anti-feminist (or try to explain why I think the entire idea of flippant 'ism identification' is broken). Even outside a party, it's a heavy discussion to bring up for no real gain.

There is no nice starting position, or clear argument on why rationality is often harmful in politics without contemporary and historical examples. It takes hours of conversations with close friends who are willing to have their mind changed, simply to explain that there is this entire world that no one is allowed to discuss. I have friends who get so frustrated by this, they decide to go full Alt-Right or Neoreaction, which I think is also a mistake.

One nice place to start, though, is in the past, where the institutions that many people view today as the most rational and truthful were completely wrong. That can at least plant a seed of doubt. This interview with 20th-century journalist Malcolm Muggeridge, who traveled through the Soviet Union during the Holodomor, is one of my favorites: (http://www.ukrweekly.com/old/archive/1983/228321.shtml)

Shortly before Mr. Muggeridge's articles appeared in the Guardian, the Soviet authorities declared Ukraine out of bounds to reporters and set about concealing the destruction they had wreaked. Prominent statesmen, writers and journalists - among them French Prime Minister Edouard Herriot, George Bernard Shaw and Walter Duranty of The New York Times - were enlisted in the campaign of misinformation.

Or point to guys like Walter Duranty (https://en.wikipedia.org/wiki/Walter_Duranty).

The problem though is unlike lots of EY-Rationality-Facts, you can't learn why rationality is often harmful in politics without loads of examples throughout time. And unlike cognitive biases, it's really hard to shortly explain.

Comment by NatashaRostova on [deleted post] 2017-01-08T22:36:40.529Z

Your document reads much more like "Rational Politics for Liberals." That's not necessarily a bad thing, but it's really clear that you tacitly oppose lots of dissident/alternative/reactionary right views. I'm pretty sympathetic to what you're trying to do, but I see it more as a concerted effort for thoughtful and rational discussion of how to solve the issues of alternative and neoreactionary right beliefs.

I don't think any level of rational calculation of per-person terrorism risk will change the 20-50% of Americans who don't want Muslim immigration. Terrorism is just the cleanest, most discrete and emotionally actionable issue to organize around. The impression I get is more that many people prefer their own culture, and want to live among homogeneous groups of people. There is at least enough research showing that homogeneous groups are higher-trust and safer to make the preference to exclude sufficiently different people something that could be rational, or at least debatable in a rationalist framework (disclosure: I don't think people are necessarily irrational for having in-group preferences or outsider-anxiety).

I have spent a lot of time trying to think about how we can reconcile these worldviews between the in-group preference and the more universalist/progressive political model. It's really hard, maybe impossible. Right now the intellectuals of each group (read: not the meme spammers) can't even honestly discuss the issues. Their views are so beyond the pale of one another, that they can't even find a shared platform to say a thing. Maybe one goal would be to get the intellectuals of both groups to take a stand against pointless meme arguments.

I'm not a huge fan of Ezra Klein, but his interview with Tyler Cowen gets at this (http://marginalrevolution.com/marginalrevolution/2016/10/conversation-ezra-klein-2.html):

KLEIN: I strongly agree. We do not have a language for demographic anxiety that is not a language that is about racism. And we need one. I really believe this, and I believe it’s been a problem, particularly this year. It is clear, the evidence is clear. Donald Trump is not about “economic anxiety.”

I think a true rational politics project would work on developing this 'language for demographic anxiety.' If we can imagine a slightly better pundit news discussion:

Interviewer: We have Guest1 and Guest2. Guest1 prefers a more homogenous culture, which he argues is higher trust. Guest2 believes that the evolution of our culture as we know it has succeeded because we bring in different groups of people.

Guest1: Great to be here, I have lots of respect for Guest2, but I think we need to focus on bringing in highly educated people who match on our core cultural dimensions.

Guest2: There is more than simply education; over time we have seen [previous groups] slowly absorb into America and add their own cultural strategies into our melting pot.

...or something like that. Right now that conversation cannot happen. Or at least it can't happen without faux and real outrage, accusations of racism, and so forth. And in saying all this, it's probably clear where my political preferences rest. It's a testament to the challenge of this endeavor that any individual's model of what a 'Rational Politics Project' would look like secretly embeds the political views of its author.

Comment by natasharostova on How do you identify complex systems? · 2016-12-30T02:13:03.890Z · score: 0 (0 votes) · LW · GW

That could be part of it. I'd also say what's difficult is putting certain types of ideas into words. When people talk about scientific skepticism, for example, what exactly are they saying? Guys like Andrew Gelman or Scott Alexander (or plenty of other smart folks) are able to look through academic research across domains, and something sticks out to them as wrong. They can then go through and try to identify which claims, assumptions, or statistics are misguided. But prior to that there is this hunch or indicator that the author's scientific claim is off. They seem to develop this hunch by having a well-developed heuristic for what is too complex to casually know, and what isn't.

You see the same thing in economic forecasting, or medicine, where over a long time skilled people start to develop a hunch for when something is off. I think part of this hunch is knowing what is knowable, and what is beyond reach.

As a slightly contrived example, growing up I was more of a naive rationalist. In my late teen years I learned that almost everything I was taught about drugs was a lie. Not just scheduled drugs, but nootropics as well. My dad is a physician and told me without knowing any of the research, and having read less than me, that it's generally a bad idea to take drugs you don't need. He had no real argument, it was something he'd picked up over years of practicing medicine: Take as few drugs and as few treatments as possible, unless necessary.

Even though there is tons of research on medicine, it's hard to codify and explain the way certain clever established practitioners evaluate when we can rely on our inputs to lead to our desired outputs. These hunches are nonlinear and chaotic, which makes measuring them formally incredibly challenging. I'm probably bringing up medicine as an example due to the recent posts on depression networks from Slate Star Codex, where we also see Scott is getting an intuition or understanding of how these complex systems interact, and when and why the related research or classifications are misguided.

Comment by natasharostova on Ideas for Next Generation Prediction Technologies · 2016-12-23T21:33:28.524Z · score: 0 (0 votes) · LW · GW

How is a prediction market subsidized by someone with an interest in the information? As far as I'm aware, most of them make money on bid/ask spreads, and can be thought of as a future or Arrow–Debreu security.

As the current institutions stand there are differences. Prediction market sites and the Nasdaq are obviously different in a lot of institutional ways. In prediction markets you can't own companies. But in the more abstract way in which people trade on current information as a prediction, which is eventually realized, they are similar.

For example, a corporate bond is going to make a series of payments to the holder over its maturity. Market makers can strip off these payments and sell them as bespoke securities, so you could buy the rights to a single payment on debt from company X in 12 months. If you'd like, people can then write binary options on those such that they receive everything or nothing based on a specified strike price.

In the general security there is lots of information and dynamics, but with the right derivatives structure you can break it up into a state of a series of binary predictions.
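To make the decomposition concrete, here's a toy sketch (all numbers hypothetical, and ignoring discounting, credit risk, and risk premia): a cash-or-nothing binary written on one stripped payment pays the full notional if the realized value clears a strike, else nothing -- structurally the same contract as a prediction-market share, whose price before settlement reads as a market-implied probability.

```python
def binary_payoff(realized_value: float, strike: float, notional: float = 1.0) -> float:
    """Cash-or-nothing call on a stripped payment: pay the whole notional
    iff the realized value clears the strike, otherwise pay nothing."""
    return notional if realized_value >= strike else 0.0

# Before settlement, the binary's price is (roughly) the market's
# probability of the event -- e.g. trading at $0.62 on $1 notional,
# exactly like a 62-cent "yes" share in a prediction market.
implied_prob = 0.62  # illustrative quote, not real data

assert binary_payoff(103.5, strike=100.0) == 1.0   # payment cleared the strike
assert binary_payoff(97.2, strike=100.0) == 0.0    # it didn't
```

The point is just that with the right derivative structure, a rich security collapses into the same yes/no prediction object that prediction markets trade directly.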

The dynamic structure behind prediction markets and financial markets as trading current values built on models of future expectations is very similar, and I think identical.

Comment by natasharostova on Ideas for Next Generation Prediction Technologies · 2016-12-21T01:20:27.628Z · score: 0 (0 votes) · LW · GW

Literally the only difference in terms of prediction dynamics is that currently prediction markets include political/non-financial questions, which are only implicitly included in financial markets.

Comment by natasharostova on Open thread, Dec. 19 - Dec. 25, 2016 · 2016-12-20T01:10:26.264Z · score: 0 (0 votes) · LW · GW

I'll half-answer this, since it's sort of a tangent, but the metric I prefer to use is the variation in my feelings over time. I don't know if other people are like this (probably), but my mood/emotion impacts my view on politics/policy.

Sometimes when I feel ill or in a bad mood some political event of class A will make me upset and convinced everything will turn out poorly. After I lift weights, riding my (perceived) feel-good testosterone high, I feel confident that political event of class A won't be a big deal, and I'm confident in my ability to persevere. Usually I take this variance in my prediction of the future as evidence I'm being mind-killed.

Another strategy: if reading one or two articles on a topic you already know a fair amount about makes you feel strong emotions and strongly change your prediction of the future, you might be mind-killed.

A final strategy I tried was subscribing to different political meme pages on FB (libertarian, Ann Coulter-ish, Alt-right, progressive), and I'd notice how I sometimes would slowly change my view based on which ones I was looking at. I know admitting to subconsciously changing political views based on political memes is about as embarrassing as saying you went and bought a Taco Bell meal because of a Taco Bell commercial -- but as far as I can tell we are very susceptible to this stuff, even the stupidest memes. (Sometimes even if I hate them, I start substituting them deep in my mind for the 'other side's' actual argument).

Anyway, those are a few of my tactics.

Comment by natasharostova on Frequent sauna bathing protects men against dementia · 2016-12-17T23:56:16.564Z · score: 1 (1 votes) · LW · GW

I wonder what the statistical power of the study was.

With n = ~2000, and dementia rates being relatively low, and there either being no controls or some lame half-missing linear controls (even worse than no control, because it makes you think the control worked), and the treatment being seemingly arbitrary, I'm basically going to assume this is meaningless information.
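A rough back-of-the-envelope power calculation makes the worry concrete. The base rate and effect size below are illustrative guesses, not numbers from the paper; the formula is the standard normal-approximation power for comparing two proportions.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_power(p1: float, p2: float, n_per_group: int, alpha: float = 0.05) -> float:
    """Approximate power of a two-sided z-test for a difference in
    proportions, using the usual normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    p_bar = (p1 + p2) / 2
    se_null = sqrt(2 * p_bar * (1 - p_bar) / n_per_group)   # SE under H0 (pooled)
    se_alt = sqrt(p1 * (1 - p1) / n_per_group + p2 * (1 - p2) / n_per_group)
    z = (abs(p1 - p2) - z_a * se_null) / se_alt
    return NormalDist().cdf(z)

# Hypothetical: 5% vs 3.5% dementia incidence, ~1000 men per group.
power = two_proportion_power(0.05, 0.035, 1000)  # well under the conventional 0.8
```

With these (made-up but plausible) rates, power comes out below 50%, which is the regime where significant findings are most likely to be exaggerated or spurious.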

It's turning an uncontrolled correlation in a low power sample into a causal story of protection.

Anyway, I didn't actually read the paper so maybe I'm being unfair. I somehow doubt that's the case though.

Comment by natasharostova on Positive vs. Normative Rationality · 2016-12-16T17:50:36.743Z · score: 0 (0 votes) · LW · GW

I think you're right that the distinction is typically clear cut and useful to make. What I want to avoid (although I'm not sure I was successful) is simply being nihilistic and making a refined version of the boring argument "what do words even mean?!".

The area I'm interested in is when that distinction grows blurry. Normative arguments always have to embed an accurate representation of reality, and a correct prediction that they will actually work. And positive arguments of reality frequently imply a natural or optimal result.

For example, some guy like Marx says "I've been thinking for a few decades, I have predicted the optimal state of human interaction. This map of the world clearly suggests we should move towards it." He then writes a manifesto to encourage it. The normative part of his argument seems to come trivially from the positive explanation of the world. So it's not as though I can agree with his positive argument but think his normative one takes it too far; they are both equally wrong in the same way.

Or another way to say it, I think it's very rare that people share the same positive view of the world, but disagree normatively. Our normative disagreements almost always come from a different map of the world, not from the same map but different preferences. Obviously I can't prove, or even test this, so I'm posting it here as an uncertain thought. Not something I'm going to strongly defend. I know Aumann sort of proved it with his agreement theorem, though he only modeled two Bayesian agents. So everything his model can't explain could be called normative, I guess?

In reality it's still a useful distinction. As I said, I don't want to be annoyingly nihilistic or anything.

Note: Will read the rest of that paper later. Looks very interesting and relevant though, so thanks for sharing.

Comment by natasharostova on The Partial Control Fallacy · 2016-12-14T03:12:00.241Z · score: 2 (2 votes) · LW · GW

This also is why I find nearly all university/grad application/acceptance forums to be garbage. Naturally the topics that are discussed are the ones that are shared across the group, when the reality is it is the idiosyncratic aspects that are most up-for-optimization and least discussed.

Your post is very similar to lots of literature on game-theoretic signalling. The seminal paper is Spence's on job-market signalling (http://www.econ.yale.edu/~dirkb/teach/pdf/spence/1973%20job%20market%20signalling.pdf).

Hey, James Miller, you got any other good game theory signalling papers to recommend :P?

Please do read the first 2-3 paragraphs, since I think you'll really like it, even if taking an hour out of your life to read the game/set theory notation isn't on your list of priorities (unless you've already read this paper, which while unlikely, I don't want to assume you haven't!).

Comment by NatashaRostova on [deleted post] 2016-12-13T21:31:44.305Z

Based on their other comments (accusing a post of mine of being 'homophobic' -- nonsensically), and their username, it's a troll.

Comment by natasharostova on Traditions and Rationality. · 2016-12-13T20:43:06.719Z · score: 1 (1 votes) · LW · GW

You're not saying my post was homophobic, are you? I don't think anyone here has been homophobic, or close.

Comment by natasharostova on Sane Thinking about Mental Problems · 2016-12-13T04:15:21.251Z · score: 3 (3 votes) · LW · GW

I think this is a great area to explore, and probably one of the areas a rationalist perspective can most help teenagers.

I know as a late teenager, and in my early 20s, my sadness, depression, and anxiety were part of my identity. It was how I dressed, how I thought of myself, the music I listened to, drugs I took. To borrow a line from the classic TV show 'BoJack Horseman', I fetishized my own sadness. Breakups felt like a beautiful soul-crushing torture.

As I studied more science, read more, and took more interest in the scientific world, I started viewing my interactions with my own emotions in a more evolutionary and scientific light. I was less interested in my 'artistic sadness' and saw it more as a sort of depressing failure of my brain and evolution -- one which I could try to hack by exercise and eating well.

I wish someone had explained to me that as a man there were special hacks I could use, like lifting weights, testosterone, boxing, fighting, and other activities that I'm programmed to find rewarding. Particularly as a guy who was nerdier growing up, I never realized that passing on sports wasn't just a personal choice, but could seriously hurt my own personal development and confidence.

Comment by natasharostova on Open thread, Dec. 12 - Dec. 18, 2016 · 2016-12-12T22:39:15.242Z · score: 0 (0 votes) · LW · GW

It's a stupid question. It wouldn't be too hard to give 10 methodologists this question, then tell them the side to support, and watch them all build great cases. Obviously that's an assertion -- I can't just imagine evidence and then claim it proves me right :P -- but I strongly suspect this would be true.

The question is so dumb. Even if they got rid of the business story-line, and abstracted it to pure statistics, it's still stupid. What distribution characterizes it? If they got rid of the business, gave the data, AND gave info on the generative distribution, AND made it a numerical answer... Then I guess it's a fair question, but at that point it's just a pure stats question.

Comment by natasharostova on Traditions and Rationality. · 2016-12-10T21:16:48.919Z · score: 0 (0 votes) · LW · GW

I don't really understand what you're trying to say about homosexuality. I don't want to explore how traditional morality has advantages, because that's a hard question and not something I have any reason to think I'd be all that good at. I do think that morality and tradition is complicated, so we have to be careful not to assume that we can reason through or against certain phenomena.

It's always awkward talking about complex systems involving humans, because either you abstract away from individuals or you never talk about them. It's even more difficult when trying to discuss why an individual should/shouldn't take some self-interested action based on how other people may react, based on other people's traditional views, which may be sensible or may be insane. Over time we enter an equilibrium based on the costs/benefits an abstract community places on certain actions. So, I think, the question becomes: are individuals responsible for trying to keep a good equilibrium in place? And is it possible to easily predict how shifts will change the equilibrium for better or worse? I don't know the answer to that, or whether there even is one.

I get the point you're making, and it's an important one that I did not address, and thinking back I should have thought about it and admitted I don't know the answer. That at the individual level people should weight their own cost/benefit with respect to their own life, not as part of some abstract group.

To your final point, I edited my post based on Daniel's comment to remove the Facebook link. I didn't see it as a privacy violation, but since people think it's bad etiquette I'm happy to adjust.

Comment by natasharostova on Traditions and Rationality. · 2016-12-10T20:21:23.771Z · score: 2 (2 votes) · LW · GW

Yeah, I should have phrased that better. It goes hand-in-hand with your last sentence. Lots of our impulses and feelings are based on cultural programming that encourages us to build outcomes that are best for society.

I have a close friend, for example, who just finished his MD/PhD. He makes less money than all the peers he went to Stanford with (much less money) and works way harder hours. While his friends get piles of money and free lunches at Google, he sleeps on a cot in an old hospital with a broken AC unit doing 20-hour shifts. Why?!

Well, it's indisputable that his job is more prestigious. An MD/PhD at a top hospital in the world doing cancer research? That's about as high as you can go in terms of societal respect and prestige. We all were raised to understand his sacrifice, and as a result by entering that field he gets this 'bonus payment.' That bonus payment encourages our best and brightest to do really shitty things. Here we are able to decompose and understand why that's happening. Still, when he made the choice to get his MD/PhD I suspect he felt it was an individual choice, and it was, but it was also guided by this greater way society rewards him.

So why do women (and families) not want to go into or encourage prostitution? Surely they will get some benefit from it, but they avoid it because society places a huge cost in terms of shame/disappointment/self-hate. Now, maybe this is an outdated societal cost we're building, and it's time to try to get rid of it. That could totally be the case. But it could also be the case that this cost is serving a purpose that isn't obvious to us. You give one example that it could lead to more objectification, which societally we decide is bad.

Comment by natasharostova on Traditions and Rationality. · 2016-12-10T20:12:00.165Z · score: 1 (1 votes) · LW · GW

Good point. I agree. I think the point I was making could be abstracted away from it anyway, so I edited my post accordingly.

Comment by natasharostova on Traditions and Rationality. · 2016-12-10T20:11:21.053Z · score: 1 (1 votes) · LW · GW

I want to address your specific points, but let me first clarify what I'm not saying: I'm not saying it's necessarily a bad move, EY might be right that it's good and should be considered. Maybe it's true that the sexual habits of children are unimportant to parents, and if we reach a world where they are no longer considered that would be a better one. It's also probably true that all things constant, laws that forbid this type of prostitution hurt more people than they help by building black markets. I am not disagreeing with him on any of those points.

I'm trying to make a much more subtle point, which is that when thinking through the possibilities we are often unable to decompose or understand some tradition, which doesn't mean it isn't still founded on a real and still actively useful reason.

To go back to why parents should care about the sexual habits of their children, I don't think what I personally think matters. I think the reason parents care is due to a very complex set of evolutionary and cultural systems, and they may be outdated and ready for us all to move past, or they may be achieving a purpose we aren't aware of. I don't think I can just invoke some philosophy and state "Based on my moral tenets of individual rights, parents ought not to care about their children's sexuality." I think it's a question of measurement and the pros/cons of a counterfactual world where they care less or more, and how that turns out.

I agree that in the past it seemed to be common to be unhappy (to understate it) if your child was homosexual, and the world seems better the more accepting parents are of their gay children. But I don't think that's sufficient evidence to predict they shouldn't ever care.

In "The 10,000 Year Explosion" Greg Cochran tracks how small selection pressures between genetics and culture resulted in crazy different outcomes for Ashkenazi Jews. It is possible small tweaks to complex systems can have outcomes nearly impossible to predict.

To go back to your point 2, as you note, "[He] argues that there are women for whom doing this might be a good move, and aren't thinking enough about the possibility." That's what I was trying to disagree with. For some reason they aren't considering it; there is a taboo or a cultural norm that blocks them from consideration. It could be true that this is based on an outdated view of sexual morality, and all would be better if it were removed. That, honestly, could be the case. It could also be true that there is a good reason, but it's embedded in this complex cultural system that doesn't reveal itself to us. The point I wanted to make was that we need to be very careful when decomposing tradition, because it's built on a complex system that makes understanding our impulses and societal moral institutions very hard.

Thanks for your comment though. I agree with you that I could have been clearer, and was not sufficiently charitable in arguing against and for his simpler and best points, and didn't expand on the scenarios where he could be right, and the points I agreed with. I didn't intend to make it seem as though he suggested all women should be considering this, but I do agree that's how it came across. I'm trying to improve as a writer, so I do sincerely appreciate your feedback.

Comment by natasharostova on Take the Rationality Test to determine your rational thinking style · 2016-12-10T01:57:14.453Z · score: 0 (0 votes) · LW · GW

What is this, Less Wrong: Cosmos Edition?

Comment by natasharostova on If Prison Were a Disease, How Bad Would It Be? · 2016-12-08T23:51:47.714Z · score: 2 (2 votes) · LW · GW

Alright, I think if you and I sat down and talked about the cost vs. benefit of incarceration and the war on drugs, we would be in almost complete agreement. The costs are in equilibrium with benefits, so it's sort of like trying to see where you can save the most utility a year by looking at your financial records: Sure, the more expensive items are more likely to have a high magnitude of savings, but they also could generate more utility. You haven't ever read anything I've written, but I've read your site, so you'll have to take my word on that :)

That means our disagreement here has more to do with our almost-legal interpretation of the article, which is sort of a boring thing to disagree on honestly. I'm willing to give her reading a more charitable interpretation, I am probably filling in the gaps in her article with my own reasoning, which perhaps is too charitable or incorrect. I still think you are not being charitable enough, but again, that's a boring thing to disagree on. So let's call it good.

Comment by natasharostova on If Prison Were a Disease, How Bad Would It Be? · 2016-12-08T23:14:51.721Z · score: 2 (2 votes) · LW · GW

I think you're being disingenuous and taking semantic laziness on Sarah's part as a fundamental flaw in the reasoning itself. I think it's fair to say she wasn't trying to dismiss any talk of deterrence as being cartoon villainy (I didn't see a super prefix there? But maybe it was an edit. Doesn't really matter). She was responding to your specific comment, separate from the argument of her post, noting that she wasn't willing to consider the benefits of, in the example you gave, deterrence-based rape. Which is different from her considering deterrence in the original post, and saying it's only worth considering for 'cartoon supervillains.' Whether implied prison rape counts as a deterrence benefit is a totally different beast.

I mean, the analogy might not be great. And the post might not seem useful to you (or others) if it's strictly studying costs. But the argument "This wasn't a useful post because it excluded benefits, which is something I think is integral to this study. It also wasn't useful because it used a lazy analogy that seems to misrepresent the reality, even if that wasn't your intention" is different from what you're saying. I think you'll agree with me on that, no?

Comment by natasharostova on If Prison Were a Disease, How Bad Would It Be? · 2016-12-08T21:12:31.299Z · score: 0 (0 votes) · LW · GW

I didn't get that impression. Sarah wasn't stating: Here is a cost vs. benefit of Prisons. She was instead writing about how we could measure the costs of prison. If she doesn't write at all about benefits, there is no reason to infer that she is deliberately leaving them out to be misleading. Actually, I think this sort of inference towards what she is actually trying to say based on what you think she left out is misleading.

Gwern doesn't say anything interesting. He points out that you do, in fact, need to measure benefit for cost vs. benefit. I guess that would be an interesting point if there were any reason to suspect Sarah wasn't aware of this idea. Also, what 'surprises' him is useless as a metric for reasonableness. It's no secret to anyone who reads about this stuff that the incarceration system in the US has a rich history of being incredibly disturbing.

Evidently Gwern thinks that so long as it registers as 'bad', that's okay, because hey, bad is a deterrent! Whereas Sarah is taking the more methodical approach of actually measuring how bad it might be.

I don't think someone writing about only costs, or only benefits, is necessarily bad. I never got the impression the article was advocating for widespread release and abolition of the prison system. It was instead quantifying the measured cost. You can stop there; that's all the article intended to do.

Comment by natasharostova on Land war in Asia · 2016-12-08T03:10:23.288Z · score: 0 (0 votes) · LW · GW

Yeah, fair enough. Everything I know about that event comes from War and Peace and Wikipedia, so I won't argue on any specific ground. Tolstoy's bigger argument, that there were lots of hidden but crucial aspects determining the war, went against the traditional view of the time that it was all a function of Great Men. Or at least that's the impression I have.

Comment by natasharostova on "What is Wrong With our Thoughts" -David Stove (1991) · 2016-12-08T01:41:13.588Z · score: 7 (8 votes) · LW · GW

Throughout my academic/research experiences in the social sciences and economic forecasting, it's become clear that more complex models, whether it's more variables, dynamics, or nonlinearity, rarely ever perform well. For the vast majority of situations in forecasting, it's incredibly hard to beat a random walk or a first-order autoregression (AR(1)).

There is no proof or explanation of why in an academic textbook; you just pick it up over time. Notable exceptions define entire subfields. The U.S. term structure of debt is best modeled by using a set of ODEs to fit the cross-section, and stochastic dynamics to fit the time-series. The complexity there can grow enormously, and leads to lots of dense financial-math research, which actually does improve predictive accuracy in forecasting (still not by much, but it does consistently).

We actually see the same thing in economic analysis using words. While it's often shakier than economists would like, describing monopolistic dynamics in an essay seems to be a nice approximation of reality in terms of predictive performance. I know this isn't new to the LW crowd, but I always think of words as simply the painting of reality with non-linear dynamics in the way the human brain evolved to process information. That's why neural networks, which learn these dynamics, work best for processing language (I think).

It turns out that words, like non-linear equations, are great at fitting data. If you find a subset of reality where you truly can use non-linear models, words, or both, to classify what's going on, you're in a great spot for predictive accuracy. Empirically though, in the collective experience of my field, that's really hard to do. If your model diverges radically from a reduced form, random walk, or basic model, you need to be able to prove it wins.

Unfortunately, our brains do not seem to be good at detecting overfitting. The way I think about it, which like all evolutionary reasoning is questionable, is we evolved to learn nonlinear dynamics as we navigate our world, hunt, form relationships, and live in tribes. The complexity of a self-driving car is only a small subset of how we perceive reality. So, to us, it feels natural to use these words to paint nonlinear stories of reality, of the holy ghost, of Marxist theory, and all these advanced, nonsensical ideas.

Our thoughts suck because we overfit. If someone showed you a regression they fit, where they added a hundred transformations of the series of interest (squared, logged, cubed, etc.), and their R² was equal to 1, you'd tell them they are misguided. What's the problem space of Marx? "I fit a series of nonlinear dynamics, using words, to centuries of human interaction, and will use it to forecast human interaction forever." Well, actually, you can do that. And it could be true. But it also might be garbage -- nonsense.
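The kitchen-sink-regression failure mode is easy to demonstrate with made-up data: pile polynomial "transformations" of time onto a pure random walk and the in-sample fit looks spectacular, while out of sample the naive hold-the-last-value-flat forecast still wins. (A minimal numpy sketch with illustrative numbers, not a serious forecasting exercise.)

```python
import numpy as np

rng = np.random.default_rng(42)
n, n_train = 60, 40
y = np.cumsum(rng.normal(size=n))   # a pure random walk -- nothing to "explain"
x = np.arange(n) / n_train          # rescaled time index, for numerical conditioning

train, test = slice(0, n_train), slice(n_train, n)

# Ten polynomial transformations of a single regressor, fit by least squares.
coeffs = np.polyfit(x[train], y[train], deg=10)
fit_train = np.polyval(coeffs, x[train])
fit_test = np.polyval(coeffs, x[test])

# In-sample R^2 is impressive...
r2 = 1 - np.sum((y[train] - fit_train) ** 2) / np.sum((y[train] - y[train].mean()) ** 2)

# ...but out of sample the "dumb" random-walk forecast (hold the last
# observed value flat) beats the polynomial, whose extrapolation explodes.
rmse_poly = np.sqrt(np.mean((y[test] - fit_test) ** 2))
rmse_rw = np.sqrt(np.mean((y[test] - y[n_train - 1]) ** 2))
```

Run it and `r2` comes out high while `rmse_poly` dwarfs `rmse_rw`: the nonlinear terms fit the sample, not the process.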

Comment by natasharostova on Land war in Asia · 2016-12-07T20:06:24.670Z · score: 3 (3 votes) · LW · GW

+1 for a novel/interesting original post.

I agree the idea that evil/irrationality go hand-in-hand is a commonly held, but silly idea. In a similar vein I see people thinking the line between good/evil is distinct and clear-cut throughout history. If we believe it was a clear distinction historically, it should follow the distinction would be clear today. And who is evil today? Our political opponents, of course (/s).

Not to suggest there weren't better/worse sides in the past; however, I recently read this book 'Human Smoke,' which is a collection of newspaper clippings from the 1920s-1940s. It's incredible how, a few years before WWII, Churchill was gassing and decimating colonial towns for disobeying imperial mandate. It is only one data point, of many, showing that this idea that good = rational, morally upstanding, our side, and bad = evil, irrational, crazy, their side, is a vast oversimplification.

PS: Your post also reminds me of War and Peace, where Tolstoy makes the argument that attributing Russia's defeat of Napoléon as due to some grand strategic brilliance is nonsensical, and the reality was more mundane (One example, a combination of bureaucratic slowness leading to retreat, luckily paired with a colder than usual winter).

Comment by natasharostova on Unspeakable conversations · 2016-12-07T17:37:23.083Z · score: 4 (4 votes) · LW · GW

I read the entire article. What irks me about these type of debates, between a lawyer and a philosopher of ethics, is that they center around creating a consistent 'logical structure' or trying to define the right types of preferences purely from reason.

The author uses lots of lawyer arguments that focus on rhetoric, but are nonsensical. She is 'worse off' in the sense that she would probably prefer to not be disabled. Rationalizing that society would take care of disabled people, for pay (freeing the family from a life of caregiving) only side-steps the issue that resources still need to be expended that could be used on something else.

I think they could debate forever, because there is no right answer. It's a measurement question with a very flat optimization function. How do you measure the cost/benefit in such an incredibly high-dimensional and uncertain question?

Thankfully (hopefully), the real progress towards this solution is being worked on by bioengineers, not by these debates.

Comment by natasharostova on Unfortunate Information · 2016-12-07T03:17:11.062Z · score: 3 (3 votes) · LW · GW

The only downside is it tends to be correlated with an identity that people reject off hand. I know lots of alt-right/paleo-con sites use hatefacts, and sometimes play fast and loose with the term.

PS: Huge fan of your interview series. I've listened to them all!

Comment by natasharostova on Beware of identifying with school of thoughts · 2016-12-06T05:44:22.900Z · score: 0 (0 votes) · LW · GW

I'm going to risk going down a meaningless rabbit hole here of semantic nothingness --

But I still disagree with your distinction, although I do appreciate the point you're making. I view, and think the correct way to view, the human brain as simply a special case of any other computer. You're correct that we have, as a collective species, proven and defined these abstract patterns. Yet even all these patterns are based on observations and rules of reasoning between our mind and the empirical reality. We can use our neurons to generate more sequences in a pattern, but the idea of an infinite set of numbers is only an abstraction or an appeal to something that could exist.

Similarly, a silicon computer can hold functions and mappings, but can never create an array of all numbers. They reduce down to electrical on-off switches, no matter how complex the functions are.

There is also no rule that says natural numbers or any category can't change tomorrow. Or that right outside of the farthest information set in the horizon of space available to humans, the laws of gravitation and mathematics all shift by 0.1. It is sort of nonsensical, but it's part of the view that the only difference between things that feel real and inherently distinguishable is our perception of how certain they are to continue based on prior information.

In my experience talking about this with people before, it's not the type of thing people change their mind on (not implying your view is necessarily wrong). It's a view of reality that we develop pretty foundationally, but I figured I'd write out my thoughts anyway for fun. It's also sort of a self-indulgent argument about how we perceive reality. But, hey, it's late and I'm relaxing.