Posts

Announcement: AI alignment prize round 4 winners 2019-01-20T14:46:47.912Z · score: 80 (19 votes)
Announcement: AI alignment prize round 3 winners and next round 2018-07-15T07:40:20.507Z · score: 102 (29 votes)
How to formalize predictors 2018-06-28T13:08:11.549Z · score: 16 (5 votes)
UDT can learn anthropic probabilities 2018-06-24T18:04:37.262Z · score: 63 (19 votes)
Using the universal prior for logical uncertainty 2018-06-16T14:11:27.000Z · score: 0 (0 votes)
Understanding is translation 2018-05-28T13:56:11.903Z · score: 133 (44 votes)
Announcement: AI alignment prize round 2 winners and next round 2018-04-16T03:08:20.412Z · score: 153 (45 votes)
Using the universal prior for logical uncertainty (retracted) 2018-02-28T13:07:23.644Z · score: 39 (10 votes)
UDT as a Nash Equilibrium 2018-02-06T14:08:30.211Z · score: 34 (11 votes)
Beware arguments from possibility 2018-02-03T10:21:12.914Z · score: 13 (9 votes)
An experiment 2018-01-31T12:20:25.248Z · score: 32 (11 votes)
Biological humans and the rising tide of AI 2018-01-29T16:04:54.749Z · score: 55 (18 votes)
A simpler way to think about positive test bias 2018-01-22T09:38:03.535Z · score: 34 (13 votes)
How the LW2.0 front page could be better at incentivizing good content 2018-01-21T16:11:17.092Z · score: 38 (19 votes)
Beware of black boxes in AI alignment research 2018-01-18T15:07:08.461Z · score: 70 (29 votes)
Announcement: AI alignment prize winners and next round 2018-01-15T14:33:59.892Z · score: 166 (63 votes)
Announcing the AI Alignment Prize 2017-11-04T11:44:19.000Z · score: 1 (1 votes)
Announcing the AI Alignment Prize 2017-11-03T15:47:00.092Z · score: 155 (67 votes)
Announcing the AI Alignment Prize 2017-11-03T15:45:14.810Z · score: 7 (7 votes)
The Limits of Correctness, by Bryan Cantwell Smith [pdf] 2017-08-25T11:36:38.585Z · score: 3 (3 votes)
Using modal fixed points to formalize logical causality 2017-08-24T14:33:09.000Z · score: 3 (3 votes)
Against lone wolf self-improvement 2017-07-07T15:31:46.908Z · score: 30 (28 votes)
Steelmanning the Chinese Room Argument 2017-07-06T09:37:06.760Z · score: 5 (5 votes)
A cheating approach to the tiling agents problem 2017-06-30T13:56:46.000Z · score: 3 (3 votes)
What useless things did you understand recently? 2017-06-28T19:32:20.513Z · score: 7 (7 votes)
Self-modification as a game theory problem 2017-06-26T20:47:54.080Z · score: 10 (10 votes)
Loebian cooperation in the tiling agents problem 2017-06-26T14:52:54.000Z · score: 5 (5 votes)
Thought experiment: coarse-grained VR utopia 2017-06-14T08:03:20.276Z · score: 16 (16 votes)
Bet or update: fixing the will-to-wager assumption 2017-06-07T15:03:23.923Z · score: 26 (26 votes)
Overpaying for happiness? 2015-01-01T12:22:31.833Z · score: 32 (33 votes)
A proof of Löb's theorem in Haskell 2014-09-19T13:01:41.032Z · score: 29 (30 votes)
Consistent extrapolated beliefs about math? 2014-09-04T11:32:06.282Z · score: 6 (7 votes)
Hal Finney has just died. 2014-08-28T19:39:51.866Z · score: 33 (35 votes)
"Follow your dreams" as a case study in incorrect thinking 2014-08-20T13:18:02.863Z · score: 29 (31 votes)
Three questions about source code uncertainty 2014-07-24T13:18:01.363Z · score: 9 (10 votes)
Single player extensive-form games as a model of UDT 2014-02-25T10:43:12.746Z · score: 13 (11 votes)
True numbers and fake numbers 2014-02-06T12:29:08.136Z · score: 19 (29 votes)
Rationality, competitiveness and akrasia 2013-10-02T13:45:31.589Z · score: 14 (15 votes)
Bayesian probability as an approximate theory of uncertainty? 2013-09-26T09:16:04.448Z · score: 16 (18 votes)
Notes on logical priors from the MIRI workshop 2013-09-15T22:43:35.864Z · score: 18 (19 votes)
An argument against indirect normativity 2013-07-24T18:35:04.130Z · score: 1 (14 votes)
"Epiphany addiction" 2012-08-03T17:52:47.311Z · score: 52 (56 votes)
AI cooperation is already studied in academia as "program equilibrium" 2012-07-30T15:22:32.031Z · score: 36 (37 votes)
Should you try to do good work on LW? 2012-07-05T12:36:41.277Z · score: 36 (41 votes)
Bounded versions of Gödel's and Löb's theorems 2012-06-27T18:28:04.744Z · score: 32 (33 votes)
Loebian cooperation, version 2 2012-05-31T18:41:52.131Z · score: 13 (14 votes)
Should logical probabilities be updateless too? 2012-03-28T10:02:09.575Z · score: 12 (15 votes)
Common mistakes people make when thinking about decision theory 2012-03-27T20:03:08.340Z · score: 51 (46 votes)
An example of self-fulfilling spurious proofs in UDT 2012-03-25T11:47:16.343Z · score: 20 (21 votes)
The limited predictor problem 2012-03-21T00:15:26.176Z · score: 10 (11 votes)

Comments

Comment by cousin_it on Can we really prevent all warming for less than 10B$ with the mostly side-effect free geoengineering technique of Marine Cloud Brightening? · 2019-08-24T19:48:10.637Z · score: 8 (3 votes) · LW · GW

The 2015 report "Climate Intervention: Reflecting Sunlight to Cool Earth" says existing instruments aren't precise enough to measure the albedo change from such a project, and measuring its climate impact is even trickier. That also makes small-scale experimentation difficult. Basically you'd have to go to 100% and then hope it worked. As someone who has run many A/B tests, I find that squicky; I wouldn't press the button until we had better ways to measure the impact.

Comment by cousin_it on how should a second version of "rationality: A to Z" look like? · 2019-08-24T09:39:58.907Z · score: 10 (4 votes) · LW · GW

I think most people already know that spending less time looking at screens is a good idea. What's missing is a way to do that without relapsing. Willpower and guilt don't work for everyone. Here are some things that work for me:

  1. Not carrying a smartphone. I never owned one, and never felt bad about it.

  2. Unfollowing everyone on Facebook, so my feed is empty, but my account still exists and people can contact me.

  3. Switching my laptop screen to grayscale. (An e-ink MacBook would be even better, but those don't exist.)

Comment by cousin_it on Troll Bridge · 2019-08-24T00:08:15.611Z · score: 3 (1 votes) · LW · GW

Agree and strong upvote. I never understood why Abram and others take Troll Bridge so seriously.

Comment by cousin_it on Logical Optimizers · 2019-08-23T13:23:46.916Z · score: 6 (3 votes) · LW · GW

Isn't there a "telomere shortening" effect, where an optimizer using a strong theory can't prove good behavior of successors using the same theory, and will pick a successor with a weaker theory? Using logical inductors could help with that, but you'd need to spell it out in detail.

Comment by cousin_it on Why are people so optimistic about superintelligence? · 2019-08-23T09:32:16.205Z · score: 7 (4 votes) · LW · GW

Since randomly combining human genes was enough to create John von Neumann, and human neurons fire fewer than 1000 times per second, it seems like there should be a straight (if long) path to building an intelligence as strong as N von Neumanns working together at M times speedup, for N and M at least 1000. That's super enough.

Comment by cousin_it on Thoughts from a Two Boxer · 2019-08-23T01:16:10.056Z · score: 6 (3 votes) · LW · GW

Well, there are other problems besides Newcomb. Something like UDT can be motivated by simulations, or amnesia, or just multiple copies of the AI trying to cooperate with each other. All these lead to pretty much the same theory, that's why it's worth thinking about.

Comment by cousin_it on Response to Glen Weyl on Technocracy and the Rationalist Community · 2019-08-23T00:33:42.509Z · score: 15 (6 votes) · LW · GW

Yeah. The econ part wasn't so bad - I lived through the shock therapy of 90s Russia, and Glen is spot on when he blames it on unaccountable technocratic governance. But when it comes to AI alignment, it seems like he hasn't heard of corrigibility and interpretability work by MIRI, FHI and OpenAI.

Comment by cousin_it on Paradoxical Advice Thread · 2019-08-21T22:09:09.377Z · score: 18 (6 votes) · LW · GW

I once saw a meme that said "If actions speak louder than words, why is the pen mightier than the sword?"

Comment by cousin_it on Unstriving · 2019-08-21T09:46:43.535Z · score: 11 (4 votes) · LW · GW

I agree that striving for success is likely to lead to disappointment. But life without striving also sounds disappointing. How about we let go of success, but keep doing challenging stuff anyway, just for the fun of it?

Comment by cousin_it on Open Thread June 2010, Part 3 · 2019-08-21T09:26:03.140Z · score: 5 (2 votes) · LW · GW

Coming back to this question after a few years, I was able to find a surprisingly simple Econ 101 answer in five minutes. To zeroth order, there's no change because the amount of goods and services in the economy stays the same. To first order, allowing a deal to be freely made usually increases total value in the economy, not just the value for those making the deal; so this deal is good for the economy iff both sides agree to it.

That sidesteps all complications like "the parents are happy to help their child", "the apartment might have facilities that the child doesn't need", etc. I guess reading an econ textbook has taught me to look for ways to estimate the total without splitting it up.

Comment by cousin_it on What authors consistently give accurate pictures of complex topics they discuss? · 2019-08-21T09:10:51.883Z · score: 8 (5 votes) · LW · GW

David MacKay's Sustainable Energy Without the Hot Air is old but really good. Here's an intro paragraph:

The first half of this book discusses whether a country like the United Kingdom, famously well endowed with wind, wave, and tidal resources, could live on its own renewables. We often hear that Britain’s renewables are “huge”. But it’s not sufficient to know that a source of energy is “huge”. We need to know how it compares with another “huge”, namely our huge consumption. To make such comparisons, we need numbers, not adjectives.

I wish there were a similar book about basic income, or maybe it exists and I just don't know about it.

Comment by cousin_it on A misconception about immigration · 2019-08-20T18:10:01.969Z · score: 3 (2 votes) · LW · GW

I think splitting it up only obscures the overall effect: for the economy as a whole, giving an external agent fewer goods and services in return for the same cleaning service is better than giving more.

Comment by cousin_it on A misconception about immigration · 2019-08-20T07:38:40.720Z · score: 5 (2 votes) · LW · GW

It is instructive to consider robots in this context. They replace local human workers like immigrants, but unlike immigrants, they do not have the same demand profile as humans. In return for their work, they ask for energy, machinery and engineering. This type of demand undoubtedly creates fewer jobs for humans compared to an immigrant worker. So, when it comes to the health of the economy, you should fear robots much more than immigrants.

According to my beginner understanding of econ, this part seems wrong. In aggregate, a household or country will benefit more from a robot which cleans floors at the expense of a little electricity, than from an extra person who does the same job but also requires room and board.

Comment by cousin_it on Matthew Barnett's Shortform · 2019-08-15T08:50:50.784Z · score: 4 (2 votes) · LW · GW

Maybe commit to spending at least N minutes on any exercise before looking up the answer?

Comment by cousin_it on Matthew Barnett's Shortform · 2019-08-14T06:27:29.596Z · score: 5 (3 votes) · LW · GW

When I try to read a textbook cover to cover, I find myself much more concerned with finishing rather than understanding. I want the satisfaction of being able to say I read the whole thing, every page. This means that I will sometimes cut corners in my understanding just to make it through a difficult part. This ends in disaster once the next chapter requires a solid understanding of the last.

When I read a textbook, I try to solve all exercises at the end of each chapter (at least those not marked "super hard") before moving to the next. That stops me from cutting corners.

Comment by cousin_it on Occam's Razor: In need of sharpening? · 2019-08-06T22:53:04.702Z · score: 7 (3 votes) · LW · GW

Yeah, that's a good summary of my view (except maybe I wouldn't even persist into the fourth paragraph). Thanks!

Comment by cousin_it on No nonsense version of the "racial algorithm bias" · 2019-08-06T19:39:08.490Z · score: 3 (1 votes) · LW · GW

I don't think everyone should be parity-fair to everyone else - that's infeasible. But I do think the government should be parity-fair. For example, a healthcare safety net shouldn't rely on free-market insurance where the sick pay more. It's better to have a system like in Switzerland where everyone pays the same.

Comment by cousin_it on Occam's Razor: In need of sharpening? · 2019-08-06T19:14:41.858Z · score: 3 (1 votes) · LW · GW

Yeah, I agree it's unlikely that the equations of nature include a humanlike mind bossing things around. I was arguing against a different idea - that lightning (a bunch of light and noise) shouldn't be explained by Thor (a humanlike creature) because humanlike creatures are too complex.

Comment by cousin_it on How would a person go about starting a geoengineering startup? · 2019-08-06T18:42:18.299Z · score: 4 (2 votes) · LW · GW

Maybe chat with the folks from Marine Cloud Brightening Project and ask which obstacles they faced?

Comment by cousin_it on Just Imitate Humans? · 2019-08-06T11:55:20.761Z · score: 5 (2 votes) · LW · GW

Wow, when I click these links from greaterwrong.com, they go to arbital.greaterwrong.com which loads instantly. Thanks to Said for the nice work!

Comment by cousin_it on Occam's Razor: In need of sharpening? · 2019-08-06T07:03:33.972Z · score: 6 (3 votes) · LW · GW

Thor isn't in the theory quite so directly :-) In Norse mythology he's a creature born to a father and mother, a consequence of initial conditions just like you.

Sure, you'd have to believe that initial conditions were such as would lead to Thor. But if I told you I had a neighbor named Bob, you'd have no problem believing that initial conditions were such as would lead to Bob the neighbor. You wouldn't penalize the Bob hypothesis by saying "Bob's brain is too complicated", so neither should you penalize the Thor hypothesis for that reason.

The true reason you penalize the Thor hypothesis is that he has supernatural powers, unlike Bob. Which is what I've been saying since the first comment.

Comment by cousin_it on Occam's Razor: In need of sharpening? · 2019-08-05T20:28:33.922Z · score: 3 (3 votes) · LW · GW

In current physical theories (the Standard Model and General Relativity), the brains are not described in the math, rather brains are a consequence of the theories carried out under specific conditions.

Yeah. But I'm not sure you got the point of my argument. If your brain is a consequence of theory+conditions, why should the hypothesis of another humanlike brain (Thor) be penalized for excessive complexity under the same theory+conditions?

Comment by cousin_it on Occam's Razor: In need of sharpening? · 2019-08-04T22:55:19.807Z · score: 24 (7 votes) · LW · GW

An illustrative example is that, when explaining lightning, Maxwell’s equations are simpler in this sense than the hypothesis that Thor is angry because the shortest computer program that implements Maxwell’s equations is much simpler than an emulation of a humanlike brain and its associated emotions.

I just realized that this argument, long accepted on LW, seems to be wrong. Once you've observed a chunk of binary tape that has at least one humanlike brain (you), it shouldn't take that many bits to describe another (Thor). The problem with Thor isn't that he's humanlike - it's that he has supernatural powers, something you've never seen. These supernatural powers, not the humanlike brain, are the cause of the complexity penalty. If something non-supernatural happens, e.g. you find your flower vase knocked over, it's fine to compare hypotheses "the wind did it" vs "a human did it" without penalizing the latter for humanlike brain complexity.

(I see Peter de Blanc and Abram Demski already raised this objection in the comments to Eliezer's original post, and then everyone including me cheerfully missed it. Ouch.)
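One way to make the argument precise (a sketch of mine, in conditional Kolmogorov complexity; none of this notation comes from the original thread):

```latex
% Let y be the observed tape, which already contains one humanlike
% brain (yours), and let b be Thor's brain. A second brain is cheap
% to specify as a diff against the first:
K(b \mid y) \le K(\mathrm{delta}(\mathrm{you}, b)) + O(1) \ll K(b)
% The hypothesis "Thor with powers p" then costs roughly
K(b, p \mid y) \le K(b \mid y) + K(p \mid y, b) + O(1)
% and only K(p | y, b) is large, because nothing resembling
% supernatural powers appears anywhere else on the tape.
```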

Comment by cousin_it on Inversion of theorems into definitions when generalizing · 2019-08-04T19:20:27.215Z · score: 24 (9 votes) · LW · GW

Russian mathematician V.I. Arnold had a semi-famous rant against taking this inversion too far. Example quote:

What is a group? Algebraists teach that this is supposedly a set with two operations that satisfy a load of easily-forgettable axioms. This definition provokes a natural protest: why would any sensible person need such pairs of operations? "Oh, curse this maths" - concludes the student (who, possibly, becomes the Minister for Science in the future).

We get a totally different situation if we start off not with the group but with the concept of a transformation (a one-to-one mapping of a set onto itself) as it was historically. A collection of transformations of a set is called a group if along with any two transformations it contains the result of their consecutive application and an inverse transformation along with every transformation.

This is all the definition there is. The so-called "axioms" are in fact just (obvious) properties of groups of transformations. What axiomatisators call "abstract groups" are just groups of transformations of various sets considered up to isomorphisms (which are one-to-one mappings preserving the operations). As Cayley proved, there are no "more abstract" groups in the world. So why do the algebraists keep on tormenting students with the abstract definition?
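To make the transformation-based definition concrete, here's a minimal sketch (my example, not Arnold's): the four rotations of a square, checked for exactly the two closure properties he names.

```python
from itertools import product

# Transformations of the set {0,1,2,3} (corners of a square), written as
# tuples: t[i] is where corner i goes. Rotations by 0, 90, 180, 270 degrees.
rotations = [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)]

def compose(s, t):
    """Apply t first, then s."""
    return tuple(s[t[i]] for i in range(4))

def inverse(t):
    inv = [0] * 4
    for i, j in enumerate(t):
        inv[j] = i
    return tuple(inv)

# Arnold's definition: closed under composition and inverses. That's all.
assert all(compose(s, t) in rotations for s, t in product(rotations, repeat=2))
assert all(inverse(t) in rotations for t in rotations)
```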

Comment by cousin_it on Sleeping Beauty: Is Disagreement While Sharing All Information Possible in Anthropic Problems? · 2019-08-03T16:56:56.479Z · score: 7 (3 votes) · LW · GW

Thanks for writing this! To recap:

  1. Alice goes to sleep and a coin is flipped. Heads: wake up on both day 1 and day 2 with amnesia. Tails: wake up only on day 1.

  2. Bob goes to sleep and another coin is flipped. Heads: wake up on day 1. Tails: wake up on day 2.

  3. If Alice and Bob are awake on the same day, they meet and talk.

Now if Alice and Bob do meet, then Bob believes Alice's coin came up heads with probability 2/3. If Alice is a thirder, she agrees. But if Alice is a halfer, they have an unresolvable disagreement.
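Here's a quick Monte Carlo check of that 2/3 (a sketch I'm adding to the recap; the sampling setup is mine):

```python
import random

def trial():
    alice = random.choice("HT")   # H: Alice wakes on days 1 and 2; T: day 1 only
    bob = random.choice("HT")     # H: Bob wakes on day 1; T: on day 2
    alice_days = {1, 2} if alice == "H" else {1}
    bob_day = 1 if bob == "H" else 2
    return bob_day in alice_days, alice   # did they meet, and Alice's coin

meetings = heads = 0
for _ in range(100_000):
    met, alice = trial()
    if met:
        meetings += 1
        heads += alice == "H"

print(heads / meetings)  # ~2/3: they meet in 3 of the 4 equally likely
                         # coin combinations, and Alice's coin is heads
                         # in 2 of those 3
```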

Here's another thought experiment I came up with some time ago (first invented by Cian Dorr, I think):

  1. Alice goes to sleep and a coin is flipped. Heads: wake up on both day 1 and day 2 with amnesia. Tails: wake up only on day 1. Then she's told the coinflip result and goes home.

  2. In case of tails, when Alice gets home, she sets up her room to look the same as the experiment. Then she writes herself a note that she's not actually in the experiment, takes an amnesia pill, and goes to sleep.

Now Alice's situation is symmetric: in case of both heads and tails she wakes up twice. In case of tails, with probability 1/2 she finds the note and learns that she's not in the experiment. So if she doesn't find the note, she updates to 2/3 probability of heads.

Taken together, these experiments show that thirdism is robust both to perspective change and to giving yourself mini-amnesia on a whim. I don't know of any such nice properties for halfism. So I'm gonna jump the gun and say thirdism is most likely just true.

Comment by cousin_it on When does adding more people reliably make a system better? · 2019-07-19T09:02:47.011Z · score: 12 (2 votes) · LW · GW

Are you sure the difference you've noticed actually exists? The financial system crashed and hurt a lot of people, but rewarded a few people greatly. The same thing can happen in companies or communities - they can fail overall, but reward a few people greatly.

Comment by cousin_it on Dialogue on Appeals to Consequences · 2019-07-18T14:03:05.712Z · score: 9 (5 votes) · LW · GW

Yes, having strong unfiltered evidence for X can justify suppressing evidence against X. But if suppression is already in effect, and someone doesn't already have unfiltered evidence, I'm not sure where they'd get any. So the share of voters who can justify suppression will decrease over time.

Comment by cousin_it on Dialogue on Appeals to Consequences · 2019-07-18T12:42:11.728Z · score: 3 (1 votes) · LW · GW

So if evidence against X is being suppressed, then people's belief in X is unreliable, so it can't justify suppressing evidence against X. That's a great argument for free speech, thanks! Do you know if it's been stated before?

Comment by cousin_it on Reclaiming Eddie Willers · 2019-07-16T12:25:49.418Z · score: 8 (4 votes) · LW · GW

in my mind, the point of having ideals isn’t at all that they will reward you

Maybe not the whole point, but if I have some ideal that's about helping people, and these people turn out ungrateful again and again, it's a sign that the ideal might be wrong.

Comment by cousin_it on No nonsense version of the "racial algorithm bias" · 2019-07-16T12:14:18.651Z · score: 5 (2 votes) · LW · GW

I'll side with ProPublica, because my understanding of fairness (equal treatment for everyone) seems to be closer to parity than calibration. For example, a test that always returns positive or always flips a coin is parity-fair but not calibration-fair.
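Here's a small sketch (toy numbers of my own, not ProPublica's data) showing why a coin-flip test is parity-fair but not calibration-fair:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two groups with different base rates of the outcome being predicted.
base_rate = {"A": 0.5, "B": 0.2}
for group, p in base_rate.items():
    outcome = rng.random(100_000) < p        # who actually has the outcome
    flagged = rng.random(100_000) < 0.5      # the coin-flip "test"
    fpr = flagged[~outcome].mean()           # false positive rate
    fnr = (~flagged)[outcome].mean()         # false negative rate
    ppv = outcome[flagged].mean()            # calibration: P(outcome | flagged)
    print(group, round(fpr, 3), round(fnr, 3), round(ppv, 3))

# Both groups get FPR = FNR = 0.5 (parity-fair), but P(outcome | flagged)
# just equals each group's base rate (0.5 vs 0.2), so the test assigns
# the same score to people with very different actual risk: not calibrated.
```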

Comment by cousin_it on Reclaiming Eddie Willers · 2019-07-15T09:39:17.612Z · score: 4 (5 votes) · LW · GW

This will sound horrible, but I think the view of idealists as fools has a grain of truth. There are many people, entities and ideas that, if you love them, will reward you generously. Why waste your love on something that won't reward you?

Comment by cousin_it on The AI Timelines Scam · 2019-07-12T10:28:08.416Z · score: 5 (2 votes) · LW · GW

Which is not to say that modeling such technical arguments is not important for forecasting AGI. I certainly could have written a post evaluating such arguments, and I decided to write this post instead, in part because I don’t have much to say on this issue that Gary Marcus hasn’t already said.

Is he an AI researcher though? Wikipedia says he's a psychology professor, and his arXiv article criticizing deep learning doesn't seem to have much math. If you have technical arguments, maybe you could state them?

Comment by cousin_it on Religion as Goodhart · 2019-07-10T15:32:53.163Z · score: 3 (3 votes) · LW · GW

Faith never recommends randomization, it justifies it. Like trusting the tea leaves to predict the future.

Don't know about other faiths, but Christianity is pretty strongly against divination in practice.

Comment by cousin_it on Religion as Goodhart · 2019-07-10T08:20:46.956Z · score: 3 (1 votes) · LW · GW

I can't think of any examples where religion recommends randomization. Also, randomness in physics is cheap, and nature uses randomization in many rock-paper-scissors games without requiring religion or even brains.

Comment by cousin_it on Self-consciousness wants to make everything about itself · 2019-07-06T10:29:19.573Z · score: 8 (4 votes) · LW · GW

It's not just about thrill-seeking though. A bit of courage can improve quality of life by making you less scared of events that happen, like the event habryka described.

Comment by cousin_it on Self-consciousness wants to make everything about itself · 2019-07-05T11:45:24.096Z · score: 10 (10 votes) · LW · GW

I think living with courage and dignity is more fun in any social circumstances.

Comment by cousin_it on Self-consciousness wants to make everything about itself · 2019-07-04T08:41:30.751Z · score: 7 (9 votes) · LW · GW

Saying "fuck you" and waiting ten seconds? There's a good chance they were trying to bait you into a reportable offense. That said, you should've used more courage in the moment, and your "friends" should've backed you up. If you find yourself in that situation again, try saying "no you" and improvise from there.

Comment by cousin_it on Conceptual Problems with UDT and Policy Selection · 2019-07-04T07:14:16.009Z · score: 5 (2 votes) · LW · GW

A “naturalistic” approach to game theory is one in which game theory is an application of decision theory (not an extension) -- there should be no special reasoning which applies only to other agents.

But game theory doesn't require such special reasoning! It doesn't care how players reason. They might not reason at all, like the three mating variants of the side-blotched lizard. And when they do reason, game theory still shows they can't reason their way out of a situation unilaterally, no matter if their decision theory is "naturalistic" or not. So I think of game theory as an upper bound on all possible decision theories, not an application of some future decision theory.
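As a concrete illustration (my sketch; the payoffs, starting frequencies and step size are made up), replicator dynamics on the lizards' rock-paper-scissors game tracks the mixed equilibrium without any player reasoning at all:

```python
import numpy as np

# Payoff matrix for rock-paper-scissors, like the three side-blotched
# lizard morphs: each one beats one rival and loses to the other.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

x = np.array([0.5, 0.3, 0.2])        # initial morph frequencies
avg = np.zeros(3)
steps = 20_000
for _ in range(steps):
    fitness = A @ x                  # payoff of each morph vs the population
    x = x * (1 + 0.01 * fitness)     # discrete replicator-style update
    x /= x.sum()
    avg += x / steps

print(avg)  # time-averaged frequencies stay near the mixed equilibrium
            # (1/3, 1/3, 1/3): the population cycles around it, with no
            # individual doing any reasoning
```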

Comment by cousin_it on Instead of "I'm anxious," try "I feel threatened" · 2019-06-29T22:43:26.591Z · score: 6 (3 votes) · LW · GW

I've had some success overcoming everyday fears by trying to be courageous in my thoughts and actions. For me it works better than trying to think my way out of feeling afraid. But I never had chronic anxiety and don't know if that approach would work there - have people tried it?

Comment by cousin_it on Conceptual Problems with UDT and Policy Selection · 2019-06-29T22:15:20.936Z · score: 6 (3 votes) · LW · GW

UDT doesn’t give us conceptual tools for dealing with multiagent coordination problems.

I think there's no best player of multiplayer games. Or rather, choosing the best player depends on what other players exist in the world, and that goes all the way down (describing the theory of choosing the best player also depends on what other players exist, and so on).

Of course that doesn't mean UDT is the best we can do. We cannot solve the whole problem, but UDT carves out a chunk, and we can and should try to carve out a bigger chunk.

For me the most productive way has been to come up with crisp toy problems and try to solve them. (Like ASP, or my tiling agents formulation.) Your post makes many interesting points; I'd love to see crisp toy problems for each of them!

Comment by cousin_it on A simple approach to 5-and-10 · 2019-06-22T13:50:45.066Z · score: 3 (1 votes) · LW · GW

For the glider vs honeycomb maximizer, I think the problem is agreeing on what division of the universe counts as (C,C).

Comment by cousin_it on Paternal Formats · 2019-06-10T06:02:29.385Z · score: 9 (3 votes) · LW · GW

How about sequential vs random-access?

Comment by cousin_it on Paternal Formats · 2019-06-09T20:59:23.262Z · score: 7 (4 votes) · LW · GW

Or linear vs open-world, as in video games.

Comment by cousin_it on Quotes from Moral Mazes · 2019-05-31T12:36:09.874Z · score: 4 (2 votes) · LW · GW

Well, he was talking about marketing vs choice of market, and my comment was riffing on that :-) The book uses the quote to make a point about individual credit, but I'm not sure it fits - even if success depends only on choice of market, individuals can still deserve credit for choosing a great market.

Comment by cousin_it on Quotes from Moral Mazes · 2019-05-31T10:17:24.519Z · score: 4 (2 votes) · LW · GW

One of the top executives in Weft Corporation echoes this sentiment: I always say that there is no such thing as a marketing genius; there are only great markets.

Some markets are mainly propped up by marketing though, like diamonds. But I agree that it's more virtuous to fulfill needs that already exist.

Comment by cousin_it on Infinity is an adjective like positive rather than an amount · 2019-05-30T18:06:54.140Z · score: 5 (2 votes) · LW · GW

Another possible metaphor is to think of infinities as second-class citizens. For example, in our world dragons don't exist, but if they existed they wouldn't be able to ride the subway as easily as humans, because that would pose practical problems for both dragons and humans. Same for infinities - in the world of numbers they don't really exist, but if they existed, it's not clear how we could extend them equal rights of addition and so on. It's up to politicians/mathematicians to imagine a world where dragons/infinities can live on equal terms with humans/numbers, and maybe such a world just can't be imagined in a way that makes sense.
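As a worked example (my illustration) of a "right" that can't be granted:

```latex
% If \infty were a full citizen of arithmetic, we'd have
\infty + 1 = \infty
% and subtracting \infty from both sides, as any number may, would give
1 = 0
% So every consistent treatment (cardinals, ordinals, the extended reals)
% revokes some right, e.g. leaves \infty - \infty undefined.
```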

Comment by cousin_it on Say Wrong Things · 2019-05-28T12:52:03.179Z · score: 9 (4 votes) · LW · GW

I think both overconfidence and underconfidence are widespread, so it's hard to tell which advice would do more good. Maybe we can agree that people tend to over-share their conclusions and under-share their evidence? That seems plausible overall; advising people to shift toward sharing evidence might help address both underconfidence (because evidence feels safer to share) and overconfidence (because people will notice if the evidence doesn't warrant the conclusion); and it might help with other problems as well, like double-counting evidence due to many people stating the same conclusion.

Comment by cousin_it on Micro feedback loops and learning · 2019-05-28T12:18:30.390Z · score: 5 (2 votes) · LW · GW

It's in equal temperament, right? Have you thought about using just intonation?

I had always bought the story that the two are close enough for most musical purposes, but a few weeks ago I unfretted my guitar and started playing music in JI, and it's night and day. For example, in ET the major third from C to E sounds kind of restless, while in JI it's peaceful with all harmonics overlapping as they should (4:5 frequency ratio). Same for the minor third (5:6), hearing the JI interval makes me feel like "ah, so that's what the ET interval was trying to hint at". And then there's the harmonic seventh chord (4:5:6:7), which sounds very musical but can't be imitated on an ET guitar or piano at all.
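For concreteness, here's the size of those mismatches in cents (my own calculation; a cent is 1/100 of an ET semitone):

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200 * math.log2(ratio)

# Just ratios vs the nearest 12-tone equal-temperament interval (in semitones).
intervals = [
    ("major third", 5 / 4, 4),
    ("minor third", 6 / 5, 3),
    ("harmonic seventh", 7 / 4, 10),
]
for name, ji, et_semitones in intervals:
    et = et_semitones * 100  # each ET semitone is exactly 100 cents
    print(f"{name}: JI {cents(ji):6.1f}c, ET {et}c, off by {et - cents(ji):+5.1f}c")

# major third:      JI 386.3c, ET 400c,  off by +13.7c
# minor third:      JI 315.6c, ET 300c,  off by -15.6c
# harmonic seventh: JI 968.8c, ET 1000c, off by +31.2c  (why ET can't imitate it)
```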

Comment by cousin_it on Von Neumann’s critique of automata theory and logic in computer science · 2019-05-28T11:56:18.370Z · score: 14 (7 votes) · LW · GW

I'm confused. It seems to me that if we already have a discrete/combinatorial mess on our hands, sprinkling some chance of failure on each little gear won't summon the analysis fairy and make the mess easier to reason about. Or at least we need to be clever about the way randomness is added, to make the mess settle into something analytical. But von Neumann's article sounds more optimistic about this approach. Does anyone understand why?

Comment by cousin_it on What is your personal experience with "having a meaningful life"? · 2019-05-24T11:20:58.636Z · score: 4 (2 votes) · LW · GW

I guess I simply noticed that when I spend a lot of time and attention on something, it becomes important to me.

For example, as a Russian person living abroad, I can choose every day to either read Russian domestic news or abstain. If I spend a few days reading news, they start feeling important to my life and kinda unpleasant. Then I stop and it fades away again.

So more generally, when something "meaningful" is making me miserable, I can spend time on something else, knowing that soon enough it will be the new "meaningful" thing. Sometimes it's hard, if I'm very locked in to the old thing, but I've been in that situation many times (miserable about something that feels super important in the moment) and it does get a bit easier with experience.