Posts

The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. 2015-11-28T11:07:40.531Z
Sidekick Matchmaking - How to tackle the problem? 2015-10-23T19:35:54.656Z
Effectively Less Altruistically Wrong Codex 2015-06-16T19:00:53.934Z
Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction 2015-06-08T16:37:02.177Z
Confession Thread: Mistakes as an aspiring rationalist 2015-06-02T18:10:08.901Z
Compilation of currently existing project ideas to significantly impact the world 2015-03-08T04:59:39.765Z
Sidekick Matchmaking 2015-02-19T00:13:30.498Z
An alarming fact about the anti-aging community 2015-02-16T17:49:28.384Z
Human Minds are Fragile 2015-02-11T18:40:17.275Z
Should EA's be Superrational cooperators? 2014-09-16T21:41:10.712Z
Compiling my writings for Lesswrong and others. 2014-07-22T08:11:21.095Z
Effective Writing 2014-07-18T20:45:09.961Z
Bragging Thread, July 2014 2014-07-14T03:22:21.404Z
Is there a way to stop liking sugar? 2014-06-11T20:21:11.465Z
Will Darkcoin Have Thiel's Last Mover Advantage 2014-05-23T21:39:41.483Z
Ergonomics Revisited 2014-04-22T21:57:55.873Z
My book: Simulating Dennett - This Wednesday in Sao Paulo 2014-03-17T08:15:43.933Z
On not getting a job as an option 2014-03-11T02:44:39.938Z
Should one be sad when an opportunity is lost? 2014-03-11T01:48:26.369Z
[Link] Cause Prioritization - Paul Christiano 1:15h 2014-02-07T06:23:58.436Z
How Not to Make Money 2014-01-24T20:36:07.078Z
January Monthly Bragging Thread 2014-01-06T21:44:36.664Z
In Praise of Tribes that Pretend to Try: Counter-"Critique of Effective Altruism" 2013-12-02T07:18:27.197Z
Making the chaff invisible, and getting the wheat ($200 prize too) 2013-11-29T01:19:02.419Z
Noticing something completely absurd about yourself 2013-11-26T19:24:09.578Z
Happiness and Productivity. Living Alone. Living with Friends. Living with Family. 2013-11-19T01:35:53.684Z
You are the average of the five people you spend most time with. 2013-09-04T23:02:42.482Z
Meetup : São Paulo, Transhumanist Manifestation, Lesswrongers invited 2013-06-30T19:01:55.912Z
From Capuchins to AI's, Setting an Agenda for the Study of Cultural Cooperation (Part2) 2013-06-28T10:20:48.519Z
From Capuchins to AI's, Setting an Agenda for the Study of Cultural Cooperation (Part1) 2013-06-27T06:08:30.001Z
[Link] Status Anxiety 2013-06-10T19:32:02.319Z
2013 June-August Life Hacks Thread 2013-06-04T14:29:27.435Z
Karma as Money 2013-06-02T01:46:53.931Z
Is there any way to avoid Post Narcissism? [with Video link] 2013-05-28T22:07:46.617Z
Research is polygamous! The importance of what you do needn't be proportional to your awesomeness 2013-05-26T22:29:39.239Z
Why is it rational to invest in retirement? I don't get it. 2013-05-16T01:28:42.000Z
Catching Up With the Present From the Developing World 2013-05-07T12:59:06.778Z
Using Evolution for Marriage or Sex 2013-05-06T05:34:57.443Z
Are there good reasons to get into a PHD (i.e. in Philosophy)? And what to optimize for in such case? 2013-04-27T15:34:57.764Z
Open Thread, April 15-30, 2013 2013-04-15T19:57:51.597Z
Pay Other Species to Pandemize Vegetarianism for You 2013-04-15T03:10:47.407Z
"I know what she has to offer already" is almost always false 2013-04-10T13:59:19.201Z
Drowning In An Information Ocean 2013-03-30T04:32:42.980Z
Is The Blood Thicker Near The Tropics? Trade-Offs Of Living In The Cold 2013-03-28T17:14:46.884Z
Amending the "General Pupose Intelligence: Arguing the Orthogonality Thesis" 2013-03-13T23:21:16.887Z
Positive Information Diet, Take the Challenge 2013-03-01T14:51:45.444Z
Seize the Maximal Probability Moment 2013-02-28T11:22:11.463Z
Let's make a "Rational Immortalist Sequence". Suggested Structure. 2013-02-24T19:12:57.435Z
Calibrating Against Undetectable Utilons and Goal Changing Events (part2and1) 2013-02-22T01:09:16.987Z
Calibrating Against Undetectable Utilons and Goal Changing Events (part1) 2013-02-20T09:09:04.562Z

Comments

Comment by diegocaleiro on The sad state of Rationality Zürich - Effective Altruism Zürich included · 2018-03-01T19:07:30.890Z · LW · GW

Copied from the Heterodox Effective Altruism facebook group (https://www.facebook.com/groups/1449282541750667/):

Giego Caleiro I've read the comments and now speak as me, not as Admin:
It seems to me that the Zurich people were right to exclude Roland from their events. Let me lay out my reasons, based on extremely partial information:

1) IF Roland keeps bringing up topics that are not EA, such as 9/11 and Thai prostitutes, the burden is on him both to be clear and to justify why those topics deserve to be there.

2) The politeness of EAs is in great part the reason that some SJWs managed to infiltrate the movement. Having regulations and rules that determine who can be kicked out is bad, because such rules are a weapon that SJWs have been known to wield with great care and precision. That is, I much prefer a group where people are kicked out without justification to one in which a reason must be given (I say this as someone who was kicked out of at least 2 physical spaces related to EA, so it does not come lightly). Competition drives out SJWs, so I would recommend that Roland create a new meeting that is more valuable than its predecessor, and attract people to it. (This community was created by me, with me as an admin, precisely for those reasons. I believed that I could legitimately help generate more valuable debate than previous EA groups, including one that I myself created but feared would be taken over by more SJWish types. This one is protected.)

3) Another reason to be pro-kicking-out: Tartre and I run a facebook chat group where I make a point of never explaining why anyone is kicked out. As far as I can tell, it has the best density of interesting topics of any facebook chat related to rationalists and EAs. It is necessary to be selective.

4) That said: being excluded from social groups is horrible; it feels like dying to a lot of people, and it makes others fear it happening to them like the plague. So it allows for the kind of pernicious coordination described in (DeScioli 2013) and full-blown Girardian scapegoating. There's a balance that needs to be struck to keep SJWs from taking over little bureaucracies, then mobbing people out, and thus tyrannizing others into acquiescence with whatever is their current-day flavour of acceptable speech.

5) Because being excluded from social groups is horrible, HEAs need to create a welcoming network of warmth and kindness towards those who are excluded or accused. We don't want people to feel like they are dying; we don't want their hippocampi compromised and their serotonin levels lowered. Why? Because this happens to a LOT of people when they transition from being politically left-leaning to being politically right-leaning (or when they take the sexual-strategy Red Pill). If we, HEAs, side with the accusers, the scapegoaters, the mob, we will be one more member of the Ochlocracy. This is both anti-utilitarian, as the harm to the excluded party is nearly unbearable, and anti-heterodox, as in all likelihood the person was excluded at least in part for not sharing a belief or behavioral pattern with those who are doing the excluding. So I highly recommend that, on priors, HEAs come forth in favor of the accused person.

During my own little scapegoating event, Divia Caroline Eden was nice enough to give me a call and inquire about my psychological health, make sure I wasn't going to kill myself and that sort of thing (people literally do that; if you have never been scapegoated, you cannot fathom what it is like, and it cannot be expressed in words), and maybe 4 other people messaged me online showing equal niceness and appreciation.

Show that to Roland now, and maybe he'll return the favor when and if it happens to you. As an HEA, you are already in the at-risk group.

Comment by diegocaleiro on On not getting a job as an option · 2018-02-01T02:50:20.315Z · LW · GW

Eric Weinstein argues strongly against returns being at 20th-century levels, and says they are now vector fields, not scalars. I concur (not that I matter).

Comment by diegocaleiro on Winning is for Losers · 2017-10-15T14:53:30.701Z · LW · GW

The Girardian conclusion and the general approach of this text make sense.
But the best-performing strategy is a forgiving one, something like tit for two tats, which is worth emphasizing (a minimal sketch is below).
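For readers who don't know the strategy being gestured at, here is a minimal sketch of "tit for two tats", the forgiving variant that only retaliates after two consecutive defections, in an iterated prisoner's dilemma. The payoff values and the always-defecting opponent are illustrative assumptions, not anything from the comment:

```python
# Toy iterated prisoner's dilemma with a forgiving "tit for two tats" player.
# Payoff values are the standard textbook ones (T=5, R=3, P=1, S=0), chosen for illustration.

PAYOFFS = {  # (my_move, their_move) -> my payoff; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_two_tats(opponent_history):
    """Cooperate unless the opponent defected on both of the last two rounds."""
    if len(opponent_history) >= 2 and opponent_history[-1] == opponent_history[-2] == 'D':
        return 'D'
    return 'C'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)   # each strategy only sees the opponent's past moves
        move_b = strategy_b(history_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_two_tats, always_defect))  # the forgiving player eats two losses, then retaliates
```

The point is just that the forgiving player tolerates a single defection (noise, a bad day) without spiraling into mutual punishment, which is why such strategies tend to do well in noisy tournaments.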
Also, it seems you are putting some moral value on long-term mating that doesn't necessarily reflect our emotional systems or our evolutionary drives. Short-term mating is very common and is seen in most societies where there are enough resources to go around and enough intersexual geographical proximity. Recently, more and stronger arguments have been emerging against female short-term strategies. But it would be a stretch to claim that we already know decisively that the expected value for a female of short-terming is necessarily negative. It may depend on fetal androgens, and it may be that the measurements made so far used biased samples to calculate the cost of female promiscuity. In the case of males, as far as I know, there is literally no data associating short-terming with long-term QALY loss, none. But I'd be happy to be corrected.
Notice also that the moral question is always about the sex you are not. If you are female, and the data says it doesn't affect males, then you are free to do whatever. If you are male, and the data says short-terming females become unhappy in the long term, then the moral responsibility for that falls on you, especially if there's information asymmetry.

Comment by diegocaleiro on I Want To Live In A Baugruppe · 2017-03-17T03:37:35.247Z · LW · GW

This sounds cool. Somehow it reminded me of an old, old essay by Russell on architecture.

It's not that relevant, so this is just in case people are curious.

Comment by diegocaleiro on Is The Blood Thicker Near The Tropics? Trade-Offs Of Living In The Cold · 2017-03-03T08:53:32.414Z · LW · GW

I am now a person who moved during adulthood, and I can report that past me was right, except that he did not account for rent.

Comment by diegocaleiro on Calibrating Against Undetectable Utilons and Goal Changing Events (part2and1) · 2017-02-22T12:42:10.180Z · LW · GW

It seems to me the far self is more orthogonal to your happiness. You can try to optimize for maximal long term happiness.

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-12-05T04:46:57.475Z · LW · GW

Interesting that I conveyed that. I agree with Owen Cotton-Barratt that we ought to focus efforts now on sooner paths (fast takeoff soon) and not on the other paths, because more resources will be allocated to FAI in the future, even if a fast takeoff soon is a low-probability scenario.

I personally work on inserting concepts, and moral concepts, into AGI, because for almost anything else I could do there are already people who will do it better, and this is an area that intersects with a lot of my knowledge areas while still being AGI-relevant. See the link in the comment above with my proposal.

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-12-02T05:30:12.914Z · LW · GW

Not my reading. My reading is that Musk thinks people should not consider the historical probability of succeeding as a spacecraft startup (0%) but instead should reason from first principles, such as asking what materials a rocket is made from and then building up the costs from the ground up.
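To make the contrast concrete, here is a toy sketch of the two estimation styles being contrasted: a reference-class (base-rate) estimate versus a first-principles, bottom-up cost build. Every number below is an invented placeholder, not an actual rocket or startup figure:

```python
# Toy contrast between reference-class reasoning and first-principles reasoning.
# All figures are invented placeholders purely for illustration.

# Reference class (outside view): "what happened to startups like this before?"
past_spacecraft_startups = {"succeeded": 0, "failed": 12}            # hypothetical tally
base_rate = past_spacecraft_startups["succeeded"] / sum(past_spacecraft_startups.values())
print(f"Reference-class success estimate: {base_rate:.0%}")          # 0% -> "don't even try"

# First principles (inside view): "what would it actually cost to build one?"
material_costs = {                       # hypothetical ($ per kg, kg needed)
    "aluminium":    (3, 20_000),
    "carbon_fiber": (20, 5_000),
    "fuel":         (1, 400_000),
    "avionics":     (500, 200),
}
raw_materials = sum(price * kg for price, kg in material_costs.values())
manufacturing_overhead = 3 * raw_materials      # assumed multiplier, also made up
bottom_up_cost = raw_materials + manufacturing_overhead
print(f"Bottom-up cost estimate: ${bottom_up_cost:,.0f}")

# The point: the bottom-up number can be attacked and shrunk component by component,
# whereas the base rate just tells you to give up.
```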

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-30T14:04:18.806Z · LW · GW

You have correctly identified that I wrote this post while very unhappy. The comments, as you can see by their lighthearted tone, I wrote pretty happy.

Yes, I stand by those words even now (that I am happy).

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-30T06:13:39.774Z · LW · GW

I am more confident that we can produce software that can classify images, music and faces correctly than I am that we can integrate the multimodal aspects of these modules into a coherent being that thinks it has a self, goals and identity, and that can reason about morality. That's what I tried to address in my FLI grant proposal, which was rejected (by the way, correctly so; it needed the latest improvements, and clearly - if they actually needed it - AI money should reach Nick, Paul and Stuart before our team). We'll be presenting it in Oxford, tomorrow?? Shhh, don't tell anyone, here, just between us, you get it before the Oxford professors ;) https://docs.google.com/document/d/1D67pMbhOQKUWCQ6FdhYbyXSndonk9LumFZ-6K6Y73zo/edit

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-30T04:21:12.204Z · LW · GW

We have unconfirmed, simplified hypotheses, with nice drawings, for how microcircuits in the brain work. They ignore more than a million things (literally: they just have to ignore specific synapses, the multiplicity of synaptic connections, etc.; if you sum those things up and look at the model, I would say it ignores about that many things). I'm fine with simplifying assumptions, but the cortical microcircuit models are a butterfly flying in a hurricane.

The only reason we understand V1 is that it is an inverted retinotopic map that has been through very few non-linear transformations - same for the tonotopic auditory areas. By V4, we are already completely lost (for those who don't know, the brain has between 100 and 500 areas depending on how you count, and we have a middling guess at a simplified model that applies well to two of them and moderately well to some 10-25). And even if you could say which functions V4 participates in most, that would not tell you how it does it.

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-30T02:29:14.510Z · LW · GW

Oh, so boring..... It was actually me myself screwing up a link I think :(

Skill: being censored by people who hate censorship. Status: not yet accomplished.

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T23:11:58.268Z · LW · GW

Wow, that's so cool! My message was censored and altered.

Lesswrong is growing an intelligentsia of its own.

(To be fair to the censoring part, the message contained a link directly to my Patreon, which could count as advertising? Anyway, the alteration was interesting, it just made it more formal. Maybe I should write books here, and they'll sound as formal as the ones I read!)

Also fascinating that it was near instantaneous.

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T23:05:43.262Z · LW · GW

No, that's only if you want to understand why a specific Lesswrong aficionado became wary of probabilistic thinking to the point of calling it a problem of the EA community. If you don't care about my opinions in general, you are welcome to take no action about it. He asked for my thoughts, and I provided them.

But the reference class of Diego's thoughts contains more thoughts that are wrong than ones that are true. So on priors, you might want to ignore them :p

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T22:45:43.304Z · LW · GW

US Patent No. 4,136,359: "Microcomputer for use with video display" - for which he was inducted into the National Inventors Hall of Fame.
US Patent No. 4,210,959: "Controller for magnetic disc, recorder, or the like"
US Patent No. 4,217,604: "Apparatus for digitally controlling PAL color display"
US Patent No. 4,278,972: "Digitally-controlled color signal generation means for use with display"

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T22:43:53.512Z · LW · GW

Basically because I never cared much for cryonics, even with the movie about me being made about it. Trailer:

https://www.youtube.com/watch?v=w-7KAOOvhAk

For me cryonics is like soap bubbles and contact improv. I like it, but you don't need to waste your time knowing about it.

But since you asked: I've tried to get rich people in contact with Robert McIntyre, because he is doing a great job and someone should throw money at him.

And me, for that matter. All my donors stopped earning to give, so I have no donor cashflow now and might have to "retire" soon - the Brazilian economy collapsed and they may cut my below-living-cost scholarship. EDIT: Yes, my scholarship was just suspended :( So I won't just be losing money, I'll be basically out of it, unfortunately. I also remind people that donating to individuals is way cheaper than donating to institutions - yes, I think so even now that I'm launching another institution. The truth doesn't change, even if it becomes disadvantageous to me.

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T22:29:49.090Z · LW · GW

See the link with a flowchart on 12.

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T22:23:45.797Z · LW · GW

I think you mistook my claim for sarcasm. I actually think I don't know much about AI (not nearly enough to make a robust assessment).

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T11:04:21.182Z · LW · GW

Yes I am.

Step 1: Learn Bayes (a toy sketch of Steps 1 and 2 follows this list)

Step 2: Learn reference class

Step 3: Read 0 to 1

Step 4: Read The Cook and the Chef

Step 5: Reason about why the billionaires are saying that the people who do it wrong are basically reasoning probabilistically

Step 6: Find the connection between that and reasoning from first principles, or the gears hypothesis, or whichever other term you have for when you use the inside view and actually think technically about a problem, from scratch, without looking at how anyone else did it.

Step 7: Talk to Michael Valentine, who has recently been reasoning about this and about how to impart it at CFAR workshops.

Step 8: Find someone who can give you a recording of Geoff Anders' presentation at EAGlobal.

Step 9: Notice how all those steps above were connected, become a Chef, set out to save the world. Good luck!
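A toy version of Steps 1 and 2, in code: start from a reference-class base rate, then update it with Bayes' theorem when inside-view evidence arrives. Every probability here is a made-up placeholder, purely for illustration:

```python
# Toy Bayes update starting from a reference-class base rate.
# All numbers are invented for illustration only.

base_rate = 0.05               # P(success), taken from a hypothetical reference class
p_signal_given_success = 0.6   # P(strong early traction | success) -- assumed
p_signal_given_failure = 0.1   # P(strong early traction | failure) -- assumed

# Bayes' theorem: P(success | signal) = P(signal | success) * P(success) / P(signal)
p_signal = (p_signal_given_success * base_rate
            + p_signal_given_failure * (1 - base_rate))
posterior = p_signal_given_success * base_rate / p_signal
print(f"Posterior probability of success: {posterior:.2f}")   # ~0.24
```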

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T10:53:50.898Z · LW · GW

I am particularly skeptical of transhumanism when it is described as changing the human condition, and the human condition is considered to be the mental condition of humans as seen from the human's point of view.

We can make the rainbow, but we can't do physics yet. We can glimpse where minds can go, but we have no idea how to precisely engineer them to get there.

We also know that happiness seems tightly connected to an area of the brain called the NAcc (nucleus accumbens), but evolution doesn't want you to hack happiness, so it put the damn NAcc right in the medial, slightly frontal area of the brain, deep inside, where fMRI is really bad and where you can't insert electrodes correctly. Also, evolution made sure that each person's NAcc develops epigenetically into different target areas, making it very, very hard to tamper with it to make you smile. And boy, do I want to make you smile.

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T10:47:10.441Z · LW · GW

Not really. My understanding of AI is far from grandiose; I know less about it than about my fields (Philo, BioAnthro) - I've merely read all of FHI, most of MIRI, half of AIMA, Paul's blog, maybe four popular and two technical books on related issues, and at most 60 papers on AGI per se; I don't code, and I have only a coarse-grained understanding of it. But in the little research and time I had to look into it, I saw no convincing evidence for a cap on the level of sophistication that a system's cognitive abilities can achieve. I have also not seen very robust evidence that would support the hypothesis of a fast takeoff.

The fact that we have not fully conceptually disentangled the dimensions of which intelligence is composed is mildly embarrassing, though, and it may be that AGI is a deus ex machina because actually - more as Minsky or Goertzel see it, less as MIRI or Lesswrong do - general intelligence will turn out to be a plethora of abilities that have no single common denominator, often superimposed in a robust way.

But for now, nobody who is publishing seems to know for sure.

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T10:33:34.065Z · LW · GW

EA is an intensional movement.

http://effective-altruism.com/ea/j7/effective_altruism_as_an_intensional_movement/

I concur with many other people that when you start off from a wide sample of aggregative consequentialist values and try to do the most good, you bump into AI pretty soon. As I told Stuart Russell a while ago, to explain why a philosopher-anthropologist was auditing his course:

My PhD will likely be a book on altruism, and any respectable altruist these days is worried about AI at least 30% of his waking life.

That's how I see it anyway. Most of the arguments for it are in "Superintelligence"; if you disagree with that, then you probably do disagree with me.

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T07:06:06.490Z · LW · GW

Very sorry about that, I thought he held the patent for some aspect of computers that had become widespread, in the same way Wozniak holds the patent for personal computers. This was incorrect. I'll fix it.

Comment by diegocaleiro on The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism. · 2015-11-29T07:01:07.232Z · LW · GW

The text is also posted at the EA forum here; there, all the links work.

Comment by diegocaleiro on Sidekick Matchmaking - How to tackle the problem? · 2015-10-23T19:40:47.986Z · LW · GW

I'm looking for a sidekick if someone feels that such would be an appropriate role for them. This is me for those who don't know me:

https://docs.google.com/document/d/14pvS8GxVlRALCV0xIlHhwV0g38_CTpuFyX52_RmpBVo/edit

And this is my flowchart/life;autobiography in the last few years:

https://drive.google.com/file/d/0BxADVDGSaIVZVmdCSE1tSktneFU/view

Nice to meet you! :)

Polymathwannabe asked: What would be your sidekick's mission?

A: It feels to me like that would depend A LOT on the person, the personality, our physical distance, availability, and interaction type. I feel that any response I gave would only filter valuable people away, which obviously I don't want to do. That said, I have had good experiences with people a little older than me, with a general interest in EA and the far future, and who have more than a single undergrad degree as academic background, mostly because I interact with academia all the time and many activities and ways of being are academia-specific.

Comment by diegocaleiro on Is my brain a utility minimizer? Or, the mechanics of labeling things as "work" vs. "fun" · 2015-08-28T15:27:47.116Z · LW · GW

see my comment.

Comment by diegocaleiro on Is my brain a utility minimizer? Or, the mechanics of labeling things as "work" vs. "fun" · 2015-08-28T15:26:36.671Z · LW · GW

My take is that what matters in fun versus work is where the locus of control is situated. That is, where does your subjective experience locate the source of your doing that activity?

If it comes from within, then you count it as fun. If it comes from the outside, you count it as work.

This explains your feeling, and explains the comments in this thread as well. When your past self sets goals for you, you are no longer the center of the locus of control. Then it feels like negatively connoted work.

That's how it is for me anyway.

Comment by diegocaleiro on Effective Altruism from XYZ perspective · 2015-07-09T02:48:35.332Z · LW · GW

http://diegocaleiro.com/2015/05/26/effective-altruism-as-an-intensional-movement/

Comment by diegocaleiro on [link] FLI's recommended project grants for AI safety research announced · 2015-07-01T22:50:12.979Z · LW · GW

That is false. Bostrom thought of FAI before Eliezer. Paul thought of the Crypto. Bostrom and Armstrong have done more work on orthogonality. Bostrom/Hanson came up with most of the relevant stuff in multipolar scenarios. Sandberg/EY were involved in the oracle/tool/sovereign distinction.

TDT, which is EY's work, does not show up prominently in Superintelligence. CEV, of course, does, and is EY's work. Lots of ideas in Superintelligence are causally connected to Yudkowsky, but no doubt there is more value from Bostrom there than from Yudkowsky.

Bostrom got 1,500,000 and MIRI, through Benja, got 250,000. This seems justified conditional on what has been produced by FHI and MIRI in the past.

Notice also that CFAR, through Anna, has received resources that will also be very useful to MIRI, since it will make potential MIRI researchers become CFAR alumni.

Comment by diegocaleiro on Effectively Less Altruistically Wrong Codex · 2015-06-16T21:09:39.099Z · LW · GW

My concern is that there is no centralized place where emerging and burgeoning new rationalists, strategists and thinkers can start to be seen and dinosaurs can come to post their new ideas.

My worry is about the lack of centrality, nothing to do with the central member being LW or not.

Comment by diegocaleiro on Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction · 2015-06-08T21:15:23.323Z · LW · GW

Would you also be willing to run a survey on Discussion about basing Main on upvotes instead of a mix of self-selection and moderation? As well as on any ideas people suggest here that seem interesting to you?

There could be a research section, an upvoted section and a discussion section, where the research section is also displayed within the upvoted, trending one.

Comment by diegocaleiro on Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction · 2015-06-08T20:54:06.588Z · LW · GW

The solutions were bad on purpose so that other people would come up with better solutions on the spot. I edited to clarify :)

Comment by diegocaleiro on FAI Research Constraints and AGI Side Effects · 2015-06-03T23:06:33.867Z · LW · GW

I just want to flag that, despite being simple, writings such as this one seem valuable to me, both as introductions to the concepts and so that new, more detailed branches are created by other researchers.

Comment by diegocaleiro on Confession Thread: Mistakes as an aspiring rationalist · 2015-06-03T16:58:11.694Z · LW · GW

You can carry it on by posting it monthly, there is no structure determining who creates threads. Like all else that matters in this world, it is done by those who show up for the job. I've made some bragging threads in the past noticing others didn't. Do the same for this :)

Comment by diegocaleiro on Confession Thread: Mistakes as an aspiring rationalist · 2015-06-02T19:02:03.664Z · LW · GW

True that.

Comment by diegocaleiro on Confession Thread: Mistakes as an aspiring rationalist · 2015-06-02T18:47:13.015Z · LW · GW

Arrogance: I caution you not to take this as advice for your own life, because frankly, arrogance goes a long, long, loooooong way. Most rationalists are less arrogant in person than they should be about their subject areas, and rationalist women who identify as female and are straight are even less frequently arrogant than the already low base rate. But some people are over-arrogant, and I am one of these. Over-arrogance isn't about the intensity of arrogance, it is about its non-selectivity. The problem I have always had, and have been told about again and again, isn't generalized arrogance; it is leaking the arrogance into domains in which I'm not actually worth a penny. To see with full clarity that one should have a detailed model of when to be confident, when arrogant, and when humble took me a mere fourteen days, eleven months and twenty-eight years, and counting.

Comment by diegocaleiro on Confession Thread: Mistakes as an aspiring rationalist · 2015-06-02T18:25:25.475Z · LW · GW

A Big Fish in a Small Pond: for many years I assumed it was better to be a big fish in a small pond than to try to be a big fish in the ocean. This can be decomposed into a series of mistakes, only part of which I have learned to overcome so far.

1) It is based on the premise that social rankings matter more than they actually do. Most of day-to-day life is determined by environment, and being in a better environment, surrounded by better and different people, is more valuable both experientially and in terms of output than being a big fish in a small pond.

2) It encouraged blind spots. The more dimensions in which I was the big fish, the more nearby dimensions in vector space I failed to optimize. The most striking one: having a high linguistic IQ and a large vocabulary made me care little about grammar and foreign languages.

3) One of the reasons I wanted to be big in a small pond was reading positive psychology showing that most people prefer a 50k income in a 25k-average world to a 75k income in a 100k-average world. I was unable to disentangle "empirical study", which serves to inform me, into two very distinct sets: "empirical study about how people actually feel in different situations" and "empirical study about how people judge abstract counterfactual situations with numbers attached to them". I was very proud of taking science seriously in my life (which in fact most people don't), but in my reckless youth I was taking in the part of science that is specifically about people being wrong without noticing.

4) It maximizes a unidimensional function, Max(deltaBigness), which doesn't capture the complexity and beauty of our actual multidimensional lives and feelings. There are millions of axes along which it is personally valuable to nudge, to push, to move, and to optimize; relative importance is a relatively unimportant one.

Comment by diegocaleiro on Concept Safety: What are concepts for, and how to deal with alien concepts · 2015-04-20T18:09:07.012Z · LW · GW

I much enjoyed your posts so far Kaj, thanks for creating them.

I'd like to draw attention, in this particular one, to

Viewed in this light, concepts are cognitive tools that are used for getting rewards.

to add a further caveat: though some concepts are related to rewards, and some conceptual clustering is done in a way that maps to the reward of the agent as a whole, much of what goes on in concept formation, simple or complex, is just the old "neurons that fire together wire together" saying. More specifically, if we are only calling "reward" what is a reward for the whole individual, then most concept formation will not be reward-related. At the level of neurons or neural columns, there are reward-like mechanisms taking place, no doubt, but it would be a mereological fallacy to assume that rewardness carries upward from parts to wholes.
There are many types of concepts for which indeed, as you contend, rewards are very important, and they deserve as much attention as those which cannot be explained merely by the idea of a single monolithic agent seeking rewards.
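As a concrete anchor for the "fire together, wire together" point above, here is a minimal Hebbian weight update, the simplest formal version of that saying; the activity patterns and learning rate are arbitrary illustrative values, and no reward signal appears anywhere in the rule:

```python
import numpy as np

# Minimal Hebbian learning: connections between co-active units strengthen.
# Patterns and learning rate are arbitrary illustrative values; there is no reward term.

patterns = np.array([
    [1, 1, 1, 0, 0],   # hypothetical "concept A": units 0-2 tend to fire together
    [0, 0, 0, 1, 1],   # hypothetical "concept B": units 3-4 tend to fire together
], dtype=float)

n_units = patterns.shape[1]
weights = np.zeros((n_units, n_units))
learning_rate = 0.1

for _ in range(50):
    for activity in patterns:
        weights += learning_rate * np.outer(activity, activity)  # "fire together, wire together"
        np.fill_diagonal(weights, 0.0)                            # no self-connections

print(weights)  # strong links inside each cluster, none between clusters
```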

Comment by diegocaleiro on Status - is it what we think it is? · 2015-04-01T01:27:03.990Z · LW · GW

If you are particularly interested in sexual status, I wrote about it before here, dispelling some of the myth.

Comment by diegocaleiro on Status - is it what we think it is? · 2015-04-01T01:17:50.899Z · LW · GW

Usually dominance is related to power that is maintained by aggression, stress or fear.

The usual search route will lead you to some papers: https://scholar.google.com/scholar?q=prestige+dominance&btnG=&hl=en&as_sdt=0%2C5&as_ylo=2009

What I would do is find some 2014-2015 papers and check their bibliographies, or ask the principal investigator which papers on the topic are the most interesting.

I have a standing interest in other primates and cetaceans as well, so I'd look for attempts to show that others have or don't have prestige.

Comment by diegocaleiro on Status - is it what we think it is? · 2015-03-31T06:30:10.944Z · LW · GW

The technical academic term for (1) is prestige and for (2) is dominance. Papers which distinguish the two are actually really interesting.

Comment by diegocaleiro on Status - is it what we think it is? · 2015-03-31T06:29:21.483Z · LW · GW

Status isn't strictly zero sum. Some large subset of sexual status is. Also humans have many different concomitant status hierarchies.

Comment by diegocaleiro on Superintelligence 29: Crunch time · 2015-03-31T06:23:45.355Z · LW · GW

Should the violin players on the Titanic have stopped playing the violin and tried to save more lives?

What if they could each have saved thousands of Titanics? What if there already was a technology that could play a deep, sad violin song in the background and project holograms of violin players playing in deep sorrow as the ship sank?

At some point, it becomes obvious that doing the consequentialist thing is the right thing to do. The question is whether the reader believes 2015 humanity has already reached that point or not.

We already produce beauty, art, truth, humor, narratives and knowledge at a much faster pace than we can consume them. The ethical grounds on which to act in any non-consequentialist way have lost much of their strength.

Comment by diegocaleiro on Superintelligence 29: Crunch time · 2015-03-31T06:15:20.568Z · LW · GW

Why not actual Fields medalists?

Tim Ferriss lays out a guide for how to learn anything really quickly, which involves contacting whoever was great at it ten years ago and asking them who is great at it now that shouldn't be.

Doing that for Fields medalists and other high achievers is plausibly extremely high value.

Comment by diegocaleiro on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-27T23:19:30.746Z · LW · GW

This would cause me to read Slate Star Codex and to occasionally comment. It may do the same for others.

This may be a positive outcome, though I am not certain of it.

Comment by diegocaleiro on Superintelligence 28: Collaboration · 2015-03-24T05:07:47.409Z · LW · GW

Hard-coded AI is less likely than ems, since ems that are copies or modified copies of other ems would instantly be aware that the race is happening, whereas most of the later stages of hard-coded AI could be concealed from strategic opponents for part of the period in which they would have made hasty decisions, if only they knew.

Comment by diegocaleiro on Superintelligence 28: Collaboration · 2015-03-24T05:05:39.342Z · LW · GW

There is a gender difference in resource-constraint satisfaction worth mentioning: males in most primate species, humans included, are less resource-constrained than females. The main reason females require fewer resources to be emotionally satisfied is that there is an upper bound on how many resources are needed to attract the males with the best genes, acquire their genes and parenting resources, have nearly as many children as possible, and take good care of those children and their children. For males, however, because there is competitive bargaining with females in which many males compete for reproductive access and mate-guarding, and because males can generate more offspring, there are many more ways in which resources are fungible with reproductive prowess, such as fathering children without interacting much with their mother but still providing resources for the kid, or paying some signaling cost to mate with as many apparently fertile and healthy females as possible. Accordingly, men are hardwired and softwired to seek fungible resources more frequently and more intensely than women.

Human satisfaction has diminishing marginal returns on resource quantity, but the two sexes form clearly distinct clusters in how fast those returns diminish.

Comment by diegocaleiro on Superintelligence 28: Collaboration · 2015-03-24T04:53:35.377Z · LW · GW

None of Miles's arguments resonates with me, basically because one counterargument could erase the pragmatic relevance of his points in one fell swoop:

The vast majority of expected value is on changing policies where the incentives are not aligned with ours. Cases where the world would be destroyed no matter what happened, or cases where something is providing a helping hand - such as the incentives he suggests - don't change where our focus should be. Bostrom knows that, and focuses throughout on cases where more consequences derive from our actions. It's ok to mention when a helping hand is available, but it doesn't seem ok to argue that given a helping hand is available we should be less focused on the things that are separating us from a desirable future.

Comment by diegocaleiro on Superintelligence 28: Collaboration · 2015-03-24T04:40:59.759Z · LW · GW

What are some more recent papers or books on the topic of Strategy and Conflict that take a Schellingian approach to the dynamics of conflict?

I find it hard to believe that the best book on any topic of relevance was written in 1981.