Copied from the Heterodox Effective Altruism Facebook group (https://www.facebook.com/groups/1449282541750667/):
Diego Caleiro: I've read the comments and now speak as me, not as Admin:
It seems to me that the Zurich people were right to exclude Roland from their events. Let me lay out the reasons I have, based on extremely partial information:
1) If Roland brings back topics that are not EA, such as 9/11 and Thai prostitutes, it is his burden both to be clear and to justify why those topics deserve to be there.
2) The politeness of EAs is in great part the reason that some SJWs managed to infiltrate it. Having regulations and rules that determine who can be kicked out is bad, because such rules are a weapon that SJWs have been known to wield with great care and precision. That is, I much prefer a group where people are kicked out without justification to one in which a reason is given (I say this as someone who was kicked out of at least two physical spaces related to EA, so it does not come lightly). Competition drives out SJWs, so I would recommend that Roland create a new meeting that is more valuable than its predecessor and attract people to it. (This community was created by me, with me as an admin, precisely for those reasons. I believed that I could legitimately help generate more valuable debate than previous EA groups, including the one that I myself created but feared would be taken over by more SJWish types. This one is protected.)
3) Another reason to be pro-kicking-out: Tartre and I run a Facebook chat group where I make a point of never explaining why anyone is kicked out. As far as I can tell, it has the best density of interesting topics of any Facebook chat related to rationalists and EAs. It is necessary to be selective.
4) That said: being excluded from social groups is horrible; it feels like dying to a lot of people, and it makes others fear it happening to them like the plague. So it allows for the kind of pernicious coordination in (DeScioli 2013) and full-blown Girardian scapegoating. There's a balance that needs to be struck to prevent SJWs from taking over little bureaucracies and then mobbing people out, thus tyrannizing others into compliance with whatever is their current-day flavour of acceptable speech.
5) Because being excluded from social groups is horrible, HEAs need to create a welcoming network of warmth and kindness towards those who are excluded or accused. We don't want people to feel like they are dying; we don't want their hippocampi compromised and their serotonin levels lowered. Why? Because this happens to a LOT of people when they transition from being politically left-leaning to being politically right-leaning (or when they take the sexual-strategy Red Pill). If we, HEAs, side with the accusers, the scapegoaters, the mob, we will be one more member of the ochlocracy. This is both anti-utilitarian, as the harm to the excluded party is nearly unbearable, and anti-heterodox, as in all likelihood the person was excluded at least in part for not sharing a belief or behavioral pattern with those doing the excluding. So I highly recommend that, on priors, HEAs come forth in favor of the person.
During my own little scapegoating event, Divia Caroline Eden was nice enough to give me a call and inquire about my psychological health, make sure I wasn't going to kill myself, and that sort of thing (people literally do that; if you have never been scapegoated, you cannot fathom what it is like, it cannot be expressed in words), and maybe 4 other people messaged me online showing equal niceness and appreciation.
Show that to Roland now, and maybe he'll return the favor when and if it happens to you. As an HEA, you are already in the at-risk group.
Eric Weinstein argues strongly against returns being at 20th-century levels, and says they are now vector fields, not scalars. I concur (not that I matter).
The Girardian conclusion and the general approach of this text make sense.
But the best strategy is a forgiving tit for tat (tit for two tats, or something like that), which is worth emphasizing.
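(A minimal sketch of what I mean, purely illustrative: a "tit for two tats" player cooperates by default and only retaliates after two consecutive defections. The function name and the C/D encoding are my own assumptions here, not taken from any particular paper.)

```python
# Minimal sketch of a forgiving strategy (tit for two tats): cooperate by
# default, and only defect after the opponent has defected twice in a row.
# This is an illustration, not a claim about which exact variant is optimal.

def tit_for_two_tats(opponent_history):
    """Return 'C' (cooperate) or 'D' (defect) given the opponent's past moves."""
    if len(opponent_history) >= 2 and opponent_history[-2:] == ['D', 'D']:
        return 'D'  # retaliate only after two consecutive defections
    return 'C'      # otherwise forgive and cooperate

# Example: a single defection is forgiven, two in a row are not.
print(tit_for_two_tats(['C', 'D']))       # -> 'C'
print(tit_for_two_tats(['C', 'D', 'D']))  # -> 'D'
```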
Also it seems you are putting some moral value on long-term mating that doesn't necessarily reflect our emotional systems or our evolutionary drives. Short-term mating is very common and seen in most societies where there are enough resources to go around and enough intersexual geographical proximity. Recently there are more and stronger arguments emerging against female short-term strategies. But it would be a far cry to claim that we already know decisively that the expected value for a female of short-terming is necessarily negative. It may depend on fetal androgens, and it may be that the measurements made so far took biased samples to calculate the cost of female promiscuity. In the case of males, as far as I know, there is literally no data associating short-terming with long-term QALY loss, none. But I'd be happy to be corrected.
Notice also that the moral question is always about the sex you are not. If you are female, and the data says it doesn't affect males, then you are free to do whatever. If you are male, and the data says short-terming females become unhappy in the long term, then the moral responsibility for that falls on you, especially if there's information asymmetry.
This sounds cool. Somehow it reminded me of an old, old essay by Russell on architecture.
It's not that relevant, so only look if you are curious.
I am now a person who moved during adulthood, and I can report that past me was right, except he did not account for rent.
It seems to me the far self is more orthogonal to your happiness. You can try to optimize for maximal long term happiness.
Interesting that I conveyed that. I agree with Owen Cotton-Barratt that we ought to focus efforts now on sooner paths (fast takeoff soon) and not on the other paths, because more resources will be allocated to FAI in the future, even if fast takeoff soon is a low-probability scenario.
I personally work on inserting concepts, and moral concepts in particular, into AGI because for almost anything else I could do there are already people who will do it better, and this is an area that intersects with a lot of my knowledge areas while still being AGI-relevant. See the link in the comment above with my proposal.
Not my reading. My reading is that Musk thinks people should not consider the historical probability of succeeding as a spacecraft startup (0%) but should instead reason from first principles, such as thinking about which materials a rocket is made from and then building up the costs from the ground up.
You have correctly identified that I wrote this post while very unhappy. The comments, as you can see from their lighthearted tone, I wrote while pretty happy.
Yes, I stand by those words even now (that I am happy).
I am more confident that we can produce software that can classify images, music and faces correctly than I am that we can integrate multimodal aspects of these modules into a coherent being that thinks it has a self, goals, identity, and that can reason about morality. That's what I tried to address in my FLI grant proposal, which was rejected (by the way, correctly so: it needed the latest improvements, and clearly - if they actually needed it - AI money should reach Nick, Paul and Stuart before our team). We'll be presenting it in Oxford, tomorrow?? Shhh, don't tell anyone; here, just between us, you get it before the Oxford professors ;) https://docs.google.com/document/d/1D67pMbhOQKUWCQ6FdhYbyXSndonk9LumFZ-6K6Y73zo/edit
We have unconfirmed, simplified hypotheses with nice drawings for how microcircuits in the brain work. They ignore more than a million things (literally: they just have to ignore specific synapses, the multiplicity of synaptic connections, etc... if you sum those things up and look at the model, I would say it ignores about that many things). I'm fine with simplifying assumptions, but the cortical microcircuit models are a butterfly flying in a hurricane.
The only reason we understand V1 is because it is a retinotopic inverted map that has been through very few non-linear transformations - same for the tonotopic auditory areas - but by V4 we are already completely lost (for those who don't know, the brain has between 100 and 500 areas depending on how you count, and we have a medium guess of a simplified model that applies well to two of them, and moderately to some 10-25). And even if you could say which functions V4 participates in more, this would not tell you how it does it.
Oh, so boring..... It was actually me myself screwing up a link, I think :(
Skill: being censored by people who hate censorship. Status: not yet accomplished.
Wow, that's so cool! My message was censored and altered.
Lesswrong is growing an intelligentsia of its own.
(To be fair to the censoring part, the message contained a link directly to my Patreon, which could count as advertising? Anyway, the alteration was interesting, it just made it more formal. Maybe I should write books here, and they'll sound as formal as the ones I read!)
Also fascinating that it was near instantaneous.
No, that's if you want to understand why a specific Lesswrong aficionado became wary of probabilistic thinking to the point of calling it a problem of the EA community. If you don't care about my opinions in general, you are welcome to take no action about it. He asked for my thoughts, I provided them.
But the reference class of Diego's thoughts contains more thoughts that are wrong than that are true. So on priors, you might want to ignore them :p
US Patent No. 4,136,359: "Microcomputer for use with video display", for which he was inducted into the National Inventors Hall of Fame.
US Patent No. 4,210,959: "Controller for magnetic disc, recorder, or the like"
US Patent No. 4,217,604: "Apparatus for digitally controlling PAL color display"
US Patent No. 4,278,972: "Digitally-controlled color signal generation means for use with display"
Basically because I never cared much for cryonics, even with the movie about it being made about me. Trailer:
https://www.youtube.com/watch?v=w-7KAOOvhAk
For me cryonics is like soap bubbles and contact improv. I like it, but you don't need to waste your time knowing about it.
But since you asked: I've tried to get rich people in contact with Robert McIntyre, because he is doing a great job and someone should throw money at him.
And me, for that matter. All my donors stopped earning to give, so I have no donor cashflow now; I might have to "retire" soon - the Brazilian economy collapsed and they may cut my below-living-cost scholarship. EDIT: Yes, my scholarship was just suspended :( So I won't be just losing money, I'll be basically out of it, unfortunately. I also remind people that donating to individuals is way cheaper than donating to institutions - yes, I think so even now that I'm launching another institution. The truth doesn't change, even if it becomes disadvantageous to me.
See the link with a flowchart on 12.
I think you mistook my claim for sarcasm. I actually think I don't know much about AI (not nearly enough to make a robust assessment).
Yes I am.
Step 1: Learn Bayes
Step 2: Learn reference class forecasting (a toy sketch of Steps 1 and 2 appears after this list)
Step 3: Read Zero to One
Step 4: Read The Cook and the Chef
Step 5: Reason about why the billionaires are saying that the people who do it wrong are basically reasoning probabilistically
Step 6: Find the connection between that and reasoning from first principles, or the gears hypothesis, or whichever other term you have for when you use the inside view and actually think technically about a problem, from scratch, without looking at how anyone else did it.
Step 7: Talk to Michael Valentine about it, who has been reasoning about this recently and how to impart it at CFAR workshops.
Step 8: Find someone who can give you a recording of Geoff Anders' presentation at EAGlobal.
Step 9: Notice how all those steps above were connected, become a Chef, set out to save the world. Good luck!
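Here is the toy sketch promised in Step 2, purely illustrative and with made-up numbers: the reference class (outside view) supplies the prior, and Bayes' rule updates it on inside-view evidence. The 5% base rate and the two likelihoods below are assumptions for illustration only, not claims about any real data.

```python
# A toy illustration of Steps 1-2: a reference class (outside view) gives the
# prior, and Bayes' rule updates it on evidence. All numbers are made up.

def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | evidence) from a prior and two likelihoods."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Outside view: suppose 5% of projects in the reference class succeed (assumed).
prior = 0.05
# Inside view evidence: a strong first-principles cost analysis, which (we
# assume) successful projects produce 60% of the time and failed ones 10%.
posterior = bayes_update(prior, p_evidence_given_h=0.6, p_evidence_given_not_h=0.1)
print(round(posterior, 3))  # -> 0.24: the evidence moves you, but the base rate still matters
```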
I am particularly skeptical of transhumanism when it is described as changing the human condition, and the human condition is considered to be the mental condition of humans as seen from the human's point of view.
We can make the rainbow, but we can't do physics yet. We can glimpse at where minds can go, but we have no idea how to precisely engineer them to get there.
We also know that happiness seems tightly connected to an area of the brain called the NAcc, but evolution doesn't want you to hack happiness, so it put the damn NAcc right in the medial, slightly frontal area of the brain, deep inside, where fMRI is really bad and where you can't insert electrodes correctly. Also, evolution made sure that each person's NAcc develops epigenetically into different target areas, making it very, very hard to tamper with it to make you smile. And boy, do I want to make you smile.
Not really. My understanding of AI is far from grandiose; I know less about it than about my fields (Philo, BioAnthro) - I've merely read all of FHI, most of MIRI, half of AIMA, Paul's blog, maybe 4 popular and two technical books on related issues, and at most 60 papers on AGI per se; I don't code, and I only have a coarse-grained understanding of it. But in the little research and time I had to look into it, I saw no convincing evidence for a cap on the level of sophistication that a system's cognitive abilities can achieve. I have also not seen very robust evidence that would support the hypothesis of a fast takeoff.
The fact that we have not fully conceptually disentangled the dimensions of which intelligence is composed is mildly embarrassing though, and it may be that AGI is a deus ex machina because actually - more as Minsky or Goertzel would have it, less as MIRI or Lesswrong - General Intelligence will turn out to be a plethora of abilities that don't have a single common denominator, often superimposed in a robust way.
But for now, nobody who is publishing seems to know for sure.
EA is an intensional movement.
http://effective-altruism.com/ea/j7/effective_altruism_as_an_intensional_movement/
I concur with many other people that when you start off from a wide sample of aggregative consequentialist values and try to do the most good, you bump into AI pretty soon. As I told Stuart Russell a while ago, to explain why a philosopher-anthropologist was auditing his course:
My PhD will likely be a book on altruism, and any respectable altruist these days is worried about AI at least 30% of his waking life.
That's how I see it anyway. Most of the arguments for it are in Superintelligence; if you disagree with that, then you probably do disagree with me.
Very sorry about that. I thought he held the patent for some aspect of computers that had become widespread, in the same way Wozniak holds the patent for the personal computer. This was incorrect. I'll fix it.
The text is also posted at the EA forum here, where all the links work.
I'm looking for a sidekick if someone feels that such would be an appropriate role for them. This is me for those who don't know me:
https://docs.google.com/document/d/14pvS8GxVlRALCV0xIlHhwV0g38_CTpuFyX52_RmpBVo/edit
And this is my flowchart/life autobiography of the last few years:
https://drive.google.com/file/d/0BxADVDGSaIVZVmdCSE1tSktneFU/view
Nice to meet you! :)
Polymathwannabe asked: What would be your sidekick's mission?
A: It feels to me like that would depend A LOT on the person, the personality, our physical distance, availability and interaction type. I feel that any response I gave would only filter valuable people away, which obviously I don't want to do. That said, I have had good experiences with people a little older than me, with a general interest in EA and the far future, and who have more than a single undergrad degree as academic background, mostly because I interact with academia all the time and many of my activities and ways of being are academia-specific.
see my comment.
My take is that what matters in fun versus work is where the locus of control is situated. That is, where does your subjective experience tell you the source of you doing that activity comes from.
If it comes from within, then you count it as fun. If it comes from the outside, you count it as work.
This explains your feeling, and explains the comments in this thread as well. When your past self sets goals for you, you are no longer the center of the locus of control. Then it feels like negatively connoted work.
That's how it is for me anyway.
http://diegocaleiro.com/2015/05/26/effective-altruism-as-an-intensional-movement/
That is false. Bostrom thought of FAI before Eliezer. Paul thought of the Crypto. Bostrom and Armstrong have done more work on orthogonality. Bostrom/Hanson came up with most of the relevant stuff in multipolar scenarios. Sandberg/EY were involved in the oracle/tool/sovereign distinction.
TDT, which is EY's work, does not show up prominently in Superintelligence. CEV, of course, does, and is EY's work. Lots of ideas in Superintelligence are causally connected to Yudkowsky, but no doubt there is more value from Bostrom there than from Yudkowsky.
Bostrom got 1,500,000 and MIRI, through Benja, got 250,000. This seems justified conditional on what has been produced by FHI and MIRI in the past.
Notice also that CFAR, through Anna, has received resources that will also be very useful to MIRI, since it will make potential MIRI researchers become CFAR alumni.
My concern is that there is no centralized place where emerging and burgeoning new rationalists, strategists and thinkers can start to be seen and dinosaurs can come to post their new ideas.
My worry is about the lack of centrality, nothing to do with the central member being LW or not.
Would you be willing to also run a survey on Discussion about basing Main on upvotes instead of a mix of self-selection and moderation? As well as on any other ideas people suggest here that seem interesting to you?
There could be a Research section, an Upvoted section and a Discussion section, where the Research section is also displayed within the upvoted, trending one.
The solutions were bad on purpose, so that other people would come up with better solutions on the spot. I edited to clarify :)
I just want to flag that, despite being simple, I feel like writings such as this one are valuable both as introductions to concepts and so that new, more detailed branches get created by other researchers.
You can carry it on by posting it monthly; there is no structure determining who creates threads. Like all else that matters in this world, it is done by those who show up for the job. I've made some bragging threads in the past after noticing others didn't. Do the same for this :)
True that.
Arrogance: I caution you not to take this as advice for your own life, because frankly, arrogance goes a long, long, loooooong way. Most rationalists are less arrogant in person than they should be about their subject areas, and rationalist women who identify as female and are straight are arrogant even less frequently than the already low base rate. But some people are over-arrogant, and I am one of them. Over-arrogance isn't about the intensity of arrogance, it is about its non-selectivity. The problem I have always had, and been told about again and again, isn't generalized arrogance; it is leaking the arrogance into domains where I'm not actually worth a penny. To see this with full clarity - that one should have a detailed model of when to be confident, when arrogant, and when humble - took me a mere fourteen days, eleven months and twenty-eight years, and counting.
A Big Fish in a Small Pond: for many years I assumed it was better to be a big fish in a small pond than to try to be a big fish in the ocean. This can be decomposed into a series of mistakes, only part of which I have learned to overcome so far.
1) It is based on the premise that social rankings matter more than they actually do. Most of day-to-day life is determined by environment, and being in a better environment, surrounded by better and different people, is more valuable experientially and in terms of output than being a big fish in a small pond.
2) It encouraged blind spots. The more dimensions in which I was the big fish, the more nearby dimensions in vector space I failed to optimize. The most striking one: having a high linguistic IQ and a large vocabulary made me care little about grammar and foreign languages.
3) One of the reasons for me to want to be big in a small pond was reading positive psychology showing that most people prefer a 50k income in a 25k-average world to a 75k income in a 100k-average world. I was unable to disentangle "empirical study", which serves to inform me, into two very distinct sets: "empirical study about how people actually feel in different situations" and "empirical study about how people judge abstract counterfactual situations with numbers attached to them". I was very proud of taking science seriously in my life (which in fact most people don't), but in my reckless youth I was taking in the part of science that is specifically about people being wrong without noticing.
4) It has a unidimensional function, Max(deltaBigness), which doesn't capture the complexity and beauty of our actual multidimensional lives and feelings. There are millions of axes along which it is personally valuable to nudge, to push, to move, and to optimize; relative importance is a relatively unimportant one.
I have much enjoyed your posts so far, Kaj; thanks for creating them.
I'd like to draw attention, in this particular one, to
Viewed in this light, concepts are cognitive tools that are used for getting rewards.
to add a further caveat: though some concepts are related to rewards, and some conceptual clustering is done in a way that maps to the reward of the agent as a whole, much of what goes on in concept formation, simple or complex, is just the old "fire together, wire together" saying. More specifically, if we only call "reward" what is a reward for the whole individual, then most concept formation will not be reward-related. At the level of neurons or neural columns there are reward-like mechanisms taking place, no doubt, but it would be a mereological fallacy to assume that rewardness carries upward from parts to wholes.
There are many types of concepts for which, as you contend, rewards are indeed very important, and they deserve as much attention as those which cannot be explained merely by the idea of a single monolithic agent seeking rewards.
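(To make the parts-versus-wholes point concrete, a minimal, purely illustrative sketch: a plain Hebbian update strengthens a connection whenever two units are co-active, with no reward term anywhere, while a reward-modulated variant only consolidates that same change when a global reward signal arrives. The learning rate and variable names below are arbitrary assumptions, not a model of cortex.)

```python
# Purely illustrative: plain Hebbian learning has no notion of reward, while a
# reward-modulated rule gates the same co-activity term by a global signal.
# Learning rate and variable names are arbitrary assumptions.

LEARNING_RATE = 0.1

def hebbian_update(weight, pre_activity, post_activity):
    """'Fire together, wire together': co-activity alone changes the weight."""
    return weight + LEARNING_RATE * pre_activity * post_activity

def reward_modulated_update(weight, pre_activity, post_activity, reward):
    """The same co-activity term, but only consolidated when a reward arrives."""
    return weight + LEARNING_RATE * reward * pre_activity * post_activity

w = 0.5
w_hebb = hebbian_update(w, pre_activity=1.0, post_activity=1.0)        # grows regardless of reward
w_gated = reward_modulated_update(w, 1.0, 1.0, reward=0.0)             # unchanged without reward
print(w_hebb, w_gated)  # -> 0.6 0.5
```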
If you are particularly interested in sexual status, I wrote about it before here, dispelling some of the myths.
Usually dominance is related to power that is maintained by aggression, stress or fear.
The usual search route will lead you to some papers: https://scholar.google.com/scholar?q=prestige+dominance&btnG=&hl=en&as_sdt=0%2C5&as_ylo=2009
What I would do is find some 2014-2015 papers and check their bibliographies, or ask the principal investigator which papers on the topic are most interesting.
I have a standing interest in other primates and cetaceans as well, so I'd look for attempts to show that others have or don't have prestige.
The technical academic term for (1) is prestige and for (2) is dominance. Papers which distinguish the two are actually really interesting.
Status isn't strictly zero sum. Some large subset of sexual status is. Also humans have many different concomitant status hierarchies.
Should the violin players on the Titanic have stopped playing the violin and tried to save more lives?
What if they could have saved thousands of Titanics each? What if there already were a technology that could play a deep, sad violin song in the background and project holograms of violin players playing in deep sorrow as the ship sank?
At some point, it becomes obvious that doing the consequentialist thing is the right thing to do. The question is whether the reader believes 2015 humanity has already reached that point or not.
We already produce beauty, art, truth, humor, narratives and knowledge at a much faster pace than we can consume them. The ethical grounds on which to act in any non-consequentialist way have lost much of their strength.
Why not actual Fields Medalists?
Tim Ferriss lays out a guide for how to learn anything really quickly, which involves contacting whoever was great at it ten years ago and asking them who is great that should not be.
Doing that for Fields Medalists and other high achievers is plausibly extremely high value.
This would cause me to read Slate Star Codex and to occasionally comment. It may do the same for others.
This may be a positive outcome, though I am not certain of it.
Hard-coded AI is less likely than ems, since ems which are copies or modified copies of other ems would instantly be aware that the race is happening, whereas most of the later stages of hard-coded AI could be concealed from strategic opponents for part of the period in which they would have made hasty decisions, if only they had known.
There is a gender difference in resource-constraint satisfaction worth mentioning: males in most primate species, including humans, are less resource-constrained than females. The main reason females require fewer resources to be emotionally satisfied is that there is a limited upper bound on how many resources are needed to attract the males with the best genes, acquire their genes and parenting resources, have nearly as many children as possible, and take good care of those children and their children. For males, however, because there is competitive bargaining with females in which many males compete for reproductive access and mate-guarding, and because males can generate more offspring, there are many more ways in which resources can be fungible with reproductive prowess, such as fathering children without interacting much with their mother but still providing resources for the kid, or paying some signaling cost to mate with as many apparently fertile and healthy females as possible. Accordingly, men are hard- and soft-wired to seek fungible resources more frequently and more intensely than women.
Human satisfaction has diminishing marginal returns on resource quantity, but humans fall into two clearly distinct clusters in how quickly those returns diminish.
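(A toy illustration of the two clusters, with a made-up functional form and parameters: both curves show diminishing returns on resources, but one saturates much sooner than the other. Nothing here is fitted to data.)

```python
# Toy illustration of two clusters of diminishing returns on resources: both
# satisfaction curves are concave, but one saturates much earlier. The
# functional form and parameters are made up for illustration only.
import math

def satisfaction(resources, curvature):
    """Concave (diminishing-returns) satisfaction; higher curvature saturates sooner."""
    return 1 - math.exp(-curvature * resources)

for r in (1, 2, 4, 8):
    fast = satisfaction(r, curvature=1.0)   # satiates quickly on resources
    slow = satisfaction(r, curvature=0.1)   # keeps gaining from more resources
    print(r, round(fast, 2), round(slow, 2))
# The marginal gain from doubling resources shrinks fast for the first cluster
# and much more slowly for the second.
```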
None of Miles's arguments resonates with me, basically because one counterargument could erase the pragmatic relevance of his points in one fell swoop:
The vast majority of expected value is on changing policies where the incentives are not aligned with ours. Cases where the world would be destroyed no matter what happened, or cases where something is providing a helping hand - such as the incentives he suggests - don't change where our focus should be. Bostrom knows that, and focuses throughout on cases where more consequences derive from our actions. It's ok to mention when a helping hand is available, but it doesn't seem ok to argue that given a helping hand is available we should be less focused on the things that are separating us from a desirable future.
What are some more recent papers or books on the topic of Strategy and Conflict that take a Schellingian approach to the dynamics of conflict?
I find it hard to believe that the best book on any topic of relevance was written in 1981.