Posts

Comments

Comment by WhySpace_duplicate0.9261692129075527 on LW 2.0 Strategic Overview · 2017-09-15T08:29:07.524Z · LW · GW

I'm not really sure how shortform stuff could be implemented either, but I have a suggestion on how it can be used: jokes!

Seriously. If you look at Scott's writing, for example, one of the things which makes it so gripping is the liberal use of amusing phrasing and mildly comedic exaggeration. Not the sort of thing that makes you actually laugh, but the sort of thing that is mildly amusing. And I believe he specifically recommended it in his blog post on writing advice. He didn't phrase his reasoning quite like this, but I think of it as little bits of positive reinforcement to keep your System 1 happy while your System 2 does the analytic thinking needed to digest the piece.

Now, obviously this could go overboard, since memetics dictates that short, likeable things will get upvoted faster than long, thoughtful things, outcompeting them. But, I don't think we as a community are currently at risk of that, especially with the moderation techniques described in the OP.

And, I don't mean random normal "guy walks into a bar" jokes. I mean the sort of thing that you see in the comments on old LW posts, or on Weird Sun Twitter. Jokes about Trolley Problems and Dust Specks and Newcomb-like problems and negative Utilitarians. "Should Pascal accept a mugging at all, if there's even a tiny chance of another mugger with a better offer?" Or maybe "In the future, when we're all mind-uploads, instead of arguing about the simulation argument we'll worry about being mortals in base-level reality. Yes, we'd have lots of memories of altering the simulation, but puny biological brains are error-prone, and hallucinate things all the time."

I think a lot of the reason social media is so addictive is the random dopamine injections. People could go to more targeted websites for more of the same humor, but those get old quickly. The random mix of serious info intertwined with joke memes provides novelty and works well together. The ideal for a more intellectual community should probably be more like 90-99% serious stuff, with enough fun stuff mixed in to avoid akrasia kicking in and pulling us toward a more concentrated source.

The implication for implementation would be to intersperse short-form stuff between long-form stuff, to break things up and give readers a quick break.

Comment by WhySpace_duplicate0.9261692129075527 on Open thread, Apr. 17 - Apr. 23, 2017 · 2017-05-04T09:33:27.999Z · LW · GW

Note to self, in case I come back to this problem: the Vienna Circle fits the bill.

Comment by WhySpace_duplicate0.9261692129075527 on Neuralink and the Brain’s Magical Future · 2017-04-26T06:26:34.451Z · LW · GW

:)

Honestly, there are a bunch of links I don't click, because the 2 or 3 word titles aren't descriptive enough. I'm a big fan of the community norm on more technically minded subreddits, where you can usually find a summary in one of the top couple comments.

So, I'm doing what I can to encourage this here. But mostly, I thought it was important on the AI front, and wanted to give a summary which more people would actually read and discuss.

Comment by WhySpace_duplicate0.9261692129075527 on Neuralink and the Brain’s Magical Future · 2017-04-24T08:48:51.364Z · LW · GW

Here are some thoughts on the viability of Brain Computer Interfaces. I know nothing, and am just doing my usual reality checks and initial exploration of random ideas, so please let me know if I'm making any dumb assumptions.

They seem to prefer devices in the blood vessels, due to the low invasiveness. The two specific form factors mentioned are stents and neural dust. Whatever was chosen would have to fit in the larger blood vessels, or flow freely through all of them. Just for fun, let's choose the second, much narrower constraint, and play with some numbers.

Wikipedia says white blood cells can be up to 30 μm in diameter. (Also, apparently there are multiple kinds of white blood cells. TIL.) I'd guess that we wouldn't want our neural dust to be any larger than that, if we want to be able to give it to someone and still reverse the procedure later without any surgery. The injection should be fine, but if you wanted to filter these things back out of your blood, you'd have to do something like giving blood, but with a magnet or something to filter out the neural dust. So, what could we cram into 30 μm?

Well, my first hit when searching "transistors per square mm" is an article titled "Intel Now Packs 100 Million Transistors in Each Square Millimeter", so let's go with that. I realize Elon's ~10 year time horizon would give us another ~6 Moore's law doublings, but if they did an entire run of a special chip just for this, then maybe they don't want to pay top dollar for state of the art equipment, so let's stick with 100M/mm^2. That'd give us on the order of 10k-100k transistors to work with, if we filled the entire area with transistors and nothing else.
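As a rough back-of-the-envelope check on that estimate (a minimal sketch; it treats the mote as a flat 30 μm disc tiled edge to edge with transistors, which is generous):

    import math

    density_per_mm2 = 100e6              # the ~100M transistors/mm^2 figure above
    diameter_um = 30                     # white-blood-cell-sized dust mote
    radius_mm = (diameter_um / 2) / 1000
    area_mm2 = math.pi * radius_mm ** 2  # ~7e-4 mm^2

    print(f"{density_per_mm2 * area_mm2:,.0f} transistors")  # ~70,000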

But, looking at most electronics, they are more than just a chip. Arduinos and cellphones and motherboards may be built around a chip, but the chip itself has a relatively small footprint on the larger PCB. So, I'm probably missing something which would be incredibly obvious to someone with more hardware experience. (Is all the other stuff just for interfacing with other components and power supplies? In principle, could most of it be done within the chip, if you were willing to do a dedicated manufacturing run just for that one device, rather than making more modular and flexible chips which can be incorporated into a range of devices?)

If we assume it'd be powered and transmit data electromagnetically, it'd also need an antenna and an induction coil. I have a hunch that both of these suffer from issues with the square-cube law, so maybe that's a bad idea. The neural dust article mentioned that the (mm scale) devices both reported information and received power ultrasonically, so maybe the square-cube law is the reason. (If not, we might also run into the diffraction limit, and not have any wavelengths of light which were short enough to couple to antennas that size, but still long enough to penetrate the skull without ionizing atoms.)

I like the idea of ultrasonic stuff because acoustic waves travel through tissue without depositing much energy. So, you get around the absorption problem photons have, and don't have to literally x-ray anyone's brain. Also, cranial ultrasounds are already a thing for infants, although they have to switch to transcranial Doppler for adults, because our skulls have hardened. Nearby pieces of neural dust would be monitoring the same neurons, and so would give off their signals at about the same time, boosting the signal but maybe smearing it out a little in time.

So, let's play with some numbers for piezoelectric devices instead. (I assume that's what their ultrasonic neural dust must be using, at least. They are switching between electricity and motion somehow, and piezoelectrics are the name for the solid-state way of doing that. I can't picture them having tiny speakers with electromagnets on flexible speaker cones. The Wikipedia page on transducers doesn't mention other options.)

Quartz crystals are already used for timing in electronics, so maybe the semiconductor industry already has the ability to make transducers if they wanted to. (I'd be surprised if they didn't, since quartz is just crystalline silicon dioxide. Maybe they can't get the atomic lattice into the right orientation consistently, though.) If you couldn't transmit and receive simultaneously without interfering, you'd need a tiny capacitor to store energy for at least 1 cycle. I don't know how small quartz crystals could be made, or whether size is even the limiting factor. Maybe sufficiently small piezoelectrics can't even put out strong enough pulses to be detectable on an ultrasound, or require too much power to be safely delivered ultrasonically? I don't know, but I'd have to play with a bunch of numbers to get a good feel.

I don't really know where to start, when discussing monitoring neuron firings. Could it be done electromagnetically, since they should make an instantaneous electromagnetic field? Or would the signal be too weak near a blood vessel? Apparently each neuron firing changes the concentration of Na, K, Cl, and Ca in the surrounding blood. Could one of these be monitored? Maybe spectrally, with a tiny LED of the appropriate wavelength and a photodetector? I think such things are miniaturizable in principle, but I'm not sure we can make them with existing semiconductor manufacturing techniques, so the R&D would be expensive. We probably don't have anything which emits at the exact wavelength we need for spectroscopy though, and even if we did, I bet the LED would need voltage levels which would be hard to deliver without adding a voltage transformer or whatever the DC equivalent is.

Or, can we dump all the fancy electronics altogether? Could we do something as simple as a clay particle (tiny rock) coated with a dispersant or other surfactant, so that changes in the surrounding chemistry cause the collapse of the double layer, making the clay particles flocculate together? Would such clumps of clay particles be large enough and have high enough density to show up on an ultrasound or other device? Obviously this wouldn't let us force a neuron to fire, but it might be a cheap way of detecting them.

Maybe the electronics could be added later, if modifying surface charge and chemistry is enough to make a neuron fire. Neurotransmitters affect neuron firings somehow, if I understand correctly, so maybe chain a bunch of neurotransmitters to some neural dust as functional groups on the end of polymer chains, then change surface charge to make the chains scrunch up or fan out?

I only know just enough about any of this to get myself into trouble, so if it doesn't look like I know what I'm talking about, I probably don't.

(Sorry to spam comments. I'm separating questions out to keep the discussion tidy.)

Comment by WhySpace_duplicate0.9261692129075527 on Neuralink and the Brain’s Magical Future · 2017-04-24T03:34:03.443Z · LW · GW

The article only touches on it briefly, but suggests that faster AI takeoffs are worse, and that "fast" is only relative to the fastest human minds.

Has there been much examination of the benefits of slow takeoff scenarios, or takeoffs that happen after human enhancements become available? I vaguely recall a MIRI fundraiser saying that they would start putting marginal resources toward investigating a possible post-Age of Em takeoff, but I have no idea if they got to that funding goal.

Personally, I don't see Brain-Computer Interfaces as useful for AI takeoffs, at least in the near term. We can type ~100 words per minute, so typing out a 40,000-word novel would take only about 400 minutes, yet actually writing one takes far longer. So, we aren't really I/O bound, as Elon believes. We're limited by the number of neurons devoted to a given task.
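Spelling out that arithmetic (a minimal sketch using the round numbers above):

    words_per_minute = 100    # rough typing speed
    novel_words = 40_000      # lower bound for a "short novel"

    typing_minutes = novel_words / words_per_minute
    print(typing_minutes)     # 400.0 -- under 7 hours of pure typing, yet writing
                              # a novel takes months, so output bandwidth isn't
                              # the bottleneck; generating the words is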

Early BCIs might make some tasks much faster, like long division. Since some other tasks really are I/O bound, they'd help some with those. But, we wouldn't be able to fully keep up with AI unless we had full-fledged upgrades to all our cognitive architecture.

So, is almost keeping up with AI likely to be useful, or are slow takeoffs just as bad? Are the odds of throwing together a FAI in the equivalent of a month any better than in a day? What % of those panicked emergency FAI activities could be sped up by better computer user interfaces/text editors, personal assistants, a device that zapped your brain every time it detected akrasia setting in, or by a RAM upgrade to the brain's working memory?

(sorry to spam. I'm separating questions out to keep the discussion tidy.)

Comment by WhySpace_duplicate0.9261692129075527 on Neuralink and the Brain’s Magical Future · 2017-04-24T02:25:00.499Z · LW · GW

TL;DR of the article:

This piece describes a lot of why Elon Musk wanted to start Neuralink, how Brain-Computer Interfaces (BCIs) currently work, and how they might be implemented in the future. It's a really, really broad article, and aims for breadth while still having enough depth to be useful. If you already have a grasp of the evolution of the brain, Dual Process Theory, parts of the brain, how neurons fire, etc., you can skip those parts, as I have below.

AI is dangerous, because it could achieve superhuman abilities and operate at superhuman speeds. The intelligence gap would be much smaller if we also had access to such abilities. Therefore, we should attempt this if possible.

This might be possible, despite how extremely limited and highly invasive existing BCIs are. Opening the skull is obviously way too invasive for most people, but the blood vessels offer a possible minimally invasive solution. They are essentially a highway which goes directly to every neuron in the brain. Current methods monitor at most ~100 neurons, or have low temporal resolution. 1,000,000 neurons is probably the tipping point where it would stop being an alternative to keyboard/screen input and output, and start being transformative.

Neuralink is exploring many possibilities, and probably won't narrow to just one any time soon. However, options might include "neural dust", or stents in the blood vessels. Just as dyes have made fine cell structures visible under microscopes, and genetically engineering bioluminescent genes into living animals has made cells glow when active, Neuralink would need a way for such a device to detect individual neuron firings on a large scale.

To do this, the inserts themselves only need to be able to:

  1. React differently to the electrical discharge associated with a nearby neuron firing, or to other changes associated with neuron firings, like sodium and potassium levels.

  2. Have that difference be detectable from outside the skull. (I'd divide this into active methods, like emitting light in a wavelength which penetrates the skull, or passive changes in properties detectable from the outside, like radioactive isotopes which cluster together based on variables in blood flow.)

(The piece doesn't make this distinction, but I thought it would be useful for better discussion and understanding.)

Neuralink, of course, hasn't narrowed the specifics down very much (and will probably pivot several times, in my opinion). However, they will start out offering something largely similar to the sorts of BCIs available to people with paralysis or sensory problems. Elon hopes that if everything goes smoothly, in a decade they would have something which could provide a useful feature to someone without such disabilities, if the FDA would allow it.

They also hope to be able to eventually influence neural firings, so that we could supply information to the brain, rather than just reading information out. This would require something which could be influenced from the outside, and then influence nearby neurons. We can already put an electric field through the whole brain to minimize seizures, but for meaningful inputs this would also have to be done at the neuron level.

Why you should read it anyway:

It's >35,000 words. (For comparison, the cutoff for "short novel" is 40,000.) That said, it's a good read, and I recommend it if you want to understand why Elon Musk might think a BCI might increase our odds of surviving an AI takeoff scenario.

A lot of it is still hand-waving, and it doesn't make clear that we don't necessarily need full self-replicating autonomous nanobots or whatever. Since it doesn't provide a specific architecture, but just surveys what might be possible, I think it's easy to give it an uncharitable reading. I've tried to steel-man the phrasing here, but I think if we focus on tangible, near-term concepts, it can be illustrative of what is possible.

If you read this with a critical eye, you'll just note that they haven't narrowed down to one architecture yet, and complain that their lack-of-an-architecture can't possibly work. The point is to convince lay people that this might even be possible, not to convince them that Neuralink will succeed, but the comments I've seen so far have just been skepticism of Neuralink.

Instead, I'd encourage you to read with an eye toward what could be done with a stent or neural dust, and then critically examine the more tangible challenge of how small each of those possible capabilities could be made. What could be done passively? What could be done if inductively powered? How small a blood vessel could various devices fit through? Will those shrink with Moore's law, or are they physics-constrained?

Such questions will generate the possible concrete architectures which you can then apply a critical lens to. Don't bother reading if you just want to be critical of the exploratory activity itself. It won't even put up a fight.

Comment by WhySpace_duplicate0.9261692129075527 on Open thread, Apr. 17 - Apr. 23, 2017 · 2017-04-20T02:28:01.108Z · LW · GW

TL;DR: What are some movements you would put in the same reference class as the Rationality movement? Did they also spend significant effort trying not to be wrong?

Context: I've been thinking about SSC's Yes, We Have Noticed The Skulls. They point out that aspiring Rationalists are well aware of the flaws in straw Vulcans, and actively try to avoid making such mistakes. More generally, most movements are well aware of the criticisms of at least the last similar movement, since those are the criticisms they are constantly defending against.

However, searching "previous " in the comments doesn't turn up any actual examples.

Full question: I'd like to know if anyone has suggestions for how to go about doing reference class forecasting to get an outside view on whether the Rationality movement has any better chance of succeeding at its goals than other, similar movements. (Will EA have a massive impact? Are we crackpots about Cryonics, or actually ahead of the curve? More generally, how much weight should I give to the Inside View, when the Outside View suggests we're all wrong?)

The best approach I see is to look at past movements. I'm only really aware of Logical Positivism, and maybe Aristotle's Lyceum, and I have a vague idea that something similar probably happened in the Enlightenment, but don't know the names of any smaller schools of thought which were active in the broader movement. Only the most influential movements are remembered though, so are there good examples from the past ~century or so?

And, how self-critical were these groups? Every group has disagreements over the path forward, but were they also critical of their own foundations? Did they only discuss criticisms made by others, and make only shallow, knee-jerk criticisms, or did they actively seek out deep flaws? When intellectual winds shifted, and their ideas became less popular, was it because of criticisms that came from within the group, or from the outside? How advanced and well-tested were the methodologies used? Were any methodologies better-tested than Prediction Markets, or better grounded than Bayes' theorem?

Motive: I think on average, I use about a 50/50 mix of outside and inside view, although I vary this a lot based on the specific thing at hand. However, if the Logical Positivists not only noticed the previous skull, but the entire skull pile, and put a lot of effort into escaping the skull-pile paradigm, then I'd probably be much less certain that this time we finally did.

Comment by WhySpace_duplicate0.9261692129075527 on Avoiding Emotional Dominance Spirals · 2017-04-01T18:20:29.445Z · LW · GW

I'm not so sure. Would your underlying intuition be the same if the torture and death was the result of passive inaction, rather than of deliberate action? I think in that case, the torture and death would make only a small difference in how good or bad we judged the world to be.

For example, consider a corporate culture with so much of this dominance hierarchy that it has a high suicide rate.

Also:

Moloch whose buildings are judgment! ... Lacklove and manless in Moloch! ... Moloch who frightened me out of my natural ecstasy!

... Real holy laughter in the river! They saw it all! the wild eyes! the holy yells! They bade farewell! They jumped off the roof! to solitude! waving!

Meditations on Moloch/Howl

Doesn't seem like a difference of kind, and maybe not even of degree. (The suicide rate is a couple percent, and higher in industrialized countries if I recall. What percent of the citizens of Oceania are tortured to death? ~2%?) I think 1984 is mainly shocking because of status quo bias. (But I haven't read it, so I'm probably missing some stronger points against that world.)

Most of the badness seems to be from the general state of both worlds, rather than from the occasional person tortured to death on the side. That's just the tip of the iceberg. It's a small, but obvious, part of much deeper problems. That's why EA doesn't use suicide rate or incarceration rate as their primary metrics to optimize for. They're just symptoms.

Comment by WhySpace_duplicate0.9261692129075527 on ‘Crucial Considerations and Wise Philanthropy’, by Nick Bostrom · 2017-04-01T16:08:01.762Z · LW · GW

I'd add that it also starts to formalise the phenomenon where one's best judgement oscillates back and forth with each layer of an argument. It's not clear what to do when something seems a strong net positive, then a strong negative, then a strong positive again after more consideration. If the value of information is high, but it's difficult to make any headway, what should we even do?

This is especially common for complex problems like xrisk. It also makes us extremely prone to bias, since we by default question conclusions we don't like more than ones we do.

Comment by WhySpace_duplicate0.9261692129075527 on What's up with Arbital? · 2017-03-30T03:49:02.023Z · LW · GW

This is really sad. I'm sorry to hear things didn't work out, but I'm still left wondering why not.

I guess I was really hoping for a couple thousand+ word post-mortem, describing the history of the project, and which hypotheses you tested, with a thorough explanation of the results.

If you weren't getting enough math input, why do you think that throwing more people at the problem wouldn't generate better content? Just having a bunch of links to the most intuitive and elegant explanations, gathered in one place, would be a huge help to both readers and writers. Students trying to learn are already doing this through blind googling, so the marginal work to drop the links is low.

Pulling all the info together into a good explanation still requires one dedicated person, but perhaps that task can be broken down into chunks too. Like, once one version is written, translating it for non-mathy people should be relatively easy. Same for condensing things for mathy people.

But, why wouldn't adding more mathematicians mean a few would be good at and interested in writing new articles? Where did you do outreach? What did you do? There are entire communities, scattered across the web, who exist to try and learn and teach math. Have you tried partnering with any of them, or recruiting members?

If not, why do you think it won't work? Do you see promising alternative approaches, or are good explanations impossible even in principle?

Sorry for the flood of questions. I've just been waiting with bated breath for Arbital to stop pushing me away and start pulling people in. I even linked some people, but felt guilty about putting a strain on your overloaded servers before you were ready for the general public.

Comment by WhySpace_duplicate0.9261692129075527 on Could utility functions be for narrow AI only, and downright antithetical to AGI? · 2017-03-19T07:55:26.636Z · LW · GW

I don't see any reason why AI has to act coherently. If it prefers A to B, B to C, and C to A, it might not care. You could program it with that "utility function."*

If not, maybe the A-liking aspects will reprogram B and C out of its utility function, or maybe not. What happens would depend entirely on the details of how it was programmed.

Maybe it would spend all the universe's energy turning our future light cone from C to B, then from B to A, and also from A to C. Maybe it would do this all at once, if it was programmed to follow one "goal" before proceeding to the next. Or maybe different parts of the universe would be in different stages, all at the same time. Think of it like a light-cone blender set to purée.

Our default preferences seem about that coherent, but we're able to walk and talk, so clearly it's possible. It explains a lot of the madness and incoherence of the way the world is structured, certainly. Luckily, we seem to value coherence, or at least are willing to sacrifice on having our cake and eating it too when it becomes clear that we can't have it both ways. It's possible a subtly incoherent AGI would operate at cross purposes for a long time before discovering and correcting its utility function, if it valued coherence.

However, MIRI is trying to program a sane AGI, not to explore all possible ways an AI can be insane. Economists like to simplify human motives into idealized rational agents, because they are much, much simpler to reason about. The same is true for MIRI, I think.

I've given this sort of thing a little thought, and have an Evernote note I can turn into a LW post, if there is interest.

* I use the term "utility function" broadly here. I guess "programming" would be more correct, but even an A>B>C>A AI bears some rough resemblance to a utility function, even if it isn't coherent.

Comment by WhySpace_duplicate0.9261692129075527 on Open Thread, March. 6 - March 12, 2017 · 2017-03-10T06:11:38.726Z · LW · GW

I rather like this way of thinking. Clever intuition pump.

What are we actually optimizing the level-two map for, though?

Hmmm, I guess we're optimizing our meta-map to produce accurate maps. It's mental cartography, I guess. I like that name for it.

So, Occam's Razor and formal logic are great tools of philosophical cartographers. Scientists sometimes need a sharper instrument, so they crafted Solomonoff induction and Bayes' theorem.

Formal logic is a special case of Bayesian updating, where only p=0 and p=1 values are allowed. There are third alternatives, though. Instead of binary Boolean logic, where everything must be true or false, it might be useful to use a 3rd value for "undefined". This is three-valued logic (the Logical Positivists' habit of treating unverifiable statements as meaningless has a similar flavor). You can add more and more values, and assign them to whatever you like. At the extreme is Fuzzy Logic, where statements can have any truth value between 0 and 1. Apparently there's also something which Bayes is just a special case of, but I can't recall the name.
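A minimal sketch of that special-case claim (the numbers are invented; a single binary hypothesis H and one piece of evidence E):

    def bayes_update(prior, p_e_given_h, p_e_given_not_h):
        """Posterior P(H|E) from the prior P(H) and the two likelihoods."""
        numerator = p_e_given_h * prior
        evidence = numerator + p_e_given_not_h * (1 - prior)
        return numerator / evidence if evidence > 0 else prior

    print(bayes_update(0.5, 0.9, 0.2))  # ~0.82: ordinary credences move with evidence
    print(bayes_update(1.0, 0.9, 0.2))  # 1.0: "true" stays true...
    print(bayes_update(0.0, 0.9, 0.2))  # 0.0: ...and "false" stays false, i.e. formal logic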

Of all these possible mental cartography tools though, Bayes seems to be the most versatile. I'm only dimly aware of the ones I mentioned, and probably explained them a little wrong. Anyone care to share thoughts on these, or share others they may know? Has anyone tried to build a complete ontology out of them the way Eliezer did with Bayes? Are there other strong metaphysical theories from philosophy which don't have a formal mathematical corollary (yet)?

Comment by WhySpace_duplicate0.9261692129075527 on Open Thread, March. 6 - March 12, 2017 · 2017-03-06T19:36:49.818Z · LW · GW

True. Maybe we could still celebrate our minor celebrities more, along with individual good work, to avoid orbiting too much around any one person. I don't know what the optimum incentive gradient is between small steps and huge accomplishments. However, I suspect that on the margin more positive reinforcement is better along the entire length, at least for getting more content.

(There are also benefits to adversarial review and whatnot, but I think we're already plenty good at nitpicking, so positive reinforcement is what needs the most attention. It could even help generate more long, thoughtful counterarguments, and so improve the adversarial review and the dialectic.)

Comment by WhySpace_duplicate0.9261692129075527 on Open Thread, March. 6 - March 12, 2017 · 2017-03-06T18:21:43.658Z · LW · GW

Awesome link, and a fantastic way of thinking about how human institutions/movements/subcultures work in the abstract.

I'm not sure the quote conveys the full force of the argument out of that context though, so I recommend reading the full thing if the quote doesn't ring true with you (or even if it does).

Comment by WhySpace_duplicate0.9261692129075527 on Am I Really an X? · 2017-03-06T18:02:19.296Z · LW · GW

I agree that philosophy and neuroscience haven't confirmed that the qualia I perceive as red is the same thing as the qualia you experience when you look at something red. My red could be your blue, etc. (Or, more likely, completely unrelated sensations chosen randomly from trillions of possibilities.) Similarly, we can't know exactly what it's like to be someone else, or to be an animal or something.

However, it's perfectly reasonable to group all possible human experiences into one set, and group all possible things that an ant might experience in another. If you scanned the brains of a trillion ants and a trillion humans, and ran them as digital simulations, it would be easy for someone to look at them and know which was which.

Similarly, if you scanned 3^^^3 artists and 3^^^3 programmers, I'd bet that you could find certain patterns and systematic differences in how they think. After looking at all those minds, you could easily look at another one and tell whether they were an artist or a programmer. Same for men/women, or republicans/democrats, etc.

This is despite potentially huge differences in the internal subjective experiences of programmers. It's not that there's one single "what it's like to be a programmer" experience or anything, but there is a single set of all programmer minds. This includes qualia and programming methods of thought, and whatever else.

Maybe you could even measure these differences with even crude MRI scans of people's brains. It would be interesting to scan a thousand cis men after certain verbal prompts asking how they feel about their gender identity. If OP's hypothesis is true, then confident trans men should look pretty similar to confident cis men, and trans men worrying "am I really an X" should look a lot like cis men questioning their own gender identity.

You should get about the same result if you ran the experiment again on cis and trans women. Obviously there would be some confounders, like hormone levels and any physical differences between people born biologically male or female. However, this sort of thing seems easy enough to control for. The bigger issue I see is that all those MRIs would cost a fortune, and we may not even have technology with high enough resolution to see the differences we're looking for.

But, doing philosophy is cheap, and it seems to me that hypotheses like these have decent odds of being true. I agree that reasoning about individual differences may be as hopeless as wondering what it’s like to be a bat, but reasoning about huge classes of mind states seems entirely valid.

Comment by WhySpace_duplicate0.9261692129075527 on On not getting a job as an option · 2017-02-28T16:28:13.991Z · LW · GW

that still rules out the globehopping, couchsurfing lifestyle.

Not necessarily. I'd be fine with it if my girlfriend decided to hitchhike around Europe for a month or two, and I'm pretty sure she'd be fine with me doing the same. There's no reason the one with the job couldn't take a vacation in the middle, too.

If the unemployed partner did this twice a year, for 2 months at a time, that'd be 1/3 of their time spent globetrotting. If they did this 3x a year, (2 months home, then 2 months exploring, then 2 months home again) that'd be pushing it, but might be stable long term if they could find ways to make sure the working party didn't feel used or left out.

Comment by WhySpace_duplicate0.9261692129075527 on The Semiotic Fallacy · 2017-02-23T20:58:58.915Z · LW · GW

This was a useful article, and it's nice to know the proper word for it. Let me see if I can add to it slightly.

Maybe a prisoner is on death row, and if they run away they are unlikely to suffer the consequences, since they'll be dead anyway. However, even knowing this, they may still decide to spend their last day on earth nursing bruises, because they value the defiance itself far more than any pain that could be inflicted on them. Perhaps they'd even rather die fighting.

It looks like you don't reflectively endorse actions taken for explicitly semiotic reasons, and lean toward more pure consequentialism. Based only on what you've said, semiotic actions aren't fallacious when they yield outside benefits in the long run, but are fallacious when they don't lead to other good things. (Because you treat semiotic acts as only instrumentally valuable, rather than as terminal values.)

However, it seems likely that some semiotic acts can be good in and of themselves. That is, we reflectively endorse them, rather than just doing them because evolution gave us an impulse to signal which we have a hard time fighting. Semiotic impulse is certainly a human universal, and therefore a part of our current utility function, and it seems plausible that it will survive intact in some form even after more careful examination of our values.

It seems like the sorts of things we do for explicitly symbolic reasons are more likely to fall into this category than normal subconscious signaling. If we didn't endorse it to some degree, we'd just make sure not to be conscious of doing it, and then keep doing it anyway. To be aware that we're doing it, it can't conflict too much with our positive self-image, or societal values, or anything like that.

Of course, just because we naively support a semiotic act explicitly doesn't mean we still will after closer examination. Maybe we think engagement rings are a touching form of costly signaling at first, but once we understand more about the signaling dynamics at play making us do such things, we decide that conspicuous displays of consumption make society far worse off. You may then decide not to feed Moloch, and try to lessen the keeping up with the Joneses effect.

Personally, I'm rather a fan of the Apollo program, and the idea that long after humanity has killed itself off, the Voyager probe may still survive drifting among the stars, with our last surviving words inscribed in gold.

Comment by WhySpace_duplicate0.9261692129075527 on Increasing GDP is not growth · 2017-02-17T21:33:57.703Z · LW · GW

Agreed. I'd love to see even more of all of these sorts of things, but the low margin nature of the industry makes this somewhat difficult to attack directly, so there isn't anywhere near as much money being invested in that direction as I would like.

I believe NASA has gotten crop yields high enough that a single person can be fed off of only ~25 m^2 of land, (figure may be off, or may be m^3 or something, but that's what I vaguely recall.) but that would have been with fancy hydroponic/aquaponic/aeroponic setups or something, and extremely high crop density. It would be awesome to see fully automated vertical greenhouses pumping out GMO produce for almost 0 cost.

I recently saw someone joke about engineering GMO wheat as an invasive species to out-compete grass. If we wanted to, I suppose we could also replace all the planet's trees with fruit trees, and build ourselves a garden of Eden, with an absurd surplus of food, available for free. That's probably a little extreme, considering that some people are rather attached to nature as it is, but maybe we'll terraform other planets like that?

Just some musings on paradise engineering. It's interesting to consider various post-scarcity economies where things we work hard for are as common as air.

Comment by WhySpace_duplicate0.9261692129075527 on Increasing GDP is not growth · 2017-02-17T21:07:07.067Z · LW · GW

A job is a cost

Agreed. When I said the "cost to local jobs" I was being informal, but referring to the (supposed) increase in unemployment as Walmart displaces local, less efficient small businesses.

Paying people to do a job which can be eliminated is like paying people to dig holes and fill them back in. I'd rather just give them the money than pay them to do useless work, but I'll take the second option over them being unemployed.

As an interesting side note, I think this might put me on the opposite side of the standard anti-Walmart argument. The meme argues that Walmart not paying its workers a living wage, and making it difficult to unionize, forces the government to step in and provide aid, and that this in effect subsidizes Walmart.

However, because Walmart sells mainly to the poor, I am in favor of subsidizing them in any way that passes through to the poor and doesn’t get skimmed off the top. Maybe that would mean I’d even be against a law forcing them to pay $10/hr or some such, if the benefits to the employees didn’t outweigh the net drawbacks to the customers.

Mainly I just find it depressing that all current political narratives seem to ignore these complexities, and boil down to “Walmart bad” or “markets good” or whatever. Maybe some more intelligent conversations happen behind closed doors, where no one can hear politicians make sane concessions to the other side.

Comment by WhySpace_duplicate0.9261692129075527 on Increasing GDP is not growth · 2017-02-17T19:44:49.584Z · LW · GW

these are called "recessions" and ... "depressions".

Ha, very good point. Our current society is largely built around growth, and when growth stops the negative effects absolutely do trickle down, even to people who don't own stocks. In fact, companies were counting on those increases, and so have major issues when they don't materialize, and need to get rid of workers to cut costs.

I will mention that through most of history and prehistory, the economic growth rate has been much, much, smaller. I haven't read it, so I can't vouch for its quality, but apparently the book The End of Growth: Adapting to Our New Economic Reality suggests that economic growth can't continue indefinitely due to physics limitations, and lays out a framework for transitioning to a post-growth economy. I have no idea how gentle or unpleasant such a transition might actually be. (Also, note that I am hopeful that we can avoid resource limitations by transitioning to a space based economy, and am nowhere near as pessimistic as I think the authors are likely to be.)

China.

China did indeed achieve massive benefits from industrialization. There's a lot of evidence that maximizing economic growth is an excellent way to play catch-up and obtain modern amenities for your population. Perhaps it's even the fastest theoretically possible way, since access to capital is the limiting factor for improved quality of life, and selling cheap stuff gets you lots of capital. I don't think developing countries should try communism or anything like that, unless for some reason they expect it to result in higher economic growth, since the data suggests that free markets are much better for them.

I would, however, suggest that the price of basic amenities appears to me to be a limiting factor in the quality of life of poor people in developed nations, and that increases in national wealth tend not to translate into proportional increases in purchasing power for them, although there is still some gain. (As I said before, I should really look into the details, though.) I see 2 basic classes of solutions:

  1. You can try to funnel more goods and money to them. This might be done through tax structures, aid programs, education, basic income, etc. Either you try and improve their earning potential, or give them things directly, but either way they wind up with more. The end result is that they can purchase more such amenities at the same price, or perhaps a little cheaper due to more economies of scale and more competition for those items.

  2. You can try to funnel more R&D into the sorts of things that the poor want than a free market would otherwise do. Most of the ways of doing this will cut into GDP somewhat, but maybe there are some public-good-type things that would outperform the market, but where the benefits are difficult for one company to capture. A dollar spent on specific types of education, for example, may increase GDP by more than 1 dollar. However, since it can be difficult to capture a return on investment,^[1] we have a tragedy of the commons scenario, and government or some powerful entity has to step up and foot the bill for the common good, if we want things like that. (Note that I'm not sure that this is still true on the margin, just that if we cut all funding for education, GDP would drop by more than the amount saved.)

No one

I was being a bit hyperbolic there, but you'll note that I followed it with 2 examples of startups which might in the future actually be cheaper than what the poor currently use. (3D printing might remove labor costs from construction, and Soylent has aspirations of making food into a utility. I probably should have said so specifically.)

Walmart is a good point. I’m not sure whether the benefits from cheaper goods outweigh the cost to local jobs, but I’m sure we’ve both heard the complaints. That’s getting dangerously close to talking politics, so I’d prefer to avoid getting into details, but I’d be interested if anyone knows of any academic research or cost-benefit analyses.

Uber may be cheaper than taxis, and AirBnB may be cheaper than hotels, but the poor don't use taxis or hotels. I am hopeful that self-driving cars will make transportation cheap enough that the poor benefit, though.

My point wasn't that the poor aren't any better off decade by decade. That appears to be false. My point is that they aren't 5% better off each year, even though the economic growth rate is maybe 7.5-ish percent, with maybe 2.5% inflation. So, most (but not all) of that growth is going into sectors which don't benefit the poor much.


[1] Interestingly, this appears to be precisely what Signal Data Science's business model is. They teach you in exchange for a fraction of your future salary. However, perhaps due to irrationality, there doesn't seem to be a wider market for this sort of thing.

Comment by WhySpace_duplicate0.9261692129075527 on Increasing GDP is not growth · 2017-02-17T17:29:44.873Z · LW · GW

I'd like to take this a step further, even. If you are a utilitarian, for example, what you really care about is happiness, which is only distantly linked to things like GDP and productivity, at least in developed nations. People who have more money spend more money,^[citation_needed] so economic growth is disproportionately focused on better meeting the desires of those with more money. Maybe the economy doubles every couple decades, but that doesn't mean that the poor have twice as much too. I would be interested to know precisely how much more they actually do have, perhaps as measured by decreasing inflation adjusted rent prices, food cost, utility costs, used car prices, bus fare, etc.

Only a tiny fraction of startups are aimed at making rent cheaper or groceries affordable, although such benefits may sometimes trickle down. I'm somewhat hopeful for things like 3D printed luxury homes for the wealthy or Soylent for techies who want to countersignal.

If the rich really were utility monsters capable of enjoying each dollar of luxury items at least as much as the poor would enjoy the dollar, then there wouldn't be a problem according to a pure hedonic utilitarian. (Although, a desire for Fairness is apparently a human universal, according to moral foundations theory, so actual humans would be uncomfortable with this, even if their Liberty value was stronger, and overrode the concerns with fairness.)

It appears that money really can buy happiness for the poor, and that's still somewhat true for the middle class, but once you get to around upper middle class, it becomes extremely difficult to even measure additional gains in happiness. It's not clear to me that there even are gains beyond that point, but feel free to read the article on Happiness Economics yourself, particularly the sections on the effects of income on individuals, and the GDP/GNP section.

In fact, some countries appear to be much happier than you'd predict just based on knowing their GDP, while others are much less happy. This TED talk makes the case for optimizing for Social Progress Index rather than GDP, but I believe the field of Happiness Economics also uses things like Gross National Happiness, Satisfaction with Life Index, and the World Happiness Report, and I don't really know enough to have a strong opinion between them. I just think something in that general direction is a better metric than GDP or productivity.

Comment by WhySpace_duplicate0.9261692129075527 on How to not earn a delta (Change My View) · 2017-02-14T19:29:08.886Z · LW · GW

Here, have a related podcast from You Are Not So Smart.

TL;DR:

A scientist realized that the Change My View subreddit is basically a pile of already-structured arguments, with precisely what changed the person's mind clearly labeled with a "Δ". He decided to data mine it, and look at what correlated with changed minds.

Conclusions:

  • Apparently people with longer, more detailed, better structured initial views were more likely to award a delta. (Maybe that's just because they changed their mind on one of the minor points though, and not the bigger topic? IDK.)

  • The more people joined the debate, and the longer the comments and comment chains, the more likely the poster was to award a delta. (More and better arguments? I find it interesting that so few people changed their mind on first exposure to someone else's considerations, and that it takes several back-and-forths to change a view. Maybe they're going to the meta-level to decide whether it's valid evidence? Maybe it just takes some dialog to grok the concept? Maybe people are usually aware of the simplest forms of counterarguments to their positions? I'd be interested to know more, but maybe we should be striving for more of a Socratic dialogue. I think there was a recent Eliezer FB post on good arguments being 4+ layers deep, which seems relevant.)

  • Using the word "we" and insults hurt the odds. (Ingroup/outgroup dynamics, I assume.)

Comment by WhySpace_duplicate0.9261692129075527 on Open thread, Feb. 13 - Feb. 19, 2017 · 2017-02-13T20:59:01.338Z · LW · GW

If you read it, I'd be interested to know what specific techniques they endorse, and how those differ from the sorts of things LW writes.

The general 4 categories of goals/subgoals Wikipedia lists seem right, though. I've seen people get stuck on 3 without having any idea what the physical problem was (2), and without more than a 1-hour meeting to set a "strategy" (1) to solve the problems that weren't understood.

  1. In consideration of a vision or direction...

  2. Grasp the current condition.

  3. Define the next target condition.

  4. Move toward that target condition iteratively, which uncovers obstacles that need to be worked on.

Comment by WhySpace_duplicate0.9261692129075527 on Open thread, Feb. 13 - Feb. 19, 2017 · 2017-02-13T20:48:47.123Z · LW · GW

Someone on Brain Debugging Discussion (the former LW Facebook group) runs a channel called Story Brain. He decomposes movies and such, and tries to figure out what makes them work and why we like certain things.

Comment by WhySpace_duplicate0.9261692129075527 on The types of manipulation on vote-based forums · 2017-02-13T20:31:40.554Z · LW · GW

It seems weird to me to talk about reddit as a bad example. Look at /r/ChangeMyView, /r/AskScience, /r/SpaceX, etc, not the joke subs that aren't even trying for epistemic honesty. /r/SpaceX is basically a giant mechanism that takes thousands of nerds and geeks in on one end, and slowly converts them into dozens of rocket scientists, and spits them out the other side. For example, this is from yesterday. Even the majority that never buy a textbook or take a class learn a lot.

I think this is largely because Reddit is one of the best available architectures for sorting out the gems from the rest, when there is a critical mass of people who want gems. If you want more gems, you need to put more dirt through the filter.

The exception to this rule is the default subreddits, because everyone can vote, and not just those with high standards. An ideal solution would be to ask people to predict whether something will get, say, a 1-5 star rating from mods/curators, as a means of signal boosting. If you take a weighted average of these predictions, weighted by each person's historical accuracy, you can just use that as the rating, and spare the mods from having to review everything themselves.
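Here's a minimal sketch of that accuracy-weighted scheme (the names and weights are invented; a real system would also re-estimate each person's accuracy whenever the mods rate a sample of items):

    def crowd_rating(predictions, accuracy):
        """Accuracy-weighted average of users' predicted 1-5 star ratings."""
        total_weight = sum(accuracy[u] for u in predictions)
        if total_weight == 0:
            return None  # no trusted predictors; fall back to mod review
        return sum(accuracy[u] * p for u, p in predictions.items()) / total_weight

    # Two historically accurate users outweigh one poor predictor.
    print(crowd_rating({"alice": 5, "bob": 4, "carol": 1},
                       {"alice": 0.9, "bob": 0.8, "carol": 0.2}))  # ~4.2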

Really, I like extending this to multiple axes, rather than just lumping everything into one. For example, some topics are only new to new people, and such threads can be good places to link to older discussions and maybe even build on them. However, older members may have little interest, and may not want to have to engage with commenters who aren't familiar with older discussions. Arbital seems to be moving in this direction, just by having more coherent chains of prerequisite concepts, rather than whatever couple links the author thought to include in an essay.

Just some musings.

Comment by WhySpace_duplicate0.9261692129075527 on Pretending to be Wise · 2017-02-11T18:55:46.103Z · LW · GW

Nick Bostrom's Apostasy post

For anyone who comes this way in the future, I found Nick Bostrom's post through a self-critique of Effective Altruism.

Comment by WhySpace_duplicate0.9261692129075527 on The Social Substrate · 2017-02-10T06:40:45.023Z · LW · GW

I rather like this concept, and probably put higher credence on it than you. However, I don't think we are actually modeling that many layers deep. As far as I can tell, it's actually rare to model even 1 layer deep. I think your hypothesis is close, but not quite there. We are definitely doing something, but I don't think it can properly be described as modeling, at least in such fast-paced circumstances. It's something close to modeling, but not quite it. It's more like what a machine learning algorithm does, I think, and less like a computer simulation.

Models have moving parts, and diverge rapidly at points of uncertainty, like how others might react. When you build a model, it is a conscious process, and requires intelligent thought. The model takes world states as inputs, and simulates the effects these have on the components of the model. Then, after a bunch of time-consuming computation, the model spits out a play-by-play of what we think will happen. If there are any points of uncertainty, the model will spit out multiple possibilities stemming from each, and build up multiple possible branches. This is extremely time-consuming, and resource intensive.

But there's a fast, System 1 friendly way to route around needing a time-consuming model: just use a look-up table.^[1] Maybe run the time-consuming model a bunch of times for different inputs, and then mentally jot down the outputs for quick access later, on the fly. Build a big 2xn lookup table, with model inputs in one column, and results in the other. Do the same for every model you find useful. Maybe have one table for a friend's preferences: inputting tuna fish outputs gratitude (for remembering her preferences). Inputting tickling outputs violence.
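Here's a toy sketch of that lookup-table idea (the entries are the made-up examples above; as the footnote at the bottom notes, a real mind would fuzzily match the closest trigger rather than look up an exact key):

    # Cached input -> output tables, built from experience or from past runs of the slow model.
    friend_preferences = {"tuna fish": "gratitude", "tickling": "violence"}
    tap_table = {"friend being an ass": "reprimand him"}  # Trigger -> Action Plan

    def react(situation, table, fallback="run the slow, conscious model"):
        # System 1: constant-time recall of a cached answer, no simulation at lookup time.
        return table.get(situation, fallback)

    print(react("tickling", friend_preferences))  # violence
    print(react("rude joke misread", tap_table))  # run the slow, conscious model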

Perhaps this is why we obsess over stressful situations, going over all the interpretations and everything we could have done differently. We're building models of worrying situations, running them, and then storing the results for quick recall later. Maybe some of this is actually going on in dreams and nightmares, too.

But there's another way to build a lookup table: directly from data, without running any simulation. I think we just naturally keep tabs of all sorts of things without even thinking about it. Arguably, most of our actions are being directed by these mental associations, and not by anything containing conscious models.

Here's an example of what I think is going on, mentally:

Someone said something that pattern matches as rash? Quick, scan through all the lookup tables within arm’s reach for callous-inputs. One output says joking. Another says accident. A third says he's being passive aggressive. Joking seems to pattern match the situation the best.

But oh look, you also ran it through some of the lookup tables for social simulations, and one came up with a flashing red light saying Gary's mom doesn't realize it was a joke.

That's awkward. You don't have any TAPs (Trigger Action Plans) installed for what to do in situations that pattern match to an authority figure misunderstanding a rude joke as serious. Your mind spirals out to less and less applicable TAP lookup tables, and the closest match is a trigger called "friend being an ass". You know he's actually joking, but this is the closest match, so you look at the action column, and it says to reprimand him, so you do.

Note that no actual modeling has occurred, and that all lookup tables used could have been generated purely experimentally, without ever consciously simulating anyone. This would explain why it's so hard to explain the parts of our model when asked: we have no model, just heuristics, and fuzzy gut feeling about the situation. Running the model again would fill in some of the details we've forgotten, but takes a while to run, and slows down the conversation. That level of introspection is fine in an intimate, introspective conversation, but if it's moving faster, the conversation will have changed topics by the time you've clarified your thoughts into a coherent model.

Most of the time though, I don't think we even explicitly think about the moving parts that would be necessary to build a model. Take lying, for example:

We rarely think "A wants B to think X about C, because A models B as modeling C in a way that A doesn't like, and A realizes that X is false but would cause B to act in a way that would benefit A if B believed it." (I'm not even sure that will parse correctly for anyone who reads it. That's kind of my point though.)

Instead, we just think "A told lie X to B about C". Or even just "A lied", leaving out all the specific details unless they become necessary. All the complexity of precisely what a lie is gets tucked away neatly inside the handle "lie", so we don't have to think about it or consciously model it. We just have to pattern match something to it, and then we can apply the label.

If pressed, we'll look up what "lied" means, and say that "A said X was true, but X is actually false". If someone questions whether A might actually believe X, we'll improve our model of lying further, to include the requirement that A not actually believe X. We'll enact a TAP to search for evidence that A thinks X, and come up with memories Y and Z, which we will recount verbally. If someone suspects that you are biased against A, or just exhibiting confirmation bias, they may say so. This just trips a defensive TAP, which triggers a "find evidence of innocence" action. So, your brain kicks into high gear and automatically goes and searches all your lookup tables for things which pattern match as evidence in your favor.

We appear to be able to package extremely complex models up into a single function, so it seems unlikely that we are doing anything different with simpler models of things like lying. There's no real difference in how complex the concept of god feels from the concept of a single atom or something, even though one has many more moving parts under the hood of the model. We're not using any of the moving parts of the model, just spitting out cached thoughts from a lookup table, so we don't notice the difference.

If true, this has a bunch of other interesting implications:

  • This is likely also why people usually act first and pick a reason for that choice second: we don't have a coherent model of the results until afterward anyway, so it's impossible to act like an agent in real time. We can only do what we are already in the habit of doing, by following cached TAPs. This is the reason behind akrasia, and the "elephant and rider" (System 1 and System 2) relationship.

  • Also note that this scales much better: you don't need to know any causal mechanisms to build a lookup table, so you can think generally about how arbitrarily large groups will act based only on past experience, without needing to build it up from simulating huge numbers of individuals.

  • It implies that we are just Chinese Rooms most of the time, since conscious modeling is rarely involved. Another way of thinking of it is that we keep the answers to the sorts of common computations we expect to do in (working?) memory, so that the more computationally intense consciousness can concentrate on the novel or difficult parts. Perhaps we could even expand our consciousness digitally, so that responses are recomputed from scratch every time.


[1] For the record, I don't think our minds have neat, orderly lookup tables. I think they use messy, associative reasoning, like the Rubes and Bleggs in How An Algorithm Feels From The Inside. This is what I'm referring to when I mention pattern matching, and each time I talk about looking something up in an empirically derived lookup table, a simulation input/results lookup table, or a TAP lookup table.

I think these sorts of central nodes with properties attached make up a vast, web-like network, built like network 2 in the link. All the properties are themselves somewhat fuzzy, just like the central "rube"/"blegg" node. We could deconstruct "cube" into constituent components the same way: 6 sides, all flat, sharp corners, sharp edges, sides roughly 90 degrees apart, etc. You run into the same mental problems with things like rhombohedrons, and are forced to improve your sloppy default mental conception of cubes somehow if you want to avoid ambiguity.

Each node is defined only by its relations to adjacent nodes, just like the central rube/blegg node. There are no labels attached to the nodes, just node clusters for words and sounds and letters attached to the thing they are meant to represent. It would be a graph theory monster if we tried to map it all out, but in principle you could do it by asking someone how strongly they associate various words and concepts.
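(If it helps, here is roughly the kind of data structure I have in mind, as a minimal sketch. The concepts and weights are made up, and a real associative network would obviously be vastly messier.)

```python
# Rough sketch of a weighted association graph. Edge weights stand in for
# "how strongly do you associate these two concepts?" (0 to 1); all numbers invented.

associations = {
    ("blegg", "blue"): 0.9,
    ("blegg", "egg-shaped"): 0.8,
    ("rube", "red"): 0.9,
    ("rube", "cube-shaped"): 0.8,
    ("cube", "6 sides"): 0.95,
    ("cube", "flat sides"): 0.9,
    ("cube", "90-degree angles"): 0.85,
}

def association_strength(a, b):
    """Symmetric lookup; concepts with no edge default to 0."""
    return associations.get((a, b), associations.get((b, a), 0.0))

def fuzzy_match(concept, observed_features):
    """How strongly an observed object activates a concept node:
    here, just the average association to each observed feature."""
    if not observed_features:
        return 0.0
    return sum(association_strength(concept, f) for f in observed_features) / len(observed_features)

# A rhombohedron-ish object: flat sides and 6 sides, but no right angles.
print(fuzzy_match("cube", ["6 sides", "flat sides"]))                      # strong activation
print(fuzzy_match("cube", ["6 sides", "flat sides", "no right angles"]))   # weaker, fuzzier
```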

Comment by WhySpace_duplicate0.9261692129075527 on Open thread, Feb. 06 - Feb. 12, 2017 · 2017-02-07T19:08:48.653Z · LW · GW

I use Metaculus a lot, and have made predictions on the /r/SpaceX subreddit which I need to go back and make a calibration graph for.

(They regularly bet donations of reddit gold, and have occasional prediction threads, like just before large SpaceX announcements. They would make an excellent target audience for better prediction tools.)

I've toyed with the idea of making a bot which searched for keywords on Reddit/LW, and tracked people's predictions for them. However, since LW is moving away from the reddit code base, I'm not sure if building such a bot would make sense right now.

Comment by WhySpace_duplicate0.9261692129075527 on new study finds performance enhancing drugs for chess · 2017-02-01T16:19:26.185Z · LW · GW

I'm worried that I found the study far more convincing than I should have. If I recall, it was something like "this would be awesome if it replicates. Regression toward the mean suggests the effect size will shrink, but still." This thought didn't stop me from still updating substantially, though.

I remember being vaguely annoyed at them just throwing out the timeout losses, but didn't discard the whole thing after reading that. Perhaps I should have.

I know about confirmation bias and p-hacking and half a dozen other such things, but none of that stopped me from overupdating on evidence I wanted to believe. So, thanks for your comment.

Comment by WhySpace_duplicate0.9261692129075527 on How often do you check this forum? · 2017-01-31T15:46:12.017Z · LW · GW

Ping pong.

Comment by WhySpace_duplicate0.9261692129075527 on Open thread, Jan. 23 - Jan. 29, 2017 · 2017-01-24T18:54:02.163Z · LW · GW

In that case, let me give a quick summary of what I know of that segment of effective altruism.

For context, there are basically 4 clusters. While many/most people concentrate on traditional human charities, some people think animal suffering matters more than 1/100th as much as human suffering, and so think animal charities are more cost effective. Those are the first 2 clusters of ideas.

Then you have people who think that movement growth is more important, since organizations like Raising for Effective Giving have so far been able to move something like $3 (I forget the exact figure) to effective charities for each dollar donated to them in a given year. Other organizations may have an even higher multiplier, but this is fairly controversial, because it's difficult to measure future impact empirically, and it risks turning EA into a self-promoting machine which achieves nothing.

The 4th category is basically weird future stuff. Mostly this is for people who think humans going extinct would be significantly worse than a mere 7 billion deaths would be. However, it's not exclusively focused on existential risk. Unfortunately, we have no good ways of even evaluating how effective various anti-nuclear efforts are at actually reducing existential risk, and it's even worse for efforts against prospective future technologies like AI. The best we can do is measure indirect effects. So the entire category is fairly controversial.

I would further divide the "weird future stuff" category into Global Catastrophic Risk/x-risk and non-GCR/x-risk stuff. For example, Brian Tomasik has coined the term s-risk for risks of astronomical future suffering. He makes a strong case for wild animals experiencing more net suffering than happiness, and so thinks that even without human extinction the next billion years are likely to be filled with astronomical amounts of animal suffering.

Within the GCR/x-risk half of the "weird future stuff" category, there appear to be maybe 4 or 5 causes I'm aware of. Nuclear war is the obvious one, along with climate change. I think most EAs tend to think climate change is important, but just not tractable enough to be a cost effective use of resources. The risk of another 1918 Flu pandemic, or of an engineered pandemic, comes up occasionally, especially with relation to the new CRISPR gene editing technology. AI is a big concern too, but more controversial, since it is more speculative. I'm not sure I've ever seen asteroid impacts or nanotechnology floated as a cost-effective means of reducing x-risk, but I don't follow that closely, so perhaps there is some good discussion I've missed.

Much or most of the effort I've seen is to better understand the risks, so that we can better allocate resources in the future. Here are some organizations I know of which study existential risk, or are working to reduce it:

  • The Future of Humanity Institute is at Oxford, and is led by Nick Bostrom. They primarily do scholarly research, and focus a good chunk of their attention on AI. There are now more academic papers published on human extinction than there are on dung beetles, largely due to their efforts to lead the charge.

  • Center for the Study of Existential Risk is out of Cambridge. I don't know much about them, but they seem to be quite similar to FHI.

  • Future of Life Institute was founded by a bunch of people from MIT, but I don't believe there is any official tie. They fund research too, but they seem to have a larger body of work directed at the general public. They give grants to researchers, and publish articles on a range of existential risks.

Perhaps there are discussion forums associated with these groups, but I'm unaware of them. There are a bunch of EA facebook groups, but they are mostly regional groups as far as I know. However, the EA forum and here are the closest things I know to what you're after.

Comment by WhySpace_duplicate0.9261692129075527 on Open thread, Jan. 23 - Jan. 29, 2017 · 2017-01-23T17:16:54.104Z · LW · GW

I checked their karma before replying, so I could tailor my answer to them if they were new. They have 1350 karma though, so I assume they are already familiar with us.

Same likely goes for the existential risk segment of EA. These are the only such discussion forums I'm aware of, but neither is x-risk only.

Comment by WhySpace_duplicate0.9261692129075527 on What would you like to see posts about? · 2017-01-20T19:15:03.699Z · LW · GW

math, physics, and computer science

Yes, yes, and yes.

surveys of subjects or subfields

You mean like a literature review, but aimed at people entirely new to the field? If so, Yes. If not, probably also yes, but I'll hold off on committing until I understand what I'm committing to.

instrumental rationality

No. Just kidding, of course it's a Yes.

Personally, I think that changing the world is a multi-armed bandit problem, and that EA has been overly narrow in the explore/exploit tradeoff, in part due to the importance/tractability/neglectedness heuristic. (And I can translate that sentence into English if the jargon is a bit much.)
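(For anyone who wants the jargon unpacked, here is a toy epsilon-greedy bandit sketch. The "causes" and their payoffs are invented purely for illustration, not real EA numbers; the point is just what "explore" buys you.)

```python
# Toy epsilon-greedy bandit, to make the explore/exploit jargon concrete.
import random

random.seed(0)
true_payoffs = {"cause_A": 1.0, "cause_B": 2.5, "cause_C": 0.5}  # unknown to the agent
estimates = {cause: 0.0 for cause in true_payoffs}
counts = {cause: 0 for cause in true_payoffs}
epsilon = 0.1  # fraction of effort spent exploring instead of exploiting

for step in range(1000):
    if random.random() < epsilon:
        cause = random.choice(list(true_payoffs))      # explore: try something at random
    else:
        cause = max(estimates, key=estimates.get)      # exploit: back the current best guess
    reward = random.gauss(true_payoffs[cause], 1.0)    # noisy observed impact
    counts[cause] += 1
    estimates[cause] += (reward - estimates[cause]) / counts[cause]  # running mean

print(estimates)  # roughly recovers the true payoffs; too little exploring can miss cause_B entirely
```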

I would like to see LW explore science, philosophy, and the world with an eye toward uncovering new things which are potentially big and important. (Hence, I'm a fan of Future of Humanity Institute, Foundational Research Institute, Principia Qualia, etc.) I suspect that in the next couple decades, we are likely to uncover multiple things as important or more important than AI takeoff scenarios, and the more we uncover the better.

Within the topics you mentioned, I'm particularly curious about:

  1. Mathematics: mathematical infinities and whether infinite utility might be attainable in our universe.

  2. Physics: condensed matter physics and entropy, with an eye toward engineering materials which will survive long into the heat death of the universe.

  3. Computer science: Everything seems to be built on simple binary Boolean logic, but obviously DNA uses base 4. (There are 4 bases.) So, I'm particularly interested in base 3 logic, many-valued logic, fuzzy logic, etc. I suspect these may have applications to quantum computers or novel architectures, where physics doesn't like to give you simple Boolean operators, but more complex operators are easier to implement, if not to understand.
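(As a taste of what "more than Boolean" looks like, here is a minimal sketch of Kleene-style three-valued logic, with None standing in for "unknown". It's only an illustration of many-valued logic in general, not a claim about what quantum or DNA-based hardware would actually implement.)

```python
# Kleene strong three-valued logic: values are True, False, and None ("unknown").

def and3(a, b):
    if a is False or b is False:
        return False          # one definite False settles it
    if a is None or b is None:
        return None           # otherwise an unknown keeps the result unknown
    return True

def or3(a, b):
    if a is True or b is True:
        return True           # one definite True settles it
    if a is None or b is None:
        return None
    return False

def not3(a):
    return None if a is None else (not a)

print(and3(True, None))   # None: can't conclude anything yet
print(or3(True, None))    # True: one True is enough
print(not3(None))         # None
```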

Comment by WhySpace_duplicate0.9261692129075527 on Infinite Summations: A Rationality Litmus Test · 2017-01-20T17:58:55.174Z · LW · GW

Edit: TL;DR: Mathematics is largely Ra worship, perhaps worse than even the more abstract social sciences. This means that That Magic Click never happens for most people. It's a prime example of "most people do not expect to understand things", to the point where even math teachers don't expect to understand math, and they pass that on to their students in a vicious cycle.

Surely as soon as you see the formula ... you know that you are dealing with some notion of addition that has been extended from the usual rules of addition.

Only if you know that it's possible to have multiple rules of addition. That's an unknown unknown for almost everyone on the planet. Most people aren't even familiar with the concept of unknown unknowns, and so are hopelessly far away from this in idea space. For them, they are more likely to just reject logic and math entirely as obviously wrong.

That requires being aware of the fact that addition can be constructed in multiple ways, which is very much NOT something you learn in school. They basically just present you with a series of weird looking "facts", and give a handwaving explanation. I suspect the vast majority of people, maybe even a narrow majority of LessWrongers, wouldn't even know that disagreeing with mathematics is something you're allowed to do. ("It's math, it's totally unambiguous, you can't just disagree about the results.") I suspect that's why this post has as many upvotes as it does, even if most of us are dimly aware of such things.

Let me try and explain where I'm coming from with this. I don't know about the rest of you, but I always went through the exact same procedure after learning each new layer of mathematics. It goes something like this:

Phase 1: Wait, 1234x5678 can be solved by multiplying 4x8, then 4x70, then 4x600, etc., then adding it all up??!! What are the chances of that algorithm in particular working? Of all the possible procedures, why not literally anything else?

Phase 2: Ok, I've done some simple examples, and it seems to produce the correct result. I guess I'll just have to grudgingly accept this as a brute fact about reality. It's an irreducible law that some ancient mathematician stumbled upon by accident, and then maybe did some complex and impenetrable sorcery to verify. Maybe someday I'll get a PhD in mathematics, and maybe then I'll understand what's going on here. Or maybe no one really understands it, and they just use a brute force solution. They just try every possible algorithm, in order of increasing Kolmogorov complexity, until one works. Pythagoras tried A+B+C=0, A+B=C, etc., until finding that A^2 + B^2 = C^2. Progress in mathematics is just an automated, mechanical process, like supercomputers doing things entirely at random, and then spitting out things that work. No one really understands the process, but blindly applying it seems to produce more useful math theorems, so they keep blindly turning the crank.

So, upon being told that A^2 + B^2 = C^2, or that 1+2+3+4+5+… = -1/12, my initial reaction is the usual disbelief, but with the expectation that after an hour or two of toying with numbers and banging my head against the wall trying to make sense of it, I'll invariably just give up and accept it as just one more impenetrable brute fact. After all, I've tried to punch holes in things like this ten thousand times before and never had any success. So, the odds of making any sense of it this time can't be more than 0.01% at most, especially with something so far above my head.

How can someone even do math without understanding what math is? Well, I can only offer my own anecdata:

I was always good at math through high school, but I suspect I spent twice as much time as everyone else doing the homework. (When I did it. I didn't bother if I could get A's despite getting 0's on my homework.) Most of this time was spent trying to decipher how what we were doing could possibly work, or solving the problems in alternate ways that made more sense to me.

When I hit Calculus in college, I promptly failed out because I didn't have enough time to do the homework or complete the tests my way. (I rarely just memorized formulas, but instead beat my head against the wall toying with them until I more or less knew the algorithm to follow, even if I didn't understand it. I didn't know about spaced repetition yet, so I was unable to memorize enough of the formulas to pass the tests, and didn't have time to derive them.)

I concluded that I was just bad at math, especially since I could never follow anything being written on the board, because I would get stuck trying to make sense of the first couple lines of any proof. I considered my mathematical curiosity a useless compulsion, and assumed my brain just didn’t work in a way that let me understand math. In retrospect, I don't think anyone else in any of the classes actually understood either, but were just blindly following the algorithms they had memorized.

Personally, I have acquired 3 clues that math isn't just a series of random brute facts:

  1. Philosophy of Mathematics has a divide between Mathematical Platonism and Empiricism. I was really confused to hear a calculus professor make an offhand empiricist remark, because I wasn't aware that there was an alternative to Platonism. I had always just assumed that math was a series of platonic ideal forms, suspended in the void, and then physics was just built up from these brute facts. The idea of math as a social construct designed to fit and understand reality was bizarre. It wasn't until I read Eliezer's The Simple Truth and How to Convince Me That 2+2=3 that it really clicked.

  2. I stumbled upon A Mathematician's Lament, and gained a bunch of specific insight into how new mathematical ideas are created. It's difficult to sum up in just a few words, but Lockhart argues that the way we teach mathematics is like teaching music by having kids memorize and follow a vastly complex set of musical rules and notations, never letting them touch an instrument or hear a note until graduate school. After all, without the proper training, they might do it wrong. He argues that mathematics should be a fundamentally creative process. It is just a bunch of rules made up by curious people wondering what would happen to things if they applied those rules. Previously, whenever I saw a new proof, I'd spend hours trying to figure out why they had chosen those particular axioms, and how they knew to apply them like that. I could never understand, and figured it was way beyond my grade. Lockhart provides a simple explanation, which has since saved me many hours of handwringing: they were just playing around, and noticed something weird or cool or interesting or potentially useful. They then played around with things, experimenting with different options to see what would happen, and eventually worked their way toward a proof. Their original thought process was nothing like the mysterious series of steps we memorize from the textbook to pass the test. It was exactly the sort of thing I was doing when I was toying with numbers and formulas, trying to make sense of them.

  3. I recently taught myself some lambda calculus. ("Calculus" here doesn't mean integration and differentiation, but only the simplest forms of operations. In fact, the basics are so simple that someone made a children's game called Alligator Eggs out of the rules of lambda calc.) It's basically just a simple set of rules that you can string together and use to build up some interesting properties, including AND, OR, IF, IFF operators, integers, and addition/subtraction.
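(A minimal sketch of the flavor of this, using Python lambdas as a stand-in for real lambda calculus: Church booleans and an IF built out of nothing but functions. The encoding shown is one standard choice, not the only one.)

```python
# Church booleans: TRUE picks its first argument, FALSE picks its second.
TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y

# Logical operators built purely from function application.
AND = lambda p: lambda q: p(q)(p)
OR  = lambda p: lambda q: p(p)(q)
IF  = lambda c: lambda t: lambda f: c(t)(f)

# Convert back to ordinary Python booleans, just for printing.
to_bool = lambda b: b(True)(False)

print(to_bool(AND(TRUE)(FALSE)))               # False
print(to_bool(OR(TRUE)(FALSE)))                # True
print(IF(TRUE)("then-branch")("else-branch"))  # "then-branch"
```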

Let me tie it all back together. Apparently there are multiple ways of building up to operators like this, and lambda calc is just 1 of several possibilities. (And I would have been mystified as to why the rules of lambda calc were chosen if it weren't for reading A Mathematician's Lament first.) Under the mathematical empiricist view, by extension, it's not just how we build up to such operators that's arbitrary. It's ALL OF MATHEMATICS that's arbitrary. We just focus on useful operators instead of useless ones that don't fit reality. Or not, if we find other things interesting. No one expected non-Euclidean geometry to be useful, but as it turns out spacetime can warp, so it drifted into the domain of applied mathematics. But it started as someone toying around just for lolz.

Comment by WhySpace_duplicate0.9261692129075527 on Open thread, Jan. 02 - Jan. 08, 2017 · 2017-01-03T17:00:03.913Z · LW · GW

That looks like a useful way of decreasing this failure mode, which I suspect we LWers are especially susceptible to.

Does anyone know any useful measures (or better yet heuristics) for how many gears are inside various black boxes? Kolmogorov complexity (from Solomonoff induction) is useless here, but I have this vague idea that the ordering goes something like: chaotic systems > weather forecasting > the average physics simulation > simple math problems I can solve exactly by hand.

However, that's not really useful if I want to know how long it would take to do something novel. For example, I’m currently curious how long it would take to design a system for doing useful computation using more abstract functions instead of simple Boolean logic. Is this a weekend of tinkering for someone who knows what they are doing? Or a thousand people working for a thousand years?

I could look at how long it took to design some of the first binary or ternary computers, and then nudge it up by an order of magnitude or two. However, I could also look at how long it takes to write a simple lambda-calculus compiler, and nudge up from that. So, that doesn't narrow it down much.

How should I even go about making a Fermi approximation here? And, by extension, what generalized principles can we apply to estimate the size of such black boxes, without knowing about any specific gears inside?

Comment by WhySpace_duplicate0.9261692129075527 on Open thread, Dec. 19 - Dec. 25, 2016 · 2016-12-19T19:18:26.668Z · LW · GW

off-by-one errors would go away.

I always have to consciously adjust for off-by-one errors, so this sounded appealing at first. (In the same way that Tau is appealing.)

However, after a bit of thought, I'm unable to come up with an example where this would help. Usually, the issue is that the 7th thru 10th seats aren't actually 10-7=3 seats, but 4 seats (the 7th, 8th, 9th, and 10th seats).

Calling them the 6th thru 9th seats doesn't solve this. Can someone give an example of what it does solve, outside of timekeeping? (i.e., anything that counting time in beats wouldn't solve?)
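(A quick sanity check of the point, as a toy sketch: shifting the labels down by one doesn't remove the +1 you need for inclusive ranges.)

```python
# Whether seats are labeled starting at 1 or at 0, "the 7th through 10th seats"
# is an inclusive range, so the count is (last - first + 1), not (last - first).

one_indexed  = list(range(7, 10 + 1))   # seats 7..10 -> [7, 8, 9, 10]
zero_indexed = list(range(6, 9 + 1))    # the same seats, labels shifted down by one

print(len(one_indexed), len(zero_indexed))   # 4 4 -- the fencepost is still there
```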

Comment by WhySpace_duplicate0.9261692129075527 on Circles of discussion · 2016-12-17T15:08:50.736Z · LW · GW

BTW it's "canon" not "cannon" - cheers!

Thanks for the correction. I always worry that I'll make similar mistakes in more formal writing.

Comment by WhySpace_duplicate0.9261692129075527 on Circles of discussion · 2016-12-16T18:07:27.374Z · LW · GW

I don't really understand the reasons behind a lot of the proposed site mechanics, but I've been toying around with an idea similar to your slider, but for a somewhat different purpose.

Consider this paradox:

  1. As far as I can tell, humor and social interaction are crucial to keeping a site fun and alive. People have to be free to say whatever is on their mind, without worrying too much about social repercussions. They have to feel safe and be able to talk freely.

  2. This is, to some extent, at odds with keeping quality high. Having extremely high standards is one of the things that makes LW valuable, and gives it such a high signal-to-noise ratio.

So, how do we cope with this dichotomy? One way is to allow users to either submit a comment/post to the outer circles, or to an inner one. I think this is part of what we were going for with the Discussion/Main dichotomy, but no one posted to Main, so people don't even check it anymore. But, because of our quality standards for Discussion, people also hadn't felt comfortable posting there, until recently when things have started picking up with a lot of good interesting articles. So, most of the actual discussion got pushed to the weekly open threads, or off the site entirely.

One way around this would be to have 2 "circles" as you call them. Users tag their own comments and submissions as either "cannon" or "non-cannon", based on epistemic status, whether they've thought about it for at least 5 min, whether it's informative or just social, whether they've read the Sequences yet or are a newbie, etc. You could, of course, add more circles for more granularity, but 2 is the minimum.

Either way, it's extremely important that the user's self-rating is visible, alongside the site's rating, so that people aren't socially punished for mediocre or low quality content if they made no claim to quality in the first place. This allows them to just toss ideas out there without having totally refined potential diamonds in the rough.

An interesting thing you could do with this, to discourage overconfidence and encourage the meek, would be to show the user their calibration curve. That is, if they routinely rank their own comments as outer circle quality, but others tend to vote them up to inner quality status, the system will visually show a corrected estimate of quality when they slide the bar on their own comment.

Maybe even autocorrect it, so that if someone tries to rate a comment with 1 star, but their average 1 star comment is voted to 3 stars, then the system will start it at 3 stars instead. Probably best to let people rate them themselves, though, since the social pressure of having to live up to the 3 star quality might cause undue stress, and lead to less engagement.
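(Here is a rough sketch of what that autocorrection could look like. The history data and the blending rule are made up, and a real implementation would presumably want something more principled.)

```python
# Nudge a user's self-rating toward where the community has historically
# placed their similarly self-rated comments. All numbers are invented.

history = {
    # self_rating -> final community ratings of past comments with that self-rating
    1: [3, 2, 4, 3],
    2: [3, 3, 2],
    3: [3, 4, 3],
}

def suggested_start(self_rating, weight=0.5):
    """Blend the user's own rating with their historical average outcome."""
    past = history.get(self_rating)
    if not past:
        return float(self_rating)    # no track record yet: take them at their word
    historical_avg = sum(past) / len(past)
    return (1 - weight) * self_rating + weight * historical_avg

print(suggested_start(1))   # a habitual self-underrater gets nudged up toward ~2
```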

Comment by WhySpace_duplicate0.9261692129075527 on Open thread, Dec. 12 - Dec. 18, 2016 · 2016-12-14T04:08:27.494Z · LW · GW

Precisely my reaction. I aim for midnight-1:00, but consider 2:00 or 3:00 a mistake. 4:00 or 5:00 is regret incarnate.

Comment by WhySpace_duplicate0.9261692129075527 on 2016: a year in review in science · 2016-12-12T00:47:57.269Z · LW · GW

Personally, I'm excited about the formation of Solid Metallic Hydrogen in the lab. (Although, it only has 52% odds of being a big deal, as measured by citation count.) SMH may be stable at room temperature, and the SMH to gas phase transition could release more energy than chemical reactions do, making it more energy dense than rocket fuel. Additionally, there's like a ~35% chance of it superconducting at room temperature.

(As a side note, does anyone know whether something like this might make fusion pressures easier to achieve? I realize starting off a little more dense than other forms of solid hydrogen won't help much, but could the compression force and heating from the energy released supply the energy needed, similar to General Fusion's approach but starting with a solid instead of a plasma? Or, if it superconducts and has a high critical current density, could that enable stronger magnetic fields and stronger z pinch, or is that not the limiting factor?)

Comment by WhySpace_duplicate0.9261692129075527 on Which areas of rationality are underexplored? - Discussion Thread · 2016-12-05T17:07:52.322Z · LW · GW

group theory / symmetry

The Wikipedia page for group theory seems fairly impenetrable. Do you have a link you'd recommend as a good place to get one’s feet wet in the topic? Same with symmetry.

Thanks!

Comment by WhySpace_duplicate0.9261692129075527 on Downvotes temporarily disabled · 2016-12-02T19:29:48.862Z · LW · GW

...reactions like this,...

The relevant bit from the link:

... I'll happily volunteer a few hours a week.

EDIT: AAAUUUGH REDDIT'S DB USES KEY-VALUE PAIRS AIIEEEE IT ONLY HAS TWO TABLES OH GOD WHY WHY SAVE ME YOG-SOTHOTH I HAVE GAZED INTO THE ABYSS AAAAAAAIIIIGH okay. I'll still do it. whimper

Comment by WhySpace_duplicate0.9261692129075527 on Epistemic Effort · 2016-11-30T05:46:07.909Z · LW · GW

My reaction was the complete opposite: an excellent signaling tool.

If I just made a connection between 2 things, and want to bounce ideas off people, I can just say Epistemic effort: Thought about it musingly, and wanted to bounce the idea off a few people and no one will judge me for having a partially formed idea. Perhaps more importantly, anyone not interested in such things will skip the article, instead of wasting their time and feeling the need to discourage my offending low quality post.

I'm not a fan of "brainstorming" in particular, but there really does seem to be a problem that brainstorming is trying to solve, and I think this would help solve it. Refining a diamond in the rough doesn't have to be a solitary activity; it can be a community task.

Comment by WhySpace_duplicate0.9261692129075527 on Expert Prediction Of Experiments · 2016-11-30T05:07:34.945Z · LW · GW

Thanks again. I haven't actually read the book, just Yvain's review, but maybe I should make the time investment after all.

Comment by WhySpace_duplicate0.9261692129075527 on Why GiveWell can't recommend MIRI or anything like it · 2016-11-30T05:01:33.641Z · LW · GW

Footnote: https://en.wikipedia.org/wiki/Streetlight_effect

Comment by WhySpace_duplicate0.9261692129075527 on Expert Prediction Of Experiments · 2016-11-29T18:12:39.127Z · LW · GW

Thanks for the comment. It's humbling to get a glimpse of vastly different modes of thought, optimized for radically different types of problems.

Like, I feel like I have this cognitive toolbox I've been filling up with tools for carpentry. If a mode of thought looks useful, I add it. But then I learn that there's such a thing as a shipyard, and that they use an entirely different set of tools, and I wonder how many such tools people have tried to explain to me only for me to roll my eyes and imagine how poorly it would work to drive a nail. When all you have is a hammer, everything is seen through the lens of the nail-driving paradigm.

My hindsight bias thinks it's obvious that Tetlock's questions might bias the sample toward foxes over hedgehogs. He's asking people to make predictions about a wide range of domains. I predict that, if you asked leading experts in various subfields to make predictions about the results of experiments being conducted within their subfield, they would trounce outsiders, especially if the questions were open ended. (By open ended, I mean "how will the economy change if X happens" rather than "Will housing prices go up or down if X happens".)

It would be interesting if they also could make better predictions on similar types of problems. For example, are there "soft" problems, with thousands of variables and thousands of unknowns, which experts in the "soft" sciences are systematically better at solving?

It seems plausible that the "hard sciences" mental models necessary to solve problems with a couple of variables and 1 or 2 unknowns might not work well for soft problems. After all, the 2 mental architectures useful for these look radically different. The point of associative reasoning isn't to evaluate which of the first couple hypotheses is most likely. It's to make sure that the biggest factors are even on your list of potentially influential variables. Simply evaluating a couple random items on a long-tailed Pareto chart of solutions will radically underperform even the briefest search through hypothesis space. The point is to be able to narrow down the space of hypotheses from trillions to a few dozen. It's the "explore" side of the explore/exploit tradeoff.

It would be interesting to test this, however. What happens if you take a Nobel laureate with a prize in medicine, and another with a prize in physics, give them enough funding to conduct a large number of experiments, and ask them both to spend a couple years trying to solve a wicked problem in an unrelated field? Does the physicist overestimate how applicable his or her knowledge is to the problem? Might the medical researcher take a more high-level, holistic approach, and have higher odds of success? Perhaps a less thorough but easier experiment would be to see how the two types of minds performed on candle problems.

(Also, on my second read through of your comment, I noticed an open parenthesis (which creates an unresolved tension which will stay with me all day).

Comment by WhySpace_duplicate0.9261692129075527 on A Return to Discussion · 2016-11-28T03:59:13.591Z · LW · GW

Maybe we could start tagging such stuff with epistemic status: exploratory or epistemic status: exploring hypotheses or something similar? Sort of the opposite of Crocker's rules, in effect. Do you guys think this is a community norm worth adding?

We have a couple concepts around here that could also help if they turned into community norms on these sorts of posts. For example:

  • triangulating meaning: If we didn't have a word for "bird", I might provide a penguin, an ostrich, and an eagle as the most extreme examples which only share their "birdness" in common. If you give 3+ examples of the sort of thing you're talking about, generally people will be able to figure out what the 3 things have in common, and can narrow things down to more or less the same concept you are trying to convey to them.

  • Principle of Charity: I think we pretty much have this one covered. We do have a bad nitpicking habit, though, which means...

  • Steel manning: If I'm trying to build up an idea, but it's only in the formative stages, it's going to have a lot of holes, most of which will be fairly obvious. This means making a lot of sweeping generalizations while explaining.

These are literally just the first couple things that popped into my head, so feel free to suggest others or criticize my thoughts.

In general, it seems like such discussions should be places to share related anecdotes, half-baked thoughts on the matter, and questions. Criticism might be rephrased as questions about whether the criticism applies in this instance. Those who don't "get" what is being gestured at might be encouraged to offer only questions.

I remember some study about innovation, which found that a disproportionate amount happened around the water cooler. Apparently GPS was invented by a bunch of people messing around and trying to figure out if they could triangulate Sputnik's position, and someone else wondering whether they could do the reverse and triangulate their own position from satellites with known orbits. We need places for that sort of aimless musing if we want to solve candle problems.

More broadly, we could start applying some of these norms to Discussion. After all, it's supposed to be for, you know, discussion. :p I think it's long overdue.

Comment by WhySpace_duplicate0.9261692129075527 on If we can't lie to others, we will lie to ourselves · 2016-11-28T02:21:10.101Z · LW · GW

I'd agree with you that most abstract beliefs aren't needed for us to simply live our lives. However, it looks like you were making a normative claim that to minimize bias, we shouldn't deal too much with abstract beliefs when we can avoid it.

Similarly, IIRC, this is also your answer to things like discussion over whether EMs will really "be us", and other such abstract philosophical arguments. Perhaps such discussion isn't tractable, but to me it does still seem important for determining whether such a world is a utopia or a dystopia.

So, I would have argued the opposite: try to develop a good, solid, comprehensive set of abstract principles, and then apply them uniformly to object-level decisions. This should help us optimize for the sorts of things our talk and thoughts are optimized for, and minimize the influence of our other biases. I am my conscious mind, so why should I care much what my subconscious wants?


Here's a bit more detail, if you are (or anyone else is) curious. If you've heard these sorts of arguments a hundred times before, feel free to skip and link to a counterargument.

Predicting how an unreflective society will actually react may be easier than this sort of philosophy, but social acceptance seems necessary but not sufficient here. Under my view, Oedipus Rex's life accomplishments might still have negative utility to him, even if he lived a happy life and never learned who his mother was. Similarly, the Star Trek universe quickly turns from a utopia to a dystopia if teleportation technically counts as death, or the moral equivalent, according to human minds who know all the facts and have heard all the arguments. (Klingons may come to different conclusions, based on different values.) I'm not a vegan, but there appears to be a small chance that animals do have significant moral weight, and we're living in a Soylent Green style dystopia.

I would argue that ignoring the tough philosophical issues now dooms us to the status quo bias in the future. To me, it seems that we're potentially less biased about abstract principles that aren't pressing or politically relevant at the moment. If I've thought about trolley problems and whatnot before, and have formed abstract beliefs about how to act in certain situations, then I should be much less biased when an issue comes up at work or a tough decision needs to be made at home, or there's a new political event and I'm forming an opinion. More importantly, the same should be true of reasoning about the far future, or anything else.

Comment by WhySpace_duplicate0.9261692129075527 on If we can't lie to others, we will lie to ourselves · 2016-11-28T00:40:47.514Z · LW · GW

Believe Less.

As in, believe fewer things and believe them less strongly? By assigning lower odds to beliefs, in order to fight overconfidence? Just making sure I'm interpreting correctly.

don't bother to hold beliefs on the kind of abstract topics

I've read this sentiment from you a couple times, and don't understand the motive. Have you written about it more in depth somewhere?

I would have argued the opposite. It seems like societal acceptance is almost irrelevant as evidence of whether that world is desirable.

Comment by WhySpace_duplicate0.9261692129075527 on Less costly signaling · 2016-11-26T03:58:41.104Z · LW · GW

Normally, I'm pretty good at remembering sources I get info from, or at least enough that I can find it again quickly. Not so much in this case. This was about halfway through a TED talk, but unfortunately TED doesn't search their "interactive transcripts" when you use the search function on their page. A normal web search for the sorts of terms I remember doesn't seem to be coming up with anything.

I scanned through all the TED talks in my browser history without much luck, but I have this vague notion that the speaker used the example to make a point about the importance of risk taking or something. But that doesn't really narrow down the search space within TED much, so I can't use it as a heuristic to screen search results.

Unless you want to scan through a couple hundred or thousand TED transcripts, or know of a way to search TED for keywords not in the titles, I'm at a dead end. Sorry.