Open & Welcome Thread – April 2021 2021-04-04T19:25:09.049Z
An Exploratory Toy AI Takeoff Model 2021-01-13T18:13:14.237Z
Range and Forecasting Accuracy 2020-11-16T13:06:45.184Z
Considerations on Cryonics 2020-08-03T17:30:42.307Z
"Do Nothing" utility function, 3½ years later? 2020-07-20T11:09:36.946Z
niplav's Shortform 2020-06-20T21:15:06.105Z


Comment by niplav on bfinn's Shortform · 2021-07-19T12:16:35.551Z · LW · GW

Two comments:

There could be something that is at least deontologically iffy about killing gigantic numbers of sentient beings for comparatively pedestrian purposes. If one isn't completely certain of consequentialism, then that might weigh into one's considerations.

It also seems much less likely that animals will come to be treated well enough than that factory farming will be outlawed and/or superseded by clean meat. This is kind of answered by your response to the hypothetical objection in your last paragraph, though.

Comment by niplav on Longtermism vs short-termism for personal life extension · 2021-07-17T13:54:13.116Z · LW · GW

On the risk of possibly distracting from the core point of the post, other possible short-termist approaches to longevity include ones that work against the speeding up of the subjective experience of time with age, for example by increasing the novelty in one's own life (there's also some research on meditation having that effect, though I haven't looked into it very much).

Comment by niplav on "If and Only If" Should Be Spelled "Ifeff" · 2021-07-17T00:54:28.077Z · LW · GW

This is my claim to fame (CTRL-F "ifef").

Comment by niplav on Thoughts on Human Models · 2021-07-13T15:11:52.083Z · LW · GW

As far as I understand the post, a system that wouldn't contain human values but would still be sufficient to drastically reduce existential risk from AI would not need to execute an action that has a specific effect on humans. If I'm getting the context right, it refers to something like task-directed AGI that would allow the owner to execute a pivotal act – in other words, this is not yet the singleton we want to (maybe) finally build that CEVs us out into the universe, but something that enables us to think long & careful enough to actually build CEV safely (e.g. by giving us molecular nanotechnology or uploading that perhaps doesn't depend on human values, modeled or otherwise).

Or have I misunderstood your comment?

Comment by niplav on [deleted post] 2021-07-07T19:00:48.872Z

Is the desire/cesire pun a thing that has had similar usage somewhere else (e.g. on LW or another website)? A cursory web search gives no results.

Comment by niplav on Taboo "Outside View" · 2021-06-19T14:25:41.716Z · LW · GW

Earlier proposal for more precise terminology: Gears Level & Policy Level by Abram Demski.

Comment by niplav on Bioinfohazards · 2021-06-11T12:20:58.464Z · LW · GW

I'm surprised by the answers to the second question. In conversations I've had in EA circles about biorisk, infohazards have never been brought up.

Perhaps there is some anchoring going on here?

Comment by niplav on Qria's Shortform · 2021-06-08T07:54:04.452Z · LW · GW

Surely you've heard the adage that humans can adapt to anything? They have probably adapted to death, and that psychological adaptation has likely been with humans since they became smart enough to understand that death is a thing – I would expect it to be really hard to change or remove (in fact, Terror Management Theory goes even further and argues that much of our psychology is built on the denial of, or dealing with, death).

Comment by niplav on Explanation vs Rationalization · 2021-06-03T12:44:43.929Z · LW · GW

This is nicely symmetric with Socratic Grilling on the other side (how can I explain without looking like I want to force the conclusion ←→ how can I ask questions without seeming confrontational/focused on rejecting the conclusion).

Also, "There's lots of room in interior design", lol. Thank you.

Comment by niplav on Mistakes with Conservation of Expected Evidence · 2021-05-31T13:42:23.942Z · LW · GW

Part 2 (and the dream algorithm) remind me of semi-decidability.

Comment by niplav on Moloch Hasn’t Won · 2021-05-31T13:41:54.516Z · LW · GW

There is a big difference between a universe with -hugenum value, and a universe with 0 value. Moloch taking over would produce a universe with 0 value, not -hugenum (not just because we might assume that pain and pleasure are equally energy-efficient).

When one then considers how much net value there is in the universe (or how much net disvalue!), I suspect Elua winning, while probably positive, isn't that great: sure, sometimes someone learns something in the education system, but many other people also waste intellectual potential, or get bullied.

Comment by niplav on Open and Welcome Thread - May 2021 · 2021-05-11T17:53:50.806Z · LW · GW

Out of interest, is there a public registry of bans? I assume not all bans are announced as in the case of ialdabaoth?

Comment by niplav on Open and Welcome Thread - May 2021 · 2021-05-11T17:49:21.213Z · LW · GW

Thanks for your answer! I had an inkling that there's more very small housing in Asia, but had never seen such a clear exposition of a concrete example. I'm from Europe, not the US, but the two are fairly similar culturally (although I suspect Europe might have even stronger housing regulations than the US).

After some of the comments here, I've settled on a mixture of "it's the regulations" and "not *that* many people want it, but it's still available for the ones who do". I think that's because the need for dense housing during the industrial revolution was a long time ago, and the majority of people don't need/want nano-apartments, so they don't care/think about the possibility of very dense housing.

My guess would be that it's different in Asia because industrial development there is much younger, and the population is more used to "poor" and less luxurious living conditions.

Do you think that's getting at the truth?

Comment by niplav on Vulkanodox's Shortform · 2021-05-11T17:41:16.743Z · LW · GW

You're right, my example is not censorship.

Comment by niplav on Vulkanodox's Shortform · 2021-05-06T21:35:49.513Z · LW · GW

Yeah, good point about control through a third party/vs. the author themselves.

Tangentially related: my intuition is that there's a spectrum between categorization & censorship – burying comments and hiding heavily downvoted threads (i.e. making people click extra to see them) are just some trivial inconveniences. The Great Firewall of China is not difficult to circumvent, but >90% of people can't be bothered to set up a VPN.

I really like this paragraph of yours:

About the quality/truth aspect I agree but any system currently used is not reflecting that. If somebody makes a post it is rated for quality/truth by other people. But nobody rates their rating. People can just vote down or up without it reflecting the truth or quality. I can downvote your comment even if it is true because I do not like you.

I wonder what would happen if sites allowed higher-order voting (voting about votes themselves). Or does voting itself already solve the necessary problems?

As for checking truth/relevance, I'm a big fan of Metaculus. Sure, it has an up/downvote functionality for comments/questions, but there's still an inbuilt mechanism for deciding who was right with their predictions in the end (and if you are prescient, people will respect you more).

I disagree with you on "good" content, though. On the very basic level, there's stuff I like (and would like to like, and so on), and stuff I don't like (or whose disliking I'd endorse, and so on). I realize other people are similar to that, and will respect their recommendations (e.g. LessWrong upvotes). This "liking" already includes stuff from different viewpoints – anarchist and communization writings, social choice theory and deleuze etc.

And while I don't know how you organise your social interactions, I (mostly subconsciously) perform a lot of social filtering for people who say interesting and smart things, and probably also for people who agree with me in their basic outlook on life. Not completely, of course, but I'd be surprised if others didn't do this too.

Comment by niplav on Vulkanodox's Shortform · 2021-05-06T13:15:05.416Z · LW · GW

How is filtering for quality/truth performed, then? The only website that approaches non-censorship is 4chan, and while I think that 4chan is probably more valuable than not (although I see why that could be debatable), I don't think it's the only viable way of organizing a website.

The comparison with the real world falls flat due to the much greater amount of content on internet fora.

Comment by niplav on Range and Forecasting Accuracy · 2021-05-04T22:53:35.764Z · LW · GW

Yep, I share your concerns! I wanted to include them in the post, but then I got busy. Perhaps I'll update it in the foreseeable future (no promises however, I'm pretty busy with other things). Maybe I'll just put a warning at the top of the article.

And, in case you publish your stuff, I'd love to read it.

Comment by niplav on Open and Welcome Thread - May 2021 · 2021-05-03T20:08:12.576Z · LW · GW

That's disheartening :-(

But good to know nonetheless, thanks.

Perhaps not a *completely* senseless regulation considering disease spread (though there are better ways of attacking _that_).

Comment by niplav on Open and Welcome Thread - May 2021 · 2021-05-03T19:56:20.854Z · LW · GW

Why Not Nano-Apartments?

There seem to be goods of many different sizes and price-tags, with people being able to buy in bulk or the bare minimum – e.g. transportation: walking, biking, public transport, leasing a car, owning a car, or flying by helicopter.

However, the very small scale for apartments seems to be neglected – cheap apartments are often in bad neighbourhoods, with longer commutes and worse living conditions, but rarely just extremely small (<10 m²). But one could easily imagine 5 m² apartments, with just a bed & a small bathroom (or even smaller options with a shared bathroom). However, I don't know of people renting/buying these kinds of apartments – even though they might be pretty useful if one wants to trade size against good location.

Why, therefore, no nano-apartments?

Possible reasons:

No Supply

Perhaps nano-apartments are not economically viable to rent out. Maybe the fixed cost per apartment is so high that it's not worth it below a certain size – every additional tenant is an additional burden, and the plumbing, upkeep of stairways, and organising of trash & electricity just aren't worth it. Or perhaps the share of floor space lost to walls is too big – the more separate apartments you want to create, the more floor space goes to the walls separating them, and below some size around 15 m² it's just not worth it.

Another possibility is that there are regulations dictating the minimal size of apartments (or something that effectively leads to apartments having a minimal size).

No Demand

I could be over-estimating the number of people who'd like to live in such an apartment. I could see myself renting one, especially if the location is very good – I'm glad to trade off space against having a short commute. But perhaps I'm very unusual in this regard, and most people trade off more harshly against the size of the apartment, due to owning just too much stuff to fit into such a small place.

Or the kinds of people who would make this kind of trade-off just move into a shared flat, and bear the higher costs (but most rooms in shared apartments are still larger than 10 m²).

The group of people who would rent those nano-apartments would naturally be young singles who want to save money and live in the city; perhaps that group is just too small, or already served by university dorms?

So, why are there no nano-apartments? Does anyone have more insight into this? (The title is, of course, a hansonism).

Comment by niplav on TurnTrout's shortform feed · 2021-05-01T19:39:25.607Z · LW · GW

With 5999 karma!

Edit: Now 6000 – I weak-upvoted an old post of yours I hadn't upvoted before.

Comment by niplav on [Letter] Advice for High School #2 · 2021-05-01T13:10:25.160Z · LW · GW

I guess both stages, but more the willingness to think & the ease of thinking.

Comment by niplav on [Letter] Advice for High School #2 · 2021-04-30T20:04:35.811Z · LW · GW

Yeah, that clears things up. Thanks!

Comment by niplav on [Letter] Advice for High School #2 · 2021-04-30T14:58:01.248Z · LW · GW

I think your perspective on Intelligence vs. Willingness to Think is interesting, but wrong – my model is that how willing you are to think is strongly correlated with how easy thinking is for you, and how easy thinking is for you is pretty directly just what intelligence is (yes, correlation isn't transitive, and tails come apart, but I think both hold in general for non-weird cases).

Comment by niplav on What topics are on Dath Ilan's civics exam? · 2021-04-27T14:14:10.110Z · LW · GW
Comment by niplav on Reframing Impact · 2021-04-19T20:56:06.464Z · LW · GW

If the question about accessibility hasn't been resolved, I think Ramana Kumar was talking about making the text readable for people with visual impairments.

Comment by niplav on On Sleep Procrastination: Going To Bed At A Reasonable Hour · 2021-04-17T09:27:32.195Z · LW · GW

Seconding the recommendation. iamef, maybe you want to play around with the dose; the usual dose is too high, and maybe you could take it ~3-4 hours before going to bed. (If you've already tried that, please ignore this).

Comment by niplav on deluks917's Shortform · 2021-04-14T18:53:12.966Z · LW · GW

I remember Yudkowsky asking for a realistic explanation for why the Empire in Star Wars is stuck in an equilibrium where it builds destroyable gigantic weapons.

Comment by niplav on What weird beliefs do you have? · 2021-04-14T16:46:49.547Z · LW · GW

Does this include extreme examples, such as pieces of information that permanently damage your mind when exposed to, or antimemes?

Have you made any changes to your personal life because of this?

Comment by niplav on Auctioning Off the Top Slot in Your Reading List · 2021-04-14T07:40:26.444Z · LW · GW

I predict that this will not become popular, mostly because of the ick-factor around monetary transactions between individuals that most people have.

However, the inverse strategy seems just as interesting (and more likely to work) to me.

Comment by niplav on What if AGI is near? · 2021-04-14T07:33:50.991Z · LW · GW

I want to clarify that "AGI go foom!" is not really concerned with the nearness of the advent of AGI, but with whether AGIs have a discontinuity that results in an acceleration of the development of their intelligence over time.

Comment by niplav on Book Review: The Secret Of Our Success · 2021-04-13T19:46:53.379Z · LW · GW

For completion, here's the prediction on the naive theory, namely that intelligence is instrumentally useful and evolved because solving problems helps you survive.

Comment by niplav on niplav's Shortform · 2021-04-13T12:56:17.826Z · LW · GW

Isn't life then a quine running on physics itself as a substrate?

I hadn't considered thinking of quines as two-place, but that's obvious in retrospect.
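For anyone who hasn't seen one: a quine is a program that prints its own source code, nothing more. A minimal sketch in Python (one standard construction, not the only one):

```python
# A quine: the string contains a template of the whole program,
# and printf-style formatting substitutes the string into itself.
# %r inserts the repr of the string; %% is a literal percent sign.
src = 'src = %r\nprint(src %% src)'
print(src % src)
```

Running this prints exactly the two source lines above, so the "interpreter" (here, Python's `%` formatting plus `print`) is the second argument in the two-place view: the same text is only a quine relative to it.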

Comment by niplav on Why you should consider buying Bitcoin right now (Jan 2015) if you have high risk tolerance · 2021-04-12T20:49:09.234Z · LW · GW

Let the record show that 6 years later, the price of bitcoin has increased 250-fold over the price at the time at which this article was written.
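For scale (my own arithmetic, not from the original post): a 250-fold increase over 6 years implies roughly a 2.5x growth factor per year:

```python
# Annualized growth factor implied by a 250-fold increase over 6 years:
# solve factor ** 6 == 250 for factor.
factor = 250 ** (1 / 6)
print(round(factor, 2))  # → 2.51, i.e. roughly +150% per year
```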

Comment by niplav on niplav's Shortform · 2021-04-11T21:23:14.430Z · LW · GW

Life is quined matter.

Comment by niplav on niplav's Shortform · 2021-04-10T09:54:15.567Z · LW · GW

Right, my gripe with the argument is that these first two assumptions are almost always unstated, and most of the time when people use the argument, they "trick" people into agreeing with assumption one.

(for the record, I think the first premise is true)

Comment by niplav on niplav's Shortform · 2021-04-09T21:31:30.104Z · LW · GW

The child-in-a-pond thought experiment is weird, because people use it in ways it clearly doesn't work for (especially in arguing for effective altruism).

For example, it observes you would be altruistic in a near situation with the drowning child, and then assumes that you ought to care about people far away as much as people near you. People usually don't really argue against this second step, but very much could. But the thought experiment makes no justification for that extension of the circle of moral concern, it just assumes it.

Similarly, it says nothing about how effectively you ought to use your resources, only that you probably ought to be more altruistic in a stranger-encompassing way.

But not only does this thought experiment not argue for the things people usually use it for, it's also not good for arguing that you ought to be more altruistic!

Underlying it is a theme that plays a role in many thought experiments in ethics: they appeal to game-theoretic intuition for useful social strategies, but say nothing of what these strategies are useful for.

Here, if people catch you letting a child drown in a pond while standing idly by, you're probably going to be excluded from many communities or even punished. And this schema occurs very often: unwilling organ donors, trolley problems, and violinists.

Bottom line: Don't use the drowning child argument to argue for effective altruism.

Comment by niplav on What will GPT-4 be incapable of? · 2021-04-06T21:11:33.281Z · LW · GW

I'd be surprised if it could do 5 or 6-digit integer multiplication with >90% accuracy. I expect it to be pretty good at addition.

Comment by niplav on Procedural Knowledge Gaps · 2021-04-05T16:42:47.273Z · LW · GW

While this comment might point towards a real phenomenon, it's phrased in a way I read as passive-aggressive. Tentatively weakly downvoted.

Comment by niplav on Open and Welcome Thread - April 2021 · 2021-04-04T19:25:54.257Z · LW · GW

When forecasting, you can be well-calibrated or badly calibrated (well calibrated if e.g. 90% of your 90% forecasts come true). This can be true on smaller ranges: you can be well-calibrated from 50% to 60% if your 50%/51%/52%/…/60% forecasts are each well calibrated.

But, for most forecasters, there must be a resolution at which their forecasts are pretty much randomly calibrated. If this is e.g. at the 10% level, then they are pretty much taking random guesses within the specific 10% interval around their probability (they forecast 20%, but they could just as well forecast 25% or 15%, because they're simply not calibrated more finely than that).

I assume there is a name for this concept, and that there's a way to compute it from a set of forecasts and resolutions, but I haven't stumbled on it yet. So, what is it?
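Lacking the standard name, one way to probe this empirically is a binned reliability table: group forecasts into probability bins, then compare the mean forecast in each bin with the observed frequency, shrinking the bin width until the two stop tracking each other. A rough sketch (function name and defaults are my own, not a standard API):

```python
def calibration_by_bin(forecasts, outcomes, bin_width=0.1):
    """Group forecasts into probability bins and compare the mean
    forecast in each bin with the observed resolution frequency.

    forecasts: probabilities in [0, 1]; outcomes: 0/1 resolutions.
    Returns (bin_lo, bin_hi, mean_forecast, observed_freq, n) per bin.
    """
    n_bins = round(1 / bin_width)
    bins = {}
    for p, o in zip(forecasts, outcomes):
        i = min(int(p / bin_width), n_bins - 1)  # put p == 1.0 in the last bin
        bins.setdefault(i, []).append((p, o))
    report = []
    for i in sorted(bins):
        pairs = bins[i]
        mean_p = sum(p for p, _ in pairs) / len(pairs)
        freq = sum(o for _, o in pairs) / len(pairs)
        report.append((i * bin_width, (i + 1) * bin_width, mean_p, freq, len(pairs)))
    return report
```

At a bin width where `mean_forecast` and `observed_freq` no longer correlate across bins, the forecaster is plausibly "randomly calibrated" in the sense above (with the caveat that small per-bin sample sizes make this noisy).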

Comment by niplav on niplav's Shortform · 2021-04-03T18:45:16.008Z · LW · GW

After nearly half a year and a lot of procrastination, I fixed (though definitely didn't finish) my post on range and forecasting accuracy.

Definitely still rough around the edges, I will hopefully fix the charts, re-analyze the data and add a limitations section.

Comment by niplav on What specific decision, event, or action significantly improved an aspect of your life in a value-congruent way? How? · 2021-04-01T19:20:23.383Z · LW · GW

Mostly learning about things, in a "oh, this thing exists! how great!" way. I have detailed some examples (and some failures) here, most notably:

  • starting to take Melatonin
  • meditating a lot
  • stopping to bite my nails
  • (not in the post) becoming much better at dealing with other people as a result of grokking the typical mind fallacy & how unwell people are most of the time
  • (not in the post) going to a not-home place to do work and being much more productive (measured & found a relatively strong correlation between going outside the house and productivity)
  • (not in the post) discovering that I am, in fact, an agent, and can invest money and do something about my appearance and improve it
  • (not in the post) deciding to stop martial arts and starting to exercise at home as a result of looking at what I value in sport (trying to clearly look at my values and see whether doing X exercise is worth the money/commute).
Comment by niplav on romeostevensit's Shortform · 2021-03-13T11:28:44.879Z · LW · GW

The Trial by Kafka (intransparent information processing by institutions).

Comment by niplav on romeostevensit's Shortform · 2021-03-12T23:18:15.080Z · LW · GW

The antimemetics division? Or are you thinking of something different?

Comment by niplav on What (feasible) augmented senses would be useful or interesting? · 2021-03-06T16:32:03.019Z · LW · GW

Imagine an ultra-intelligent tribe of congenitally blind extraterrestrials. Their ignorance of vision and visual concepts is not explicitly represented in their conceptual scheme. To members of this hypothetical species, visual experiences wouldn’t be information-bearing any more than a chaotic drug-induced eruption of bat-like echolocatory experiences would be information-bearing to us. Such modes of experience have never been recruited to play a sensory or signaling function. At any rate, some time during the history of this imaginary species, one of the tribe discovers a drug that alters his neurochemistry. The drug doesn’t just distort his normal senses and sense of self. It triggers what we would call visual experiences: vivid, chaotic in texture and weirder than anything the drug-taker had ever imagined. What can the drug-intoxicated subject do to communicate his disturbing new categories of experiences to his tribe’s scientific elite? If he simply says that the experiences are “ineffable”, then the sceptics will scorn such mysticism and obscurantism. If he speaks metaphorically, and expresses himself using words from the conceptual scheme grounded in the dominant sensory modality of his species, then he’ll probably babble delirious nonsense. Perhaps he’ll start talking about messages from the gods or whatever. Critically, the drug user lacks the necessary primitive terms to communicate his experiences, let alone a theoretical understanding of what’s happening. Perhaps he can attempt to construct a rudimentary private language. Yet its terms lack public “criteria of use”, so his tribe’s quasi-Wittgensteinian philosophers will invoke the (Anti-)Private Language Argument to explain why it’s meaningless. Understandably, the knowledge elite are unimpressed by the drug-disturbed user’s claims of making a profound discovery. 
They can exhaustively model the behaviour of the stuff of the physical world with the equations of their scientific theories, and their formal models of mind are computationally adequate. The drug taker sounds psychotic. Yet from our perspective, we can say the alien psychonaut has indeed stumbled on a profound discovery, even though he has scarcely glimpsed its implications: the raw materials of what we would call the visual world in all its glory.

Interview with David Pearce with the H+ magazine, 2009

Or, in other words: I hear your "new colors" and raise you new qualia varieties that are as different from sight and taste as sight and taste are from each other.

Comment by niplav on abramdemski's Shortform · 2021-02-07T00:37:27.207Z · LW · GW

What are your goals?

Generally, I try to avoid any subreddits with more than a million subscribers (even 100k is noticeably bad).

Some personal recommendations (although I believe discovering reddit was net negative for my life in the long term):

Typical reddit humor: /r/breadstapledtotrees, /r/chairsunderwater (although the jokes get old quickly). /r/bossfight is nice, I enjoy it.

I highly recommend /r/vxjunkies. I also like /r/surrealmemes.

/r/sorceryofthespectacle, /r/shruglifesyndicate for aesthetic incoherent doomer philosophy based on situationism. /r/criticaltheory for less incoherent, but also less interesting discussions of critical theory.

/r/thalassophobia is great if you don't have it (in a similar vein, /r/thedepthsbelow). I also like /r/fifthworldpics and sometimes /r/fearme, though the latter is highly NSFW at this point. /r/vagabond is fascinating.

/r/streamentry for high-quality meditation discussion, and /r/mlscaling for discussions about the scaling of machine learning networks. Generally, the subreddits gwern posts in have high-quality links (though often little discussion). I also love /r/Conlanging, /r/neography and /r/vexillology.

I also enjoy /r/negativeutilitarians. /r/jazz sometimes gives good music recommendations. Strongly recommend /r/museum.

/r/mildlyinteresting totally delivers, and /r/notinteresting is sometimes pretty funny.

And, of course, /r/slatestarcodex and /r/changemyview. /r/thelastpsychiatrist sometimes has very good discussions, but I don't read it often. /r/askhistorians has the reputation of containing accurate and comprehensive information, though I haven't read much of it.

General recommendations: Many subreddits have good sidebars and wikis, and it's often useful to read them (e.g. the wiki of /r/bodyweightfitness or /r/streamentry), though not always. I strongly recommend using the old layout together with the Reddit Enhancement Suite: the old layout loads faster, and RES lets you tag people, expand linked images/videos in-place, and much more. Top posts of all time are great on good subs, and memes on all the others – still great to get a feel for the community.

Comment by niplav on Is the influence of money pervasive, even on LessWrong? · 2021-02-02T21:46:09.406Z · LW · GW

Meta: I think "Where does LessWrong stand financially" is a very good question, and I never knew I wanted a clear answer to it until now (my model always was something like "It gets money from CFAR & MIRI, not sure how that is organized, off the top of my head"). However, the way you phrased the question is confusing to me, and you go into several tangents along the way, which causes me to only understand part of what you're asking about.

Comment by niplav on Vaccinated Socializing · 2021-02-02T10:15:15.380Z · LW · GW

I think this might be missing a dimension of fairness considerations:

  1. People who were least at risk (broadly: the young) from COVID-19 were asked to give up socializing & income during the lockdowns for the people who are most at risk.
  2. People who are most at risk (broadly: the old) from COVID-19 get vaccinated first.
  3. Giving people who get vaccinated early an advantage would signal to the people who were least at risk that they incurred two costs (lockdown & late vaccination) and received no tangible benefits, which might damage future willingness to cooperate.
Comment by niplav on niplav's Shortform · 2021-01-31T14:32:43.012Z · LW · GW

I have the impression that very smart people have many more ideas than they can write down & explain adequately, and that these kinds of ideas especially get developed in & forgotten after conversations among smart people.

For some people, their comparative advantage could be to sit in conversations between smart people, record & take notes, and summarize the outcome of the conversations (or, alternatively, just interview smart people, let them explain their ideas & ask for feedback, and then write them down so that others understand them).

Comment by niplav on How is Cryo different from Pascal's Mugging? · 2021-01-27T17:16:30.599Z · LW · GW

There is a lot of related discussion on this post.

Comment by niplav on How is Cryo different from Pascal's Mugging? · 2021-01-27T17:03:51.142Z · LW · GW

I examined point 2 in this section of my cost-benefit analysis. I collect estimates of revival probability here (I subjectively judge these two metaculus estimates to be most trustworthy on the forecast, due to the track-record of performance).

As for point 3: Functional fixedness in assuming dependencies might make estimates too pessimistic. Think about the Manhattan or Apollo project: doing a linked conditional probabilities estimate would have put the probabilities of these two succeeding at far far lower than 1%, yet they still happened (this is a very high-compression summary of the linked text). Here is EY talking about that kind of argument, and why it might sometimes fail.