Posts

Open & Welcome Thread – April 2021 2021-04-04T19:25:09.049Z
An Exploratory Toy AI Takeoff Model 2021-01-13T18:13:14.237Z
Range and Forecasting Accuracy 2020-11-16T13:06:45.184Z
Considerations on Cryonics 2020-08-03T17:30:42.307Z
"Do Nothing" utility function, 3½ years later? 2020-07-20T11:09:36.946Z
niplav's Shortform 2020-06-20T21:15:06.105Z

Comments

Comment by niplav on Reframing Impact · 2021-04-19T20:56:06.464Z · LW · GW

If the question about accessibility hasn't been resolved, I think Ramana Kumar was talking about making the text readable for people with visual impairments.

Comment by niplav on On Sleep Procrastination: Going To Bed At A Reasonable Hour · 2021-04-17T09:27:32.195Z · LW · GW

Seconding the recommendation. iamef, maybe you want to play around with the dose; the usual dose is too high, and maybe you could take it ~3-4 hours before going to bed. (If you've already tried that, please ignore this).

Comment by niplav on deluks917's Shortform · 2021-04-14T18:53:12.966Z · LW · GW

I remember Yudkowsky asking for a realistic explanation for why the Empire in Star Wars is stuck in an equilibrium where it builds destroyable gigantic weapons.

Comment by niplav on What weird beliefs do you have? · 2021-04-14T16:46:49.547Z · LW · GW

Does this include extreme examples, such as pieces of information that permanently damage your mind when you're exposed to them, or antimemes?

Have you made any changes to your personal life because of this?

Comment by niplav on Auctioning Off the Top Slot in Your Reading List · 2021-04-14T07:40:26.444Z · LW · GW

I predict that this will not become popular, mostly because of the ick-factor around monetary transactions between individuals that most people have.

However, the inverse strategy seems just as interesting (and more likely to work) to me.

Comment by niplav on What if AGI is near? · 2021-04-14T07:33:50.991Z · LW · GW

I want to clarify that "AGI go foom!" is not really concerned with the nearness of the advent of AGI, but with whether AGIs have a discontinuity that results in an acceleration of the development of their intelligence over time.

Comment by niplav on Book Review: The Secret Of Our Success · 2021-04-13T19:46:53.379Z · LW · GW

For completeness, here's the prediction of the naive theory, namely that intelligence is instrumentally useful and evolved because solving problems helps you survive:

Comment by niplav on niplav's Shortform · 2021-04-13T12:56:17.826Z · LW · GW

Isn't life then a quine running on physics itself as a substrate?

I hadn't considered thinking of quines as two-place, but that's obvious in retrospect.

Comment by niplav on Why you should consider buying Bitcoin right now (Jan 2015) if you have high risk tolerance · 2021-04-12T20:49:09.234Z · LW · GW

Let the record show that 6 years later, the price of bitcoin has increased 250-fold over the price at the time at which this article was written.

Comment by niplav on niplav's Shortform · 2021-04-11T21:23:14.430Z · LW · GW

Life is quined matter.

Comment by niplav on niplav's Shortform · 2021-04-10T09:54:15.567Z · LW · GW

Right, my gripe with the argument is that these first two assumptions are almost always unstated, and most of the time when people use the argument, they "trick" people into agreeing with assumption one.

(for the record, I think the first premise is true)

Comment by niplav on niplav's Shortform · 2021-04-09T21:31:30.104Z · LW · GW

The child-in-a-pond thought experiment is weird, because people use it in ways it clearly doesn't work for (especially in arguing for effective altruism).

For example, it observes that you would be altruistic in a near situation with the drowning child, and then assumes that you ought to care about people far away as much as about people near you. People usually don't really argue against this second step, but they very much could. But the thought experiment offers no justification for that extension of the circle of moral concern; it just assumes it.

Similarly, it says nothing about how effectively you ought to use your resources, only that you probably ought to be more altruistic in a stranger-encompassing way.

But not only does this thought experiment not argue for the things people usually use it for, it's also not good for arguing that you ought to be more altruistic!

Underlying it is a theme that plays a role in many thought experiments in ethics: they appeal to game-theoretic intuition for useful social strategies, but say nothing of what these strategies are useful for.

Here, if people catch you letting a child drown in a pond while standing idly by, you're probably going to be excluded from many communities or even punished. And this schema occurs very often! Unwilling organ donors, trolley problems, and violinists.

Bottom line: Don't use the drowning child argument to argue for effective altruism.

Comment by niplav on What will GPT-4 be incapable of? · 2021-04-06T21:11:33.281Z · LW · GW

I'd be surprised if it could do 5 or 6-digit integer multiplication with >90% accuracy. I expect it to be pretty good at addition.

Comment by niplav on Procedural Knowledge Gaps · 2021-04-05T16:42:47.273Z · LW · GW

While this comment might point towards a real phenomenon, it's phrased in a way I read as passive-aggressive. Tentatively weakly downvoted.

Comment by niplav on Open and Welcome Thread - April 2021 · 2021-04-04T19:25:54.257Z · LW · GW

When forecasting, you can be well calibrated or badly calibrated (well calibrated if e.g. 90% of your 90% forecasts come true). This can also hold on smaller ranges: you can be well calibrated from 50% to 60% if your 50%/51%/52%/…/60% forecasts are each well calibrated.

But for most forecasters, there must be a resolution at which their forecasts are pretty much randomly calibrated. If this is e.g. at the 10% level, then they are pretty much taking random guesses from the specific 10% interval around their probability (they forecast 20%, but they could forecast 25% or 15% just as well, because they're just not better calibrated).

I assume there is a name for this concept, and that there's a way to compute it from a set of forecasts and resolutions, but I haven't stumbled on it yet. So, what is it?
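
For concreteness, here is a minimal sketch (in Python, with simulated data; all names and the bucket width are mine) of the raw bucketed-calibration computation such a measure would presumably be built on:

```python
import numpy as np

def calibration_by_bucket(forecasts, outcomes, bucket_width=0.1):
    """Group forecasts into probability buckets and compare the mean
    forecast in each bucket with the fraction of outcomes that came true."""
    forecasts = np.asarray(forecasts, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.arange(0.0, 1.0 + bucket_width, bucket_width)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (forecasts >= lo) & (forecasts < hi)
        if mask.sum() == 0:
            continue
        # (bucket start, bucket end, mean forecast, observed frequency, count)
        rows.append((lo, hi, forecasts[mask].mean(), outcomes[mask].mean(), int(mask.sum())))
    return rows

# Toy usage: a perfectly calibrated forecaster on random questions.
rng = np.random.default_rng(0)
p = rng.uniform(size=1000)
o = rng.uniform(size=1000) < p
for lo, hi, mean_p, freq, n in calibration_by_bucket(p, o):
    print(f"[{lo:.1f}, {hi:.1f}): forecast {mean_p:.2f}, frequency {freq:.2f}, n={n}")
```

This only gives calibration per bucket; the thing I'm asking about would be some statistic on top of it, telling you at which bucket width the per-bucket deviations stop being distinguishable from noise.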

Comment by niplav on niplav's Shortform · 2021-04-03T18:45:16.008Z · LW · GW

After nearly half a year and a lot of procrastination, I fixed (though definitely didn't finish) my post on range and forecasting accuracy.

It's definitely still rough around the edges; I will hopefully fix the charts, re-analyze the data, and add a limitations section.

Comment by niplav on What specific decision, event, or action significantly improved an aspect of your life in a value-congruent way? How? · 2021-04-01T19:20:23.383Z · LW · GW

Mostly learning about things, in an "oh, this thing exists! how great!" way. I have detailed some examples (and some failures) here, most notably:

  • starting to take Melatonin
  • meditating a lot
  • stopping biting my nails
  • (not in the post) becoming much better at dealing with other people as a result of grokking the typical mind fallacy & how unwell people are most of the time
  • (not in the post) going to a not-home place to do work and being much more productive (I measured & found a relatively strong correlation between going outside the house and productivity)
  • (not in the post) discovering that I am, in fact, an agent, and can invest money and do something about my appearance and improve it
  • (not in the post) deciding to stop martial arts and start exercising at home as a result of looking at what I value in sport (trying to clearly look at my values and see whether doing X exercise is worth the money/commute).

Comment by niplav on romeostevensit's Shortform · 2021-03-13T11:28:44.879Z · LW · GW

The Trial by Kafka (opaque information processing by institutions).

Comment by niplav on romeostevensit's Shortform · 2021-03-12T23:18:15.080Z · LW · GW

The Antimemetics Division? Or are you thinking of something different?

Comment by niplav on What (feasible) augmented senses would be useful or interesting? · 2021-03-06T16:32:03.019Z · LW · GW

Imagine an ultra-intelligent tribe of congenitally blind extraterrestrials. Their ignorance of vision and visual concepts is not explicitly represented in their conceptual scheme. To members of this hypothetical species, visual experiences wouldn’t be information-bearing any more than a chaotic drug-induced eruption of bat-like echolocatory experiences would be information-bearing to us. Such modes of experience have never been recruited to play a sensory or signaling function. At any rate, some time during the history of this imaginary species, one of the tribe discovers a drug that alters his neurochemistry. The drug doesn’t just distort his normal senses and sense of self. It triggers what we would call visual experiences: vivid, chaotic in texture and weirder than anything the drug-taker had ever imagined. What can the drug-intoxicated subject do to communicate his disturbing new categories of experiences to his tribe’s scientific elite? If he simply says that the experiences are “ineffable”, then the sceptics will scorn such mysticism and obscurantism. If he speaks metaphorically, and expresses himself using words from the conceptual scheme grounded in the dominant sensory modality of his species, then he’ll probably babble delirious nonsense. Perhaps he’ll start talking about messages from the gods or whatever. Critically, the drug user lacks the necessary primitive terms to communicate his experiences, let alone a theoretical understanding of what’s happening. Perhaps he can attempt to construct a rudimentary private language. Yet its terms lack public “criteria of use”, so his tribe’s quasi-Wittgensteinian philosophers will invoke the (Anti-)Private Language Argument to explain why it’s meaningless. Understandably, the knowledge elite are unimpressed by the drug-disturbed user’s claims of making a profound discovery. They can exhaustively model the behaviour of the stuff of the physical world with the equations of their scientific theories, and their formal models of mind are computationally adequate. The drug taker sounds psychotic. Yet from our perspective, we can say the alien psychonaut has indeed stumbled on a profound discovery, even though he has scarcely glimpsed its implications: the raw materials of what we would call the visual world in all its glory.

Interview with David Pearce in H+ Magazine, 2009

Or, in other words: I hear your "new colors" and raise you new qualia varieties that are as different from sight and taste as sight and taste are from each other.

Comment by niplav on abramdemski's Shortform · 2021-02-07T00:37:27.207Z · LW · GW

What are your goals?

Generally, I try to avoid any subreddits with more than a million subscribers (even 100k is noticeably bad).

Some personal recommendations (although I believe discovering reddit was net negative for my life in the long term):

Typical reddit humor: /r/breadstapledtotrees, /r/chairsunderwater (although the jokes get old quickly). /r/bossfight is nice, I enjoy it.

I highly recommend /r/vxjunkies. I also like /r/surrealmemes.

/r/sorceryofthespectacle, /r/shruglifesyndicate for aesthetic incoherent doomer philosophy based on situationism. /r/criticaltheory for less incoherent, but also less interesting discussions of critical theory.

/r/thalassophobia is great if you don't have it (in a similar vein, /r/thedepthsbelow). I also like /r/fifthworldpics and sometimes /r/fearme, but the latter is highly NSFW at this point. /r/vagabond is fascinating.

/r/streamentry for high-quality meditation discussion, and /r/mlscaling for discussions about the scaling of machine learning networks. Generally, the subreddits gwern posts in have high-quality links (though often little discussion). I also love /r/Conlanging, /r/neography and /r/vexillology.

I also enjoy /r/negativeutilitarians. /r/jazz sometimes gives good music recommendations. Strongly recommend /r/museum.

/r/mildlyinteresting totally delivers, /r/notinteresting is sometimes pretty funny.

And, of course, /r/slatestarcodex and /r/changemyview. /r/thelastpsychiatrist sometimes has very good discussions, but I don't read it often. /r/askhistorians has the reputation of containing accurate and comprehensive information, though I haven't read much of it.

General recommendations: Many subreddits have good sidebars and wikis, and it's often useful to read them (e. g. the wiki of /r/bodyweightfitness or /r/streamentry), but not always. I strongly recommend using old.reddit.com, together with the Reddit Enhancement Suite. The old layout loads faster, and RES lets you tag people, expand linked images/videos in-place, and much more. Top posts of all time are great on good subs, and memes on all the others. Still great to get a feel for the community.

Comment by niplav on Is the influence of money pervasive, even on LessWrong? · 2021-02-02T21:46:09.406Z · LW · GW

Meta: I think "Where does LessWrong stand financially" is a very good question, and I never knew I wanted a clear answer to it until now (my model always was something like "It gets money from CFAR & MIRI, not sure how that is organized otoh"). However, the way you phrased the question is confusing to me, and you go into several tangents along the way, which causes me to only understand part of what you're asking about.

Comment by niplav on Vaccinated Socializing · 2021-02-02T10:15:15.380Z · LW · GW

I think this might be missing a dimension of fairness considerations:

  1. People who were least at risk (broadly: the young) from COVID-19 were asked to give up socializing & income during the lockdowns for the people who are most at risk.
  2. People who are most at risk (broadly: the old) from COVID-19 get vaccinated first.
  3. Giving people who get vaccinated early an advantage would signal to people who were least at risk that they incurred two costs (lockdown & late vaccination) and received no tangible benefits, which might damage future willingness to cooperate.

Comment by niplav on niplav's Shortform · 2021-01-31T14:32:43.012Z · LW · GW

I have the impression that very smart people have many more ideas than they can write down & explain adequately, and that these kinds of ideas especially get developed in & forgotten after conversations among smart people.

For some people, their comparative advantage could be to sit in conversations between smart people, record & take notes, and summarize the outcome of the conversations (or, alternatively, just interview smart people, let them explain their ideas & ask for feedback, and then write them down so that others understand them).

Comment by niplav on How is Cryo different from Pascal's Mugging? · 2021-01-27T17:16:30.599Z · LW · GW

There is a lot of related discussion on this post.

Comment by niplav on How is Cryo different from Pascal's Mugging? · 2021-01-27T17:03:51.142Z · LW · GW

I examined point 2 in this section of my cost-benefit analysis. I collect estimates of revival probability here (I subjectively judge these two Metaculus estimates to be the most trustworthy, due to their track record of performance).

As for point 3: Functional fixedness in assuming dependencies might make estimates too pessimistic. Think about the Manhattan or Apollo project: doing a linked conditional probabilities estimate would have put the probabilities of these two succeeding at far far lower than 1%, yet they still happened (this is a very high-compression summary of the linked text). Here is EY talking about that kind of argument, and why it might sometimes fail.
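
As a toy illustration of how linked conditional estimates compress toward zero (all numbers made up): a plan decomposed into 15 "necessary" steps, each judged 70% likely conditional on the previous ones succeeding, comes out at $0.7^{15} \approx 0.005$ overall, i.e. about half a percent, even though no individual step looks implausible.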

Comment by niplav on [deleted post] 2021-01-26T00:17:43.602Z

Ironically, the explanation of your archives mirror is now also inaccessible (fortunately, it is still available on archive.org).

Comment by niplav on Open & Welcome Thread - January 2021 · 2021-01-25T20:14:52.142Z · LW · GW

I sent you a PM.

Comment by niplav on An Exploratory Toy AI Takeoff Model · 2021-01-16T16:11:18.938Z · LW · GW

I agree that the background growth could be higher. I have two calculations with different background growth speeds in this section, but the model works with unitless timesteps.

Comment by niplav on mike_hawke's Shortform · 2021-01-08T19:43:38.539Z · LW · GW

The post talks about this book; I assume the first paragraph is supposed to be a quote.

Comment by niplav on [deleted post] 2021-01-08T15:43:36.426Z

I very much dislike the second sentence in this tag: "If you do something to feel good about helping people, or even to be a better person in some spiritual sense, it isn't truly altruism."

First of all, it's cryptonormative. Second, it leads to the old "people only care about their happiness" model that explains everything. Third (but this is a weak & contextualizing point), it is related to the common perception that egoistic actions are usually bad.

I have replaced the second sentence with "However, non-altruistically motivated actions can still be good (e.g. people pursuing non-rival goods), and altruistically motivated actions can still be bad (e.g. people being mistaken about what is good).", and added that altruism is a motivation rather than a set of actions, but this is rather preliminary. I would be equally fine with the second sentence being deleted altogether.

Comment by niplav on Eli's shortform feed · 2021-01-07T09:12:36.404Z · LW · GW

I think you are thinking of “AI Alignment: Why It’s Hard, and Where to Start”:

The next problem is unforeseen instantiation: you can’t think fast enough to search the whole space of possibilities. At an early singularity summit, Jürgen Schmidhuber, who did some of the pioneering work on self-modifying agents that preserve their own utility functions with his Gödel machine, also solved the friendly AI problem. Yes, he came up with the one true utility function that is all you need to program into AGIs!

(For God’s sake, don’t try doing this yourselves. Everyone does it. They all come up with different utility functions. It’s always horrible.)

His one true utility function was “increasing the compression of environmental data.” Because science increases the compression of environmental data: if you understand science better, you can better compress what you see in the environment. Art, according to him, also involves compressing the environment better. I went up in Q&A and said, “Yes, science does let you compress the environment better, but you know what really maxes out your utility function? Building something that encrypts streams of 1s and 0s using a cryptographic key, and then reveals the cryptographic key to you.”

He put up a utility function; that was the maximum. All of a sudden, the cryptographic key is revealed and what you thought was a long stream of random-looking 1s and 0s has been compressed down to a single stream of 1s.

There's also a mention of that method in this post.

Comment by niplav on 1960: The Year The Singularity Was Cancelled · 2021-01-04T21:42:45.684Z · LW · GW

I believe this is an important gears-level addition to posts like hyperbolic growth, long-term growth as a sequence of exponential modes, and an old Yudkowsky post I am unable to find at the moment.

I don't know how closely these texts are connected, but Modeling the Human Trajectory picks up one year later, creating two technical models: one stochastically fitting and extrapolating GDP growth; the other providing a deterministic outlook, considering labor, capital, human capital, technology and production (and, in one case, natural resources). Roodman arrives at somewhat similar conclusions, too: The industrial revolution was a very big deal, and something happened around 1960 that has slowed the previous strong growth (as far as I remember, it doesn't provide an explicit reason for this).

A point in this post that I found especially interesting was the speculation about the Black Plague being the spark that ignited the industrial revolution. The reason given is a good example of slack catapulting a system out of a local maximum, in this case a Malthusian Europe into the industrial revolution.

Interestingly, both this text and Roodman don't consider individual intelligence as an important factor in global productivity. Despite the well-known Flynn effect, which has mostly continued since 1960 (caveat caveat), no extraordinary change in global productivity has occurred. This makes some sense: a rise of less than 1 standard deviation might be appreciable, but not groundbreaking. But the relation to artificial intelligence makes it interesting: the purported (economic) advantage of AI systems is that they can copy themselves, thereby making population growth not the most constraining variable in this growth model. I don't believe this is particularly anticipation-constraining, though: this could mean that either the post-singularity ("singularity") world is multipolar, or the singleton controlling everything has created many sub-agents.

I appreciate this post. I have referenced it a couple of times in conversations. Together with the investigation by OpenPhil, it makes a solid case that the gods of straight lines have decided to throw us into the most important century of history. May the goddess of everything else be merciful to us.

Comment by niplav on Open & Welcome Thread - January 2021 · 2021-01-04T19:36:25.052Z · LW · GW

Hello everybody!

I have done some commenting & posting around here, but I think a proper introduction is never bad.

I was a Marxist for a few years, then I fell out of it, discovered SSC and thereby LW three years ago, and started reading the Sequences and the Codex (yes, you now name them together). I very much enjoy the discussions around here, and the fact that LW got resurrected.

I sometimes write things for my personal website about forecasting, obscure programming languages and [REDACTED]. I think I might start cross-posting a bit more (the two last posts on my profile are such cross-posts).

I endorse spending my time reading, meditating, and [REDACTED], but my motivational system often decides to waste time on the internet instead.

Comment by niplav on Daniel Kokotajlo's Shortform · 2021-01-03T22:15:11.200Z · LW · GW

Maybe we give the LessWrong team a magic Karma Wand, and they take all the karma that the anonymous reviews got and bestow it (plus or minus some random noise) to the actual authors.

Wouldn't this achieve the opposite of what we want, disincentivize reviews? Unless coupled with paying people to write reviews, this would remove the remaining incentive.

I'd prefer going in the opposite direction, making reviews more visible (giving them a more prominent spot on the front page/on allPosts, so that more people vote on them/interact with them). At the moment, they still feel a bit disconnected from the rest of the site.

Comment by niplav on Eli's shortform feed · 2021-01-03T22:02:54.514Z · LW · GW

Yes, it definitely does–you just created the resource I will link people to. Thank you!

Especially the third paragraph is cruxy. As far as I can tell, there are many people who have (to some extent) defused this propensity to get triggered for themselves. At least for me, LW was a resource to achieve that.

Comment by niplav on What failure looks like · 2020-12-30T16:12:26.065Z · LW · GW

I read this post only half a year ago after seeing it being referenced in several different places, mostly as a newer, better alternative to the existing FOOM-type failure scenarios. I also didn't follow the comments on this post when it came out.

This post makes a lot of sense in Christiano's worldview, where we have a relatively continuous, somewhat multipolar takeoff which to a large extent inherits the problems of our current world. This especially applies to part I: we already have many different instances of scenarios where humans follow measured incentives and produce unintended outcomes. Goodhart's law is a thing. Part I ties in especially well with Wei Dai's concern that

AI-powered memetic warfare makes all humans effectively insane.

While I haven't done research on this, I have a medium-strength intuition that this is already happening. Many people I know are at least somewhat addicted to the internet, having lost a lot of attention due to having their motivational system hijacked, which is worrying because Attention is your scarcest resource. I believe investigating the extent to which attention has deteriorated (or has been monopolized by different actors) would be valuable, as well as thinking about which incentives will appear when AI technologies become more powerful (Daniel Kokotajlo has been writing especially interesting essays on this kind of problem).

As for part II, I'm a bit more skeptical. I would summarize "going out with a bang" as a "collective treacherous turn", which would demand somewhat high levels of coordination between agents of various different levels of intelligence (agents would be incentivized to turn early because of first-mover advantages, but this would increase the probability of humans doing something about it), as well as agents knowing very early that they want to perform a treacherous turn toward influence-seeking behavior. I'd like to think about how the frequency of premature treacherous turns relates to the intelligence of agents. Would that be continuous or discontinuous? Unrelated to Christiano's post, this seems like an important consideration (maybe work has gone into this and I just haven't seen it yet).

Still, part II holds up pretty well, especially since we can expect AI systems to cooperate effectively via merging utility functions, and we can see systems in the real world that fail regularly, but not much is being done about them (especially social structures that sort-of work).

I have referenced this post numerous times, mostly in connection with a short explanation of how I think current attention-grabbing systems are a variant of what is described in part I. I think it's pretty good, and someone (not me) should flesh the idea out a bit more, perhaps connecting it to existing systems (I remember the story about the recommender system manipulating its users into political extremism to increase viewing time, but I can't find a link right now).

The one thing I would like to see improved is at least some links to prior existing work. Christiano writes that

(None of the concerns in this post are novel.)

but it isn't clear whether he is just summarizing things he has thought about, which are implicit knowledge in his social web, or whether he is summarizing existing texts. I think part I would have benefitted from a link to Goodhart's law (or an explanation why it is something different).

Comment by niplav on Eli's shortform feed · 2020-12-30T14:40:10.885Z · LW · GW

(This question is only related to a small point)

You write that one possible foundational strategy could be to "radically detraumatize large fractions of the population". Do you believe that

  1. A large part of the population is traumatized
  2. That trauma is reversible
  3. Removing/reversing that trauma would improve the development of humanity drastically?

If yes, why? I'm happy to get a 1k page PDF thrown at me.

I know that this has been a relatively popular talking point on twitter, but without a canonical resource, and I also haven't seen it discussed on LW.

Comment by niplav on Give it a google · 2020-12-29T08:41:12.459Z · LW · GW

Another datapoint: After googling "How to stop biting my nails", reading a few of the results and trying out one of the instructions, I stopped biting my nails.

Comment by niplav on niplav's Shortform · 2020-12-22T11:51:05.981Z · LW · GW

epistemic status: a babble

While listening to this podcast yesterday, I was startled by the section on the choice of Turing machine in Solomonoff induction, and the fact that it seems arbitrary. (I had been thinking about similar questions, but in relation to Dust Theory).

One approach could be to take the set of all possible Turing machines $\mathcal{T}$, and for each pair of Turing machines $t_1, t_2 \in \mathcal{T}$ find the shortest/fastest program $p_{1 \to 2}$ so that $p_{1 \to 2}$ emulates $t_2$ on $t_1$. Then, if $p_{1 \to 2}$ is shorter/faster than $p_{2 \to 1}$, one can say that $t_2$ is simpler than $t_1$ (I don't know if this would always intuitively be true). For example, python with witch can emulate python without witch in a shorter program than vice versa (this will only make sense if you've listened to the podcast).

Maybe this would give a total order over Turing machines with a maximum. One could then take the simplest Turing machine and use that one for Solomonoff induction.
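
Spelled out (the notation and the direction of the comparison are my reconstruction): $t_2 \preceq t_1 \;:\Leftrightarrow\; |p_{1 \to 2}| \le |p_{2 \to 1}|$, reading $\preceq$ as "is at most as complex as". For this to yield a total order with an extremal element, the relation would additionally have to be total, transitive and antisymmetric, which doesn't seem obvious.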

Comment by niplav on niplav's Shortform · 2020-12-22T11:50:30.448Z · LW · GW

epistemic status: a babble

While listening to this podcast yesterday, I was startled by the section on the choice of Turing machine in Solomonoff induction, and the fact that it seems arbitrary. (I had been thinking about similar questions, but in relation to Dust Theory).

One approach could be to take the set of all possible Turing machines $\mathcal{T}$, and for each pair of Turing machines $t_1, t_2 \in \mathcal{T}$ find the shortest/fastest program $p_{1 \to 2}$ so that $p_{1 \to 2}$ emulates $t_2$ on $t_1$. Then, if $p_{1 \to 2}$ is shorter/faster than $p_{2 \to 1}$, one can say that $t_2$ is simpler than $t_1$ (I don't know if this would always intuitively be true).

Maybe this would give a total order over Turing machines, with one or more maxima. One could then take the simplest Turing machine and use that one for Solomonoff induction.

Comment by niplav on Gauging the conscious experience of LessWrong · 2020-12-20T12:17:30.235Z · LW · GW

As for mental imagery: I can create visual objects in my mind, rotate them, move them, but it's hard to give them a clear, persistent color.

Faces are super interesting: It seems completely random to me which faces I can create a mental image of and of which faces I can't: people I last saw years ago are sometimes easy, and very good friends are sometimes really hard.

Comment by niplav on Open & Welcome Thread - December 2020 · 2020-12-19T21:24:03.904Z · LW · GW

Right. My intuition was something like "MDMA, but constantly" (which isn't sickening, at least to me). I definitely get the "sickening"/"headache" aspect of social media.

Comment by niplav on Open & Welcome Thread - December 2020 · 2020-12-19T16:56:06.173Z · LW · GW

I suspect that the pleasure received from social media & energy-dense food is far lower than what is achievable through wireheading (you could e.g. just stimulate the areas activated when consuming social media/energy dense food), and there are far more pleasurable activities than surfing reddit & drinking mountain dew, e.g. an orgasm.

I agree with regulatory hurdles. I am not sure about the profitability – my current model of humans predicts that they 1. mostly act as adaptation-executers, and 2. if they act more agentially, they often optimize for things other than happiness (how many people have read basic happiness research? how many write a gratitude journal?).

Neuralink seems more focused on information exchange, which seems a way harder challenge than just stimulation, but perhaps that will open up new avenues.

Comment by niplav on Range and Forecasting Accuracy · 2020-12-19T13:17:13.558Z · LW · GW

Okay, I finally had some time to look at your feedback.

The problem is, as you said, my attempt to bucket predictions together by range. This removes data, and makes my analysis much more complicated than it needs to be.

I thought that bucketing was a good idea because I was not sure how meaningful a Brier score on only one forecast & outcome pair is (I didn't have a very clear idea of why that should be the case, and didn't question that intuition).

Let's say I have my datasets $p$ (predictions), $o$ (outcomes) and $r$ (ranges), each containing $n$ entries.

Then your analysis is calculating $\text{cor}(\text{BS}(p_i, o_i), r_i)$ over the individual forecasts $i \in \{1, \dots, n\}$. I introduced a partition variable ($k$, the number of buckets) and calculated $\text{cor}(\text{BS}(P_j, O_j), \bar{r}_j)$ over the buckets $j \in \{1, \dots, k\}$, where $P_j$, $O_j$ and $\bar{r}_j$ are the forecasts, outcomes and mean range in the $j$-th bucket.

This throws away information: if one makes $k = 1$ and puts everything into a single bucket, then one gets one Brier score (of all forecasts & outcomes) and the average of all ranges, which results in a correlation of 1 (I haven't proven that partitioning more roughly loses data monotonically, but it seems intuitively true to me).
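
A minimal sketch of the difference (in Python, with simulated data; the per-forecast Brier scoring and the bucketing-by-range scheme are stand-ins for whatever the actual analysis code does):

```python
import numpy as np

def brier(p, o):
    return (p - o) ** 2  # per-forecast Brier score

rng = np.random.default_rng(0)
n = 1000
p = rng.uniform(size=n)                       # predictions
o = (rng.uniform(size=n) < p).astype(float)   # outcomes
r = rng.exponential(scale=100.0, size=n)      # ranges (e.g. days until resolution)

# Per-forecast analysis: correlate individual Brier scores with individual ranges.
per_forecast = np.corrcoef(brier(p, o), r)[0, 1]

# Bucketed analysis: partition the forecasts into k buckets by range, then
# correlate the per-bucket mean Brier score with the per-bucket mean range.
k = 10
buckets = np.array_split(np.argsort(r), k)
bucket_brier = [brier(p[b], o[b]).mean() for b in buckets]
bucket_range = [r[b].mean() for b in buckets]
bucketed = np.corrcoef(bucket_brier, bucket_range)[0, 1]

print(per_forecast, bucketed)  # the bucketed correlation only uses k data points
```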

If I repeat your analysis, I get the results you got.

Basically, I believe my text lacks internal validity, but still has construct validity.

Starting from here, I will probably rewrite large parts of the text (and the code, maybe even in a more understandable language) and apply your analysis by removing the bucketing of data.

Comment by niplav on Do I have to trust past experience because past experience tells me that? · 2020-12-19T08:39:35.112Z · LW · GW

I guess you are talking about the problem of induction. To this, the canonical LW answer is Where Recursive Justification Hits the Bottom (whether it's convincing to you is for you to decide).

Comment by niplav on Open & Welcome Thread - December 2020 · 2020-12-19T08:20:26.414Z · LW · GW

Are there plans to import the Arbital content into LW? After the tag system & wiki, that seems like one possible next step. I continue to sometimes reference Arbital content, e. g. https://arbital.com/p/dwim/ just yesterday.

Comment by niplav on Open & Welcome Thread - December 2020 · 2020-12-18T16:47:11.877Z · LW · GW

Is there any practical reason why nobody is pursuing wireheading in humans, at least to a limited degree? (I mean actual intracranial stimulation, not novel drugs etc.). As far as I know, there were some experiments with rats in the 60s & 70s, but I haven't heard of recent research in e.g. macaques.

I know that wireheading isn't something most people seek, but

  1. It seems way easier than many other things (we already did it in rats! how hard can it be!)
  2. The reports I've read (involving human subjects whose brains were stimulated during operations) indicate that intracranial stimulation produces no tolerance development and feels really, really nice. Surely, there could be ways of creating functional humans with long-term intracranial stimulation!

I'm sort of confused why not more people have pursued this, although I have to admit I haven't researched this very much. Maybe it's a harder technical problem than I am thinking.

Comment by niplav on Sherrinford's Shortform · 2020-12-16T16:15:01.096Z · LW · GW

I believe that steelmanning has mostly been deprecated and replaced with ideological turing tests.

Comment by niplav on The LessWrong 2019 Review · 2020-12-02T18:05:00.156Z · LW · GW

The URL for the 2018 review results gives a 404. This makes sense, since it has been reserved for the 2019 review. However, I'd like to finish my read-through of the 2018 results. Where (except in the new book series) can I do that?