## Posts

[Link] First talk by CSER 2014-03-11T15:01:10.003Z

Comment by NoSuchPlace on Sam Altman and Ezra Klein on the AI Revolution · 2021-06-29T02:47:49.582Z · LW · GW

Searching his twitter, he barely seems to have mentioned GPT at all in 2020. Maybe he deleted some of his tweets?

Comment by NoSuchPlace on Sam Altman and Ezra Klein on the AI Revolution · 2021-06-28T18:08:56.405Z · LW · GW

I remember vividly reading one of his tweets last year, enthusiastically talking about how he'd started chatting with GPT-3 and it was impressing him with its intelligence.

Are you thinking of this tweet? I believe that was meant to be a joke. His actual position at the time appeared to be that GPT-3 is impressive but overhyped.

Comment by NoSuchPlace on Stupid Questions December 2016 · 2016-12-21T15:42:09.891Z · LW · GW

Thank you, I fixed it. I think the same argument shows that that question is also undefined. The real takeaway, I think, is that physics doesn't deal well with some infinities.

Comment by NoSuchPlace on Stupid Questions December 2016 · 2016-12-21T01:20:06.202Z · LW · GW

As you point out later in the thread, the light can never touch any given sphere, since no matter which one you pick there will always be another sphere in front of it to block the light. At the same time, the light beam must eventually hit something, because the centre sphere is in its way. So your light beam must both eventually hit a sphere and never hit a sphere, which makes your system contradictory and thus ill-defined.

You could make the question answerable by instead asking for the limit of the light beam as the number of steps of packing goes to infinity, in which case the light reflects back at 180°, since it does that in every step of the packing. Alternatively, you could ask what happens to the light beam if it is reflected off a shape which is the limit of the packing you described, in which case it will split in three, since the shape produced is a cube (since it will have no empty spaces). (Edit: no it doesn't; the answer to this question is again undefined via the argument in the first paragraph, since the matter it bounced off of had to belong to some sphere.)

Comment by NoSuchPlace on Open thread, Aug. 10 - Aug. 16, 2015 · 2015-08-14T13:19:37.571Z · LW · GW

Since I don't spend all my time inside avoiding every risk hoping for someone to find the cure to aging, I probably value an infinite life only a large but finite number of times more than a year of life. This means that I must discount in such a way that after a finite number of button presses Omega would need to grant me an infinite life span.

So I perform some Fermi calculations to obtain an upper bound on the number of button presses I need to obtain immortality, press the button that often, then leave.

Comment by NoSuchPlace on Prior probabilities and statistical significance · 2015-05-24T10:32:22.646Z · LW · GW

They are different concepts: either you use statistical significance or you do Bayesian updating (i.e. using priors):

If you are using a 5% threshold, roughly speaking this means that you will accept a hypothesis if the chance of getting equally strong data when your hypothesis is false is 5% or less.

If you are doing Bayesian updating you start with a probability for how likely a statement is (this is your prior) and update based on how likely your data would be if your statement was true or false.

Here is an xkcd which highlights the difference: https://xkcd.com/1132/
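To make the contrast concrete, here is a small Python sketch in the spirit of that xkcd (the detector, its error rate and the prior are invented for illustration):

```python
# A detector lies with probability 1/36 (it rolls two dice and lies on
# double sixes). It reports: "the sun has exploded".
p_lie = 1 / 36

# Significance testing: under the null hypothesis "the sun did not
# explode", a report this extreme has probability 1/36 < 0.05, so a 5%
# threshold rejects the null.
p_value = p_lie
rejects_null = p_value < 0.05

# Bayesian updating: start from a prior and weigh how likely the report
# is under each hypothesis.
prior_exploded = 1e-9            # assumed prior: the sun exploding is very unlikely
p_report_if_exploded = 35 / 36   # detector tells the truth
p_report_if_fine = 1 / 36        # detector lies

posterior = (prior_exploded * p_report_if_exploded) / (
    prior_exploded * p_report_if_exploded
    + (1 - prior_exploded) * p_report_if_fine
)

print(rejects_null)  # True: the significance test endorses the explosion
print(posterior)     # still tiny: the Bayesian barely moves
```

Same data, very different conclusions, because only the Bayesian calculation lets the prior do any work.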

Comment by NoSuchPlace on Consistent extrapolated beliefs about math? · 2014-09-04T21:12:17.269Z · LW · GW

In particular, I intuitively believe that "my beliefs about the integers are consistent, because the integers exist". That's an uncomfortable situation to be in, because we know that a consistent theory can't assert its own consistency.

That is true; however, you don't appear to be asserting the consistency of your beliefs as a whole, but the consistency of a particular subset of your beliefs which does not contain the assertion of its own consistency. This is not in conflict with Gödel's second incompleteness theorem, which implies that no sufficiently strong consistent theory can prove its own consistency. It does not forbid proofs of consistency by more powerful theories: for example, there are proofs of the consistency of Peano arithmetic.
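Stated formally (these are the standard textbook statements, not part of the original comment):

```latex
% Gödel's second incompleteness theorem:
\text{If } T \text{ is consistent, recursively axiomatizable, and extends PA, then } T \nvdash \mathrm{Con}(T).

% A strictly stronger theory can still prove the consistency of a weaker one:
\mathrm{ZFC} \vdash \mathrm{Con}(\mathrm{PA}).
```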

Comment by NoSuchPlace on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-07-27T17:52:07.811Z · LW · GW

Quirrell doesn't have a very large window in which to drink the blood.

According to this he should have plenty of time:

"Is it possible to Transfigure a living subject into a target that is static, such as a coin - no, excuse me, I'm terribly sorry, let's just say a steel ball."

Professor McGonagall shook her head. "Mr. Potter, even inanimate objects undergo small internal changes over time. There would be no visible changes to your body afterwards, and for the first minute, you would notice nothing wrong. But in an hour you would be sick, and in a day you would be dead."

I could see the drinker getting sick, though.

From the transfiguration rules:

"I will never Transfigure anything that looks like food or anything else that goes inside a human body."

This presumably means "don't Transfigure anything into food". However, it could also be interpreted to mean "don't Transfigure food into anything". I am somewhat disappointed in McGonagall for not catching that ambiguity.

Also Quirrell is not a recognized transfiguration authority:

"If I am not sure whether a Transfiguration is safe, I will not try it until I have asked Professor McGonagall or Professor Flitwick or Professor Snape or the Headmaster, who are the only recognised authorities on Transfiguration at Hogwarts. Asking another student is not acceptable, even if they say that they remember asking the same question."

"Even if the current Defence Professor at Hogwarts tells me that a Transfiguration is safe, and even if I see the Defence Professor do it and nothing bad seems to happen, I will not try it myself."

However, since Quirrell's past is unknown (as far as Hogwarts is concerned), he could be one of the best transfigurers in the world and still not be recognized as an authority. Also, I don't see Quirrell neglecting something as useful and versatile as Transfiguration, so I would expect him to know how dangerous eating formerly Transfigured food is.

Comment by NoSuchPlace on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-07-26T14:33:31.653Z · LW · GW

Also, won't Quirrell die of transfiguration sickness if he drinks the blood of transfigured Rarity?

No, the unicorn will, but by the time Quirrell drinks its blood it won't be transfigured any more, so he will be fine.

Comment by NoSuchPlace on Harry Potter and the Methods of Rationality discussion thread, July 2014, chapter 102 · 2014-07-26T13:52:12.674Z · LW · GW

These seem to be the relevant quotes:

"For some reason or other," said the amused voice of Professor Quirrell, "it seems that the scion of Malfoy is able to cast surprisingly strong magic for a first-year student. Due to the purity of his blood, of course. Certainly the good Lord Malfoy would not have openly flouted the underage magic laws by arranging for his son to receive a wand before his acceptance into Hogwarts."

and

Only there was a reason why they usually didn't bother giving wands to nine-year-olds. Age counted too, it wasn't just how long you'd held a wand. Granger's birthday had been only a few days into the year, when Harry had bought her that pouch. That meant she was twelve now, that she'd been twelve almost since the start of Hogwarts. And the truth was, Draco hadn't been practicing much outside of class, probably not nearly as much as Hermione Granger of Ravenclaw. Draco hadn't thought he needed any more practice to stay ahead...

-both hpmor ch.78

So from this it seems magic power increases with age, spells cast, and time since first getting your wand (though the third could simply be due to the second).

So the reason Harry can only just now cast second-year spells is that he has only recently become sufficiently powerful. His partial Transfiguration and Patronus 2.0 don't actually require a lot of spell power; they only require you to do clever things.

Comment by NoSuchPlace on Ethics in a Feedback Loop: A Parable · 2014-07-25T22:30:59.781Z · LW · GW

PeerGynt has already all but said so elsewhere

Comment by NoSuchPlace on Open thread, 16-22 June 2014 · 2014-06-20T17:12:46.136Z · LW · GW

Have a program use its own output as input, effectively letting you run programs for infinite amounts of time, which, depending on how time travel is resolved, may or may not give you a halting oracle.

Also you can now brute force most of mathematics:

One way to do this is using first-order logic, which is expressive enough to state most problems. First-order logic is semi-decidable, which means that there are algorithms which will eventually return a proof for any provable statement. Since your computer will take at most ten seconds to do this, you will have a proof after ten seconds, or know that the statement was not provable if your computer remains silent.
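As a toy sketch of such a semi-decision procedure (my own illustration; a real prover would enumerate formal proofs rather than integers): it halts with a witness exactly when the existential statement is true, and loops forever otherwise, which the ten-second time-travel computer would turn into a genuine decision procedure.

```python
from itertools import count

def semi_decide_exists(predicate):
    """Semi-decide 'there exists an n with predicate(n)': halts with a
    witness iff the statement is true, otherwise runs forever (i.e. the
    hypothetical computer stays silent for its ten seconds)."""
    for n in count():
        if predicate(n):
            return n  # a witness doubles as a proof of the statement

# True statement: some integer greater than 100 is a perfect square.
print(semi_decide_exists(lambda n: n > 100 and int(n ** 0.5) ** 2 == n))  # 121
```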

Comment by NoSuchPlace on What should a Bayesian do given probability of proving X vs. of disproving X? · 2014-06-07T22:18:03.984Z · LW · GW

Is it reasonable to assign P(X) = P(will_be_proven(X)) / (P(will_be_proven(X)) + P(will_be_disproven(X))) ?

No, I don't think so; consider the following example:

I flip a coin. If it comes up heads I take two green marbles, else I take one red and one green marble. Then I offer to let you see a random marble and I destroy the other one without showing you.

Then, suppose you wish to test whether my coin came up tails. If the marble is red, you have proven the coin came up tails, and since a green marble is consistent with both outcomes, the chance of tails ever being disproven is zero. So your expression evaluates to 1, but it should be 0.5.
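A quick Monte Carlo sketch of the marble example (my own code; the numbers just illustrate the gap between the proposed expression and the actual probability):

```python
import random

random.seed(0)
trials = 100_000
tails_total = 0
proven_tails = 0   # we saw a red marble, which only tails can produce
proven_heads = 0   # no possible observation proves heads

for _ in range(trials):
    tails = random.random() < 0.5
    tails_total += tails
    # heads -> two green marbles; tails -> one red and one green
    marbles = ["red", "green"] if tails else ["green", "green"]
    if random.choice(marbles) == "red":
        proven_tails += 1
    # a green marble is consistent with both outcomes, so it proves nothing

p_proven = proven_tails / trials      # about 0.25
p_disproven = proven_heads / trials   # exactly 0
expression = p_proven / (p_proven + p_disproven)
actual = tails_total / trials         # about 0.5

print(expression)  # 1.0, as the comment argues
print(actual)      # about 0.5
```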

Comment by NoSuchPlace on Links! · 2014-06-05T13:26:44.409Z · LW · GW

I'm not going to explain what it is because that would ruin the video.

Also, since explaining the video ruins it, here is a link to rot13.

Comment by NoSuchPlace on Does this seem to you like evidence for the existence of psychic abilities in humans? · 2014-05-30T19:03:27.405Z · LW · GW

I feel like I am forced to raise my credence level for remote viewing being real to somewhere between 50 and 60 percent.

A general note on this sort of situation without getting into the specifics of this case:

If something very unlikely, say P, happens, and you have something which would explain it, say A, you should increase your confidence in A, and as you receive stronger evidence you continue increasing your confidence. However, you should not keep increasing your confidence in A until it is almost 1:

Your test isn't between A and not-A but between P and not-P. You should simply move probability from not-P to P, which increases the probability of hypotheses in P, like A, but does not change the relative probabilities of hypotheses within P.

So the only way the quote could be correct is if you had started out believing that psi is as good an explanation as all others put together for the things that you have observed. This seems wrong to me, since even everyone involved flat-out lying seems much more probable than psi being real.

Also an Abstruse Goose which involves this sort of situation.
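Here is a small numerical sketch of that argument (the hypothesis list and all numbers are invented): evidence that favours P over not-P scales every hypothesis inside P by the same factor, so their relative odds are untouched.

```python
# Toy hypotheses that all explain the surprising observation P:
priors = {
    "psi is real": 1e-6,
    "someone is lying": 1e-3,
    "unnoticed methodological flaw": 1e-2,
}
prior_P = sum(priors.values())

# Likelihood of the observation under P vs. under not-P.
p_obs_given_P, p_obs_given_not_P = 0.9, 0.001

posterior_P = (p_obs_given_P * prior_P) / (
    p_obs_given_P * prior_P + p_obs_given_not_P * (1 - prior_P)
)

# Every hypothesis in P is scaled by the same factor posterior_P / prior_P.
posteriors = {h: p * posterior_P / prior_P for h, p in priors.items()}

ratio_before = priors["someone is lying"] / priors["psi is real"]
ratio_after = posteriors["someone is lying"] / posteriors["psi is real"]
print(ratio_before, ratio_after)  # the lying:psi odds stay about 1000:1
```

P as a whole becomes much more probable, but psi gains no ground on the mundane explanations.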

Comment by NoSuchPlace on Request for concrete AI takeover mechanisms · 2014-04-28T02:59:24.570Z · LW · GW

Are you saying this is something which MIRI considers actively bad, or are you just pointing out that this is something which is not helpful for MIRI?

While I don't see the benefit of this exercise, I also don't see any harm, since for any idea which we come up with here, someone else would very likely have come up with it before if it were actionable for humans.

Comment by NoSuchPlace on Request for concrete AI takeover mechanisms · 2014-04-28T01:39:24.396Z · LW · GW

Some ideas which come to mind:

1. An AI could be very capable of predicting the stock market. It could then convince/trick/coerce a person into trading for it, making massive amounts of money; then the AI could have its proxy spend the new money to gain access to whatever the AI wants which is currently available on the market.

2. The AI could make some program which does something incredibly cool which everyone will want to have. The program should also have the ability to communicate meaningfully with its user (this would probably count as the incredibly cool thing). This could (presumably) be achieved by the AI making copies of itself. After the program has been distributed, and assuming the AI has good social skills, it would have a lot of power via mass manipulation.

Comment by NoSuchPlace on Questions to ask theist philosophers? I will soon be speaking with several · 2014-04-26T01:57:45.825Z · LW · GW

Maybe "God" is well defined in the context of analytic philosophy, but if not you could consider starting by asking what they mean by "God". You could then ask a variation of 1 or 2 (they seem identical?) and how their response would change with other common definitions of "God".

This would hopefully prevent wasting time due to different use of words or misunderstanding their position.

In a similar vein you could ask what would be sufficient evidence for them to believe something. (Maybe this is already specified by the analytic philosopher part?)

Are you going to talk to them one on one or in a group? If it is a group, it seems to me a likely failure mode is them discussing the points that they disagree on, which are likely to presuppose the existence of God.

Comment by NoSuchPlace on Open Thread April 16 - April 22, 2014 · 2014-04-21T13:20:05.570Z · LW · GW

I hate to point this out, but it is already easy enough to ridicule the proper spelling; it's spelled Asperger.

Edit: Sorry, I tried to delete this comment, but that doesn't seem to be possible for some reason.

Comment by NoSuchPlace on How long will Alcor be around? · 2014-04-17T22:59:36.364Z · LW · GW

LW believes the average probability that cryonics will be successful for someone frozen today is 22.8%

This is a nitpick, but using the average (I'm assuming that means the arithmetic mean) is misleading, since as long as at least a non-negligible proportion of people are answering in the double digits, every answer below 1% is treated as essentially the same, thus skewing towards higher probabilities of cryonics working.
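A toy illustration of that skew (the response distribution below is invented, not the actual survey data):

```python
# 50 respondents say 0.01%, 30 say 1%, 20 say 50% (as fractions):
responses = [0.0001] * 50 + [0.01] * 30 + [0.5] * 20

mean = sum(responses) / len(responses)
median = sorted(responses)[len(responses) // 2]

print(mean)    # about 0.103: dominated by the double-digit answers
print(median)  # 0.01: the typical respondent, an order of magnitude lower
```

The mean barely distinguishes a 0.01% answer from a 0.9% one, while a handful of optimists drag it up by orders of magnitude; the median would be the more informative summary here.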

Comment by NoSuchPlace on [deleted post] 2014-04-03T22:35:49.880Z

Appropriate quote:

I have sometimes wondered whether a brain like von Neumann's does not indicate a species superior to that of man.

-Hans Bethe (Scroll to quotes about von Neumann for the source)

Comment by NoSuchPlace on Items to Have In Case of Emergency...Maybe. · 2014-04-03T02:17:26.805Z · LW · GW

A Google search gave me this page; the argument appears to be that fish antibiotics are the same as human ones, but cheaper and you don't need a medical license. Obviously, don't assume this is true unless you have better evidence.

Edit: Ninja'd

Comment by NoSuchPlace on [Link] Zack Weinersmith's One-Liner Generator · 2014-03-27T16:03:57.673Z · LW · GW

Thank you. Fixed

Comment by NoSuchPlace on [Link] Zack Weinersmith's One-Liner Generator · 2014-03-25T20:02:51.641Z · LW · GW

I completely missed that the first time, thank you. Is there a way to only retract part of a post?

Comment by NoSuchPlace on [Link] Zack Weinersmith's One-Liner Generator · 2014-03-25T15:55:52.023Z · LW · GW

Perhaps you should link to the article directly. At first I was trying to figure out the connection between densely packed Hitlers and one-liner generators. (Edit: My mistake, the link was there, I just didn't see it.)

Also, unless you want to elaborate, maybe this should go into the open thread?

Comment by NoSuchPlace on Discovering Your Secretly Secret Sensory Experiences · 2014-03-24T20:45:57.005Z · LW · GW

Thank you.

People with number-form synesthesia sometimes have the first twelve digits in the form of a clock face; I was wondering if something similar was going on here, with male bodies usually being relatively angular in comparison to female bodies.

Comment by NoSuchPlace on Discovering Your Secretly Secret Sensory Experiences · 2014-03-22T16:53:12.881Z · LW · GW

Is it possible that this has something to do with how rounded the shapes are? I noticed that the ratio of cusps to rounded edges (a circle counting for two) is 1:0, 2:0, 3:1, 2:0 for the male digits and 0:2, 1:1, 0:3, 0:4, 0:3 for the female digits. Though obviously this can change with typeface it often remains more or less true.

Comment by NoSuchPlace on Irrationality Game III · 2014-03-18T00:39:35.305Z · LW · GW

Besides the issue of "subjective experience" that has already been brought up, there's also the question of what "thing" and "exists" mean.

I believe some form of the MUH is correct, so when I say "exist" I mean the same thing as in mathematics (in the sense of quantifying over various things). So by a "thing" I mean anything for which it is (at least in principle) possible to write down a mathematically precise definition.

Presumably abstract ideas and virtual particles fall under this category, though in neither case am I sure, because I don't know what you mean by "abstract idea" and I don't know enough physics. I'm not sure whether it is possible to give a definition for subjective experience, so I don't know whether subjective experiences have subjective experiences.

Also, it's "an aforementioned". That's especially important when speaking.

Substituted an a for an an.

Comment by NoSuchPlace on Irrationality Game III · 2014-03-13T13:51:54.775Z · LW · GW

By "any subcomponent," do you mean that the powerset of the universe is composed of conscious entities, even when light speed and expansion preclude causal interaction within the conscious entity?

If you replace consciousness with subjective experience, I believe your statement is correct. Also, once you have one infinity, you can take power sets again and again.

I'm really confused by what that does to anthropic reasoning

As far as I understand, it breaks anthropic reasoning because now your event space is too big to define a probability measure on. For the time being I have concluded that anthropic reasoning doesn't work because of a very similar argument, though I will revise my position once I have learned the relevant math.

Comment by NoSuchPlace on Irrationality Game III · 2014-03-13T00:40:04.292Z · LW · GW

Defining subjective experience is hard for the same reason that defining red is hard: they are direct experiences. However, in this case I can't get around this by pointing at examples. So the only thing I can do is offer an alternative phrasing which suffers from the same problem:

If you accept that our experiences are what an algorithm feels like from the inside, then I am saying that everything feels like something from the inside.

Comment by NoSuchPlace on Irrationality Game III · 2014-03-12T23:56:58.446Z · LW · GW

I would still consider this to be a single thing, the same way that "P and Q" is still a statement.

Phrasing this a different way: when I say "exist" I mean "either exist in the sense of quantifying over relations or over elements" (definition subject to revision as I learn more non-first-order logic).

Comment by NoSuchPlace on Irrationality Game III · 2014-03-12T23:27:04.867Z · LW · GW

Irrationality game: Every thing which exists has subjective experience (80%). This includes things such as animals, plants, rocks, ideas, mathematics, the universe and any subcomponent of an aforementioned system.

Comment by NoSuchPlace on Irrationality Game III · 2014-03-12T22:24:53.920Z · LW · GW

Comment by NoSuchPlace on Irrationality Game III · 2014-03-12T22:15:20.514Z · LW · GW

Upvoted. I believe that the universe is ultimately a complicated piece of mathematics, so when I say "exist" in a non-mathematical context, I mean the same thing as when I say it in a mathematical context.

Comment by NoSuchPlace on Open Thread: March 4 - 10 · 2014-03-10T22:50:55.381Z · LW · GW

German: Möbel (2) Stuhl (1) Liege (2)

Comment by NoSuchPlace on Open Thread: March 4 - 10 · 2014-03-05T21:02:11.061Z · LW · GW

I could give a serious response to this about "AI" being stand in for "the person playing the AI" however other responses I could give:

• I am firmly of the opinion that the distinction between artificial and natural is artificial.

Comment by NoSuchPlace on Open Thread: March 4 - 10 · 2014-03-05T19:35:57.947Z · LW · GW

let nothing get in or out except for some very low bandwidth channel (text, video)

You may want to read this. Basically it is the scenario you describe, except for a smart human taking the place of an AI, and it turns out to be insufficient to contain the AI.

Comment by NoSuchPlace on Open Thread: March 4 - 10 · 2014-03-05T01:20:25.756Z · LW · GW

Einstein was working at the patent office in 1905 while also working on his PhD. He published his first annus mirabilis paper in March, was awarded his PhD in April, and published the remaining papers in May, June and September. He didn't take a position as a lecturer until 1908. This means Einstein was outside of physics while publishing his papers on Brownian motion, special relativity and mass-energy equivalence. Or did I miss something?

Comment by NoSuchPlace on Open Thread: March 4 - 10 · 2014-03-04T22:05:31.606Z · LW · GW

The obvious example of a great discovery (or several) in physics by someone outside of a physics department is Einstein.

Comment by NoSuchPlace on Open Thread for February 18-24 2014 · 2014-02-21T00:04:11.261Z · LW · GW

I think the idea is that some things are very specific configurations, while others aren't. For example, a star isn't a particularly unlikely configuration: take a large cloud of hydrogen and you'll get a star. However, a human is a very narrow target in design space: taking a pile of carbon, nitrogen, oxygen and hydrogen is very unlikely to get you a human.

Hence, to explain stars we don't need to posit the existence of a process with a lot of optimization power. However, since humans are a very unlikely configuration, this suggests that the reason they exist is something with a lot of optimization power (that thing being evolution).

Comment by NoSuchPlace on Open Thread for February 18-24 2014 · 2014-02-20T22:35:26.397Z · LW · GW

Can one detect intelligence in retrospect?

Let me explain. Let's take the definition of an intelligent agent as an optimizer over possible futures, steering the world toward the preferred one.

Yes, at least some of the time. Evolution fits your definition and we know about that. So if you want examples of how to deduce the existence of an intelligence without knowing its goals ahead of time, you could look at the history of the discovery of evolution.

Also, Eliezer has written an essay which answers your question; you may want to look at that.

Comment by NoSuchPlace on Testing my cognition · 2014-02-20T01:58:17.863Z · LW · GW

Taking it in fruit juice also solves the "how to make a placebo" problem.

Comment by NoSuchPlace on Testing my cognition · 2014-02-19T23:13:12.318Z · LW · GW

It may be worth looking at gwern's essays on nootropics first, since he has done similar self-experiments.

One thing in particular you could consider is seeing if you can find something which looks/tastes similar enough to creatine that you can use it as a placebo to blind yourself. For example, you could get a friend to put the creatine and the placebo in different containers, but not tell you which is which. Then take substance 1 for two weeks, then take substance 2 for two weeks. Afterwards, get your friend to tell you which container held the creatine (or better yet, have them write it down somewhere and don't look at it until the end).
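The container-randomization step might look like this sketch (the labels and the key file are my own invention; the friend runs it once and hides the output):

```python
import random

# The friend runs this once. The experimenter never sees which substance
# went into which container until the two-week blocks are finished.
substances = ["creatine", "placebo"]
random.shuffle(substances)
assignment = {"container 1": substances[0], "container 2": substances[1]}

# Write the key down somewhere the experimenter won't look.
with open("blinding_key.txt", "w") as key_file:
    key_file.write(str(assignment))
```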

Comment by NoSuchPlace on Open Thread for February 18-24 2014 · 2014-02-19T16:11:43.761Z · LW · GW

The latest SMBC is on the singularity, fun theory and simulations.

Comment by NoSuchPlace on Embracing the "sadistic" conclusion · 2014-02-13T21:37:53.689Z · LW · GW

I'm not saying that our population intuitions are simple, I'm saying that we can't rule out the possibility. For example, a priori I wouldn't have expected physics to turn out to be simple; however (at least to the level that I took it), physics seems to be remarkably simple, particularly in comparison to the universe it describes. This leads me to conclude that there is some mechanism by which things turn out to be simpler than I would expect.

To give an example, my best guess (besides "something I haven't thought of") for this mechanism is that mathematical expressions are fairly evenly distributed over the patterns which occur in reality, and that one should hence expect there to be a fairly simple piece of mathematics which comes very close to describing physics. A similar thing might happen with our population intuitions.

There's no particular reason why we should expect highly abstract aspects of our random-walk psychological presets to be elegant or simply defined.

Wouldn't highly abstract aspects of our psychology be more recent, and as such simpler?

As such, it's practically guaranteed that they won't be.

This depends on your priors. If you assign comparable probabilities to simple and complex hypotheses, this follows. If you assign higher probabilities to simple hypotheses than to complex ones, it doesn't.

Comment by NoSuchPlace on Embracing the "sadistic" conclusion · 2014-02-13T13:30:40.801Z · LW · GW

our population intuitions are complex...

Are they? They certainly look complex, but that could be because we haven't found the proper way to describe them. For example the Mandelbrot set looks complex, but it can be defined in a single line.

Also, "complex" leads to ambiguity; perhaps it needs to be defined. I used it in the sense that something is complex if it cannot be quickly defined by a smart and reasonably knowledgeable (in the relevant domain) human, since this seems to be the relevant sense here.
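To make the Mandelbrot point concrete, the whole membership test fits in a few lines (the escape radius of 2 and a finite iteration cap are the standard approximation):

```python
def in_mandelbrot(c, iterations=100):
    """c is in the Mandelbrot set iff iterating z -> z*z + c from 0
    never escapes; |z| > 2 guarantees escape."""
    z = 0
    for _ in range(iterations):
        z = z * z + c
        if abs(z) > 2:
            return False  # provably outside the set
    return True  # did not escape within the cap: (approximately) inside

print(in_mandelbrot(0))    # True: the orbit stays at 0
print(in_mandelbrot(1))    # False: 0, 1, 2, 5, ... diverges
print(in_mandelbrot(-1))   # True: the orbit cycles 0, -1, 0, -1, ...
```

The definition is a one-liner; the apparent complexity is all in the boundary it generates.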

Comment by NoSuchPlace on Open Thread for February 11 - 17 · 2014-02-12T16:58:46.176Z · LW · GW

It's a repost from last week.

Though rereading it, does anyone know whether Zach knows about MIRI and/or LessWrong? I expect "unfriendly human-created intelligence" to parse as "AI with bad manners" to people unfamiliar with MIRI's work, which is probably not what the scientist is worried about.

Comment by NoSuchPlace on Open Thread for February 3 - 10 · 2014-02-07T13:57:17.624Z · LW · GW

Today's SMBC is about an AI with a utility function which sounds good but isn't.

Comment by NoSuchPlace on February 2014 Media Thread · 2014-02-02T21:04:06.401Z · LW · GW

Singularity by the Lisps - it's a song about the singularity.