Comments

Comment by IL on chinchilla's wild implications · 2022-07-31T18:26:15.708Z · LW · GW

When you exhaust all the language data from text, you can start extracting language from audio and video.

As far as I know, the largest public repository of audio and video is YouTube. We can do a rough back-of-the-envelope computation for how much data is in there (sketched in code after the list):

  • According to some 2019 article I found, about 500 hours of video are uploaded to YouTube every minute. If we assume this was the average rate for the last 15 years, that gets us roughly 200 billion minutes of video.
  • An average conversation has 150 words per minute, according to a Google search. That gets us 30T words, or 30T tokens if we assume 1 token per word (is this right?)
  • Let's say 1% of that is actually useful, so that gets us 300B tokens, which is... a lot less than I expected.
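For concreteness, here is the same arithmetic as a minimal Python sketch. It assumes the commonly cited ~500 hours uploaded per minute; every other number is just the guess from the bullets above.

```python
# Back-of-the-envelope estimate of language tokens extractable from YouTube audio.
# Assumes ~500 hours of video uploaded per minute (commonly cited 2019 figure),
# sustained for 15 years; all other numbers are the guesses from the bullets above.

UPLOAD_HOURS_PER_MINUTE = 500      # hours of video uploaded per real-time minute
YEARS = 15
REAL_MINUTES = YEARS * 365 * 24 * 60

video_minutes = UPLOAD_HOURS_PER_MINUTE * 60 * REAL_MINUTES   # ~2.4e11 (~200B minutes)

WORDS_PER_MINUTE = 150             # average conversational speech rate
TOKENS_PER_WORD = 1                # crude assumption; BPE tokenizers are closer to ~1.3
USEFUL_FRACTION = 0.01             # guess: only 1% of the audio is useful language data

raw_tokens = video_minutes * WORDS_PER_MINUTE * TOKENS_PER_WORD   # ~3.5e13 (~30T)
useful_tokens = raw_tokens * USEFUL_FRACTION                      # ~3.5e11 (~300B)

print(f"video minutes: {video_minutes:.1e}")
print(f"raw tokens:    {raw_tokens:.1e}")
print(f"useful tokens: {useful_tokens:.1e}")
```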

So it seems like video doesn't save us if we just use it for the language data. We could do self-supervised learning on the video data itself, but for that we'd need to know the scaling laws for video (has anyone done that?).

Comment by IL on [Linkpost] Solving Quantitative Reasoning Problems with Language Models · 2022-07-01T07:25:57.813Z · LW · GW

The previous SOTA for MATH (https://arxiv.org/pdf/2009.03300.pdf) is a fine-tuned GPT-2 (1.5B params), whereas the previous SOTA for GSM8K (https://arxiv.org/pdf/2203.11171.pdf) is PaLM (540B params), using a "majority voting" method similar to Minerva's (query each question ~40 times and take the most common answer).
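As a rough sketch of what that majority-voting procedure looks like, assuming a hypothetical sample_answer(question) callable that samples one completion and extracts its final answer:

```python
from collections import Counter

def majority_vote(question, sample_answer, n_samples=40):
    """Sample the model n_samples times and return the most common final answer.

    `sample_answer` is a hypothetical callable: it draws one stochastic
    completion for `question` and extracts the final answer from it.
    """
    answers = [sample_answer(question) for _ in range(n_samples)]
    most_common_answer, _count = Counter(answers).most_common(1)[0]
    return most_common_answer
```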

Comment by IL on The EMH Aten't Dead · 2020-05-16T13:35:16.004Z · LW · GW

Here's a thought experiment: Suppose that a market is perfectly efficient, except that every 50 years or so there's a crash, which sufficiently smart people can predict a month in advance. Would you say that this market is efficient? Technically it isn't, because smart people have a systematic advantage over the market. But practically, no trader systematically beats the market, because no trader lives long enough!

I suppose you could create a long-lived institution, a "black swan fund", that very rarely makes bets on predictable crashes and over a few centuries could prove it has higher returns. But I guess not enough people care about returns over these timescales.

Comment by IL on March Coronavirus Open Thread · 2020-03-09T20:53:06.936Z · LW · GW

What's the best way to convince skeptics of the severity of COVID? I keep seeing people saying it's just a slightly worse flu, or that car accidents kill a lot more people, and so on. I want some short text or image that illustrates just how serious this is.

I found this heartbreaking testimony from an Italian ICU doctor: https://twitter.com/silviast9/status/1236933818654896129

But I guess skeptics will want a more authoritative source.

Comment by IL on The First World Takeover · 2008-11-19T17:07:05.000Z · LW · GW

"Or the first replicator to catch on, if there were failed alternatives lost to history - but this seems unlikely, given the Fermi Paradox; a replicator should be more improbable than that, or the stars would teem with life already."

So do you think that the vast majority of the Great Filter is concentrated in the creation of the first replicator? What's the justification for that?

Comment by IL on Dark Side Epistemology · 2008-10-18T16:49:42.000Z · LW · GW

-You can't prove I'm wrong!

-Well, I'm an optimist.

-Millions of people believe it, how can they all be wrong?

-You're relying too much on cold rationality.

-How can you possibly reduce all the beauty in the world to a bunch of equations?

Comment by IL on On Doing the Impossible · 2008-10-06T23:10:48.000Z · LW · GW

Eliezer, I remember an earlier post of yours where you said something like: "If I would never do impossible things, how could I ever become stronger?" That was a very inspirational message for me, much more so than any similar sayings I'd heard, and this post is full of such insights.

Anyway, on the subject of human augmentation, well, what about it? If you are talking about a timescale of decades, then intelligence augmentation does seem like a worthy avenue of investment (it doesn't have to be full-scale neural rewiring, it could just be smarter nootropics).

Comment by IL on Beyond the Reach of God · 2008-10-04T21:48:45.000Z · LW · GW

...Can someone explain why?

Many people believe in an afterlife... why sign up for cryonics when you're going to go to Heaven when you die?

That's probably not the explanation, since there are many millions of atheists who have heard about cryonics and/or extinction risks. I figure the actual explanation is a combination of conformity, the bystander effect, the tendency to focus on short-term problems, and the Silliness Factor.

Comment by IL on You Provably Can't Trust Yourself · 2008-08-19T21:42:41.000Z · LW · GW

Eliezer, I have an objection to your metaethics, and I don't think it's because I'm mixing levels:

If I understood your metaethics correctly, you claim that human morality consists of two parts: a list of things that we value (like love, friendship, fairness, etc.), and what we can call "intuitions" that govern how our terminal values change when we face moral arguments. So we have a kind of strange loop (in the Hofstadterian sense): our values judge whether a moral argument is valid or not, and the valid moral arguments change our terminal values. I think I accept this. It explains quite nicely a lot of questions, like where moral progress comes from.

What I am skeptical about is the claim that if a person hears enough moral arguments, their values will always converge to a single set of values, so that you could say their morality approximates some ideal morality that can be found if you look deep enough into their brain. I think it's plausible that the initial set of moral arguments the person hears will change their list of values considerably, so that their morality will diverge rather than converge, and there won't be any "ideal morality" that they are approximating.

Note that I am talking about a single human who hears different sets of moral arguments, not about the convergence of moralities across all humans (which is a different matter altogether).

Also note that this is a purely empirical objection; I am asking for empirical evidence that supports your metaethics.

Comment by IL on Sorting Pebbles Into Correct Heaps · 2008-08-10T15:38:28.000Z · LW · GW
why isn't the moral of this fable that pursuing subjective intuitions about correctness a wild goose chase?

Because those subjective intuitions are all we've got. Sure, in an absolute sense, human intuitions about correctness are just as arbitrary as the pebblesorters' intuitions (and vastly more complex), but we don't judge intuitions in an absolute way, we judge them with our own intuitions. You can't unwind past your own intuitions. That was the point of Eliezer's series of posts.

Comment by IL on Humans in Funny Suits · 2008-07-31T08:59:50.000Z · LW · GW
The gap between autistic humans and neurotypical humans may be bigger than the gap between male and female humans. I would list autism as an exception to the psychological unity of humankind.

I remember reading "The Curious Incident of the Dog in the Night-Time" and thinking: "This guy is more alien than most aliens I've seen in sci-fi".

Comment by IL on The Meaning of Right · 2008-07-29T11:23:44.000Z · LW · GW

P.S.: My great "Aha!" moment from reading this post is the realisation that morality is not just a utility function that maps states of the world to real numbers, but also a set of intuitions for changing that utility function.

Comment by IL on The Meaning of Right · 2008-07-29T10:43:16.000Z · LW · GW

Let me see if I get this straight:

Our morality is composed of a big computation that includes a list of the things that we value (love, friendship, happiness, ...) and a list of valid moral arguments (contagion backward in time, symmetry, ...). If so, then how do we discover those lists? I guess the only way is to reflect on our own minds, but if we do that, how do we know whether a particular value comes from our big computation or is just part of our regular biases? And if our biases are inextricably tangled with The Big Computation, then what hope can we possibly have?

Anyway, I think it would be useful for moral progress to list all of the valid moral arguments. Contagion backward in time and symmetry seem to be good ones. Any other suggestions?

Comment by IL on Setting Up Metaethics · 2008-07-28T18:15:18.000Z · LW · GW

I second Doug.

Comment by IL on The Gift We Give To Tomorrow · 2008-07-17T14:16:47.000Z · LW · GW

The AIs will not care about the things you care about. They'll have no reason to.
The "AIs" will be created by us. If we're smart enough, we will program the AIs to value the things we value (not in the same way that evolution programmed us).

The important thing is that we humans value love, and therefore we want love to perpetuate throughout the universe.

Comment by IL on Is Morality Preference? · 2008-07-05T12:17:49.000Z · LW · GW

I guess this will lead to your concept of volition, won't it?

Anyway, is Obert really arguing that morality is entirely outside the mind? Couldn't the "fact" of morality that he is trying to discover be derived from his (or humanity's, or whatever) brain design? And if you tweak Subhan's definition of "want" enough, couldn't they actually reach agreement?

Comment by IL on Created Already In Motion · 2008-07-01T15:06:55.000Z · LW · GW
Also, I take it this means you don't believe in the whole "if a program implements consciousness, then it must be conscious while sitting passively on the hard disk" thing. I remember this came up before in the quantum series, and it seemed absurd to me, sort of for the reasons you say.

I used that as an argument against timeless physics: if you could have consciousness in a timeless universe, then that means you could simulate a conscious being without actually running the simulation; you could just put the data on the hard drive. I'm still waiting for an answer on that one!

Comment by IL on The Moral Void · 2008-06-30T13:22:57.000Z · LW · GW

Vladimir, why not? From reading your comment, it seems like the only reason you don't hurt other people is that you would get hurt by it, so if you took the pill, you would be able to hurt other people. Have I got it wrong? Is this really the only reason you don't hurt people?

Comment by IL on The Moral Void · 2008-06-30T11:38:03.000Z · LW · GW

Vladimir, if there were a pill that would make the function of the mirror neurons go away (in other words, a pill that would let you hurt people without feeling remorse or anguish), would you take it?

Comment by IL on No Universally Compelling Arguments · 2008-06-26T15:04:39.000Z · LW · GW

Roko, what exactly do you mean by "optimal"? "Optimal" means "good", which is another word for "ethical", so your definition of ethics doesn't actually tell us anything new! An AI can view the supergoal of "creating more paperclips" as the optimal/correct/successful/good thing to do. The value of the AI's supergoal(s) doesn't have anything to do with its intelligence.

Comment by IL on Timeless Identity · 2008-06-03T11:32:12.000Z · LW · GW

I still don't get the point of timeless physics. Timeless and timeful physics seem to me like two different ways of looking at the same thing, like classical configuration space vs. relational configuration space. Sure, it may make more sense to formulate the laws of physics without time, and it may make the equations much simpler, but how exactly does it change your expected observations? In what ways does a timeless universe differ from a timeful universe?

Also, I don't think it's necessary to study quantum mechanics in order to understand personal identity. I've reached the same conclusions about identity without knowing anything about QM; I feel they're just simple deductions from materialism.

Comment by IL on Timeless Causality · 2008-05-29T12:23:43.000Z · LW · GW

Wait a second, this doesn't make sense. If the universe is timeless, then you don't have to actually simulate the universe on a computer. You can just create a detailed model of the universe, put in the necessary causality structure, stick it in RAM, and voila! You have conscious beings living out their lives in a universe. You don't even have to put it in RAM; you can just write out the symbols on a piece of paper! Or can this impeccable line of reasoning be invalidated by experimental evidence?

Comment by IL on Can You Prove Two Particles Are Identical? · 2008-04-14T09:38:22.000Z · LW · GW

But the experiment doesn't prove that the two photons are really identical; it just proves that the photons are identical as far as the configurations are concerned. The photons could still have tiny tags with a number on them, but for some reason the configurations don't care about the tags.