Comment by gallabytes on How rapidly are GPUs improving in price performance? · 2019-03-25T07:47:56.462Z · score: 1 (1 votes) · LW · GW

Of course, if you don’t like how an exponential curve fits the data, you can always change models—in this case, probably to a curve with 1 more free parameter (indicating a degree of slowdown of the exponential growth) or 2 more free parameters (to have 2 different exponentials stitched together at a specific point in time).

Oh that's actually a pretty good idea. Might redo some analysis we built on top of this model using that.
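To make that concrete, here's a minimal sketch of what those alternative models could look like, fit with scipy on made-up numbers; the data, function names, and starting guesses are purely illustrative and not the actual analysis.

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up data standing in for (year, FLOPS-per-dollar) points.
years = np.array([2007, 2009, 2011, 2013, 2015, 2017, 2018], dtype=float)
perf = np.array([2e8, 6e8, 2e9, 5e9, 1e10, 2.5e10, 3e10])

t = years - years.min()
logp = np.log(perf)

def single_exp(t, a, b):
    # Plain exponential: linear in log space.
    return a + b * t

def slowing_exp(t, a, b, c):
    # One extra parameter: a quadratic term in log space models a
    # gradual slowdown of the growth rate (c < 0).
    return a + b * t + c * t ** 2

def stitched_exp(t, a, b1, b2, t0):
    # Two extra parameters: different growth rates before and after a
    # breakpoint t0, continuous at the stitch point.
    return a + np.where(t < t0, b1 * (t - t0), b2 * (t - t0))

for model, p0 in [(single_exp, (19.0, 0.5)),
                  (slowing_exp, (19.0, 0.5, 0.0)),
                  (stitched_exp, (22.0, 0.6, 0.3, 6.0))]:
    params, _ = curve_fit(model, t, logp, p0=p0, maxfev=10000)
    sse = float(np.sum((logp - model(t, *params)) ** 2))
    # Extra parameters always reduce SSE, so a fair comparison should
    # penalize them (AIC/BIC) or hold out the most recent years.
    print(model.__name__, np.round(params, 3), "SSE:", round(sse, 4))
```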

Comment by gallabytes on Blackmailers are privateers in the war on hypocrisy · 2019-03-15T08:30:56.296Z · score: 1 (1 votes) · LW · GW

correct. edited to make this more obvious

Comment by gallabytes on Blackmailers are privateers in the war on hypocrisy · 2019-03-14T11:13:16.085Z · score: 15 (9 votes) · LW · GW

This argument would make much more sense in a just world. Information that should damage someone is very different from information that will damage someone. With blackmail, the information is selected to maximize damage to the target, and I expect those tails to mostly come apart here. I don't see too many cases of blackmail replacing MeToo. When was the last time the National Enquirer was a valuable whistleblower?

EDIT: fixed some wording

Comment by gallabytes on How rapidly are GPUs improving in price performance? · 2018-12-14T07:02:22.837Z · score: 6 (2 votes) · LW · GW
When trying to fit an exponential curve, don't weight all the points equally

We didn't. We fit a line in log space, but weighted the points by sqrt(y). We did that because the data doesn't actually appear linear in log space.

This is what it looks like if we don't weight them. If you want to bite the bullet of this being a better fit, we can bet about it.
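For reference, here's a minimal sketch of what a sqrt(y)-weighted line fit in log space can look like, again on made-up numbers; this only illustrates the weighting and is not the code behind the plot above.

```python
import numpy as np

# Made-up data standing in for (year, FLOPS-per-dollar) points.
years = np.array([2007, 2009, 2011, 2013, 2015, 2017, 2018], dtype=float)
perf = np.array([2e8, 6e8, 2e9, 5e9, 1e10, 2.5e10, 3e10])

x = years - years.min()
logy = np.log10(perf)

# Unweighted least-squares line in log space.
slope_u, intercept_u = np.polyfit(x, logy, 1)

# Weighted fit: np.polyfit's `w` multiplies each residual, so passing
# sqrt(y) up-weights the higher-performance (more recent) points.
slope_w, intercept_w = np.polyfit(x, logy, 1, w=np.sqrt(perf))

for name, slope in [("unweighted", slope_u), ("sqrt(y)-weighted", slope_w)]:
    print(name, "doubling time ~", round(np.log10(2) / slope, 2), "years")
```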

How rapidly are GPUs improving in price performance?

2018-11-25T19:54:10.830Z · score: 32 (9 votes)
Comment by gallabytes on Act of Charity · 2018-11-18T08:04:37.983Z · score: 3 (2 votes) · LW · GW
I'd optimize more for not making enemies or alienating people than for making people realize how bad the situation is or joining your cause.

Why isn't this a fully general argument for never rocking the boat?

Comment by gallabytes on Act of Charity · 2018-11-18T07:46:33.130Z · score: 9 (5 votes) · LW · GW
Based on my models (such as this one), the chance of AGI "by default" in the next 50 years is less than 15%, since the current rate of progress is not higher than the average rate since 1945, and if anything is lower (the insights model linked has a bias towards listing recent insights).

Both this comment and my other comment considerably understate how skeptical we actually are about AGI. After talking to Jessica about it offline to clarify our real beliefs, rather than just playing games with plausible deniability, my actual probability is between 0.5% and 1% in the next 50 years. Jessica can confirm that hers is pretty similar, but probably weighted towards 1%.

Comment by gallabytes on Act of Charity · 2018-11-18T01:44:33.973Z · score: 11 (3 votes) · LW · GW
I think I'm more skeptical than you are that it's possible to do much better (i.e., build functional information-processing institutions) before the world changes a lot for other reasons (e.g., superintelligent AIs are invented)

Where do you think the superintelligent AIs will come from? AFAICT it doesn't make sense to put more than 20% on AGI before massive international institutional collapse, even being fairly charitable to both AGI projects and prospective longevity of current institutions.

Comment by gallabytes on Where does ADT Go Wrong? · 2017-12-02T10:21:03.000Z · score: 0 (0 votes) · LW · GW

When considering an embedder, in a universe, in response to which SADT picks a policy, I would be tempted to apply a coherence condition relating the three (all approximately, of course).

I'm not sure if this would work though. This is definitely a necessary condition for reasonable counterfactuals, but not obviously sufficient.

Comment by gallabytes on I Want To Live In A Baugruppe · 2017-03-17T04:23:47.120Z · score: 5 (5 votes) · LW · GW

I'm fairly interested but don't really want to be around children.

Asymptotic Decision Theory

2016-10-15T02:42:44.000Z · score: 5 (5 votes)
Comment by gallabytes on A new proposal for logical counterfactuals · 2016-07-15T21:07:54.000Z · score: 0 (0 votes) · LW · GW

By censoring I mean a specific technique for forcing the consistency of a possibly inconsistent set of axioms.

Suppose you have a set of deduction rules D over a language L. You can construct a function F that takes a set of sentences S and outputs all the sentences that can be proved in one step using D and the sentences in S. You can also construct a censored version of F that forces its output to stay consistent.
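A minimal Python sketch of the shape of this construction. The specific censoring rule used here (never derive a sentence whose negation is already in the set) is an assumption made for illustration, not necessarily the definition from the original proposal.

```python
from typing import Callable, FrozenSet, Set

Sentence = str  # stand-in for sentences of the language L
Rule = Callable[[FrozenSet[Sentence]], Set[Sentence]]

def negate(s: Sentence) -> Sentence:
    return s[1:] if s.startswith("~") else "~" + s

def one_step(rules: Set[Rule], sentences: FrozenSet[Sentence]) -> FrozenSet[Sentence]:
    """F: everything provable in one step from `sentences` using `rules`."""
    derived: Set[Sentence] = set(sentences)
    for rule in rules:
        derived |= rule(sentences)
    return frozenset(derived)

def censored_one_step(rules: Set[Rule], sentences: FrozenSet[Sentence]) -> FrozenSet[Sentence]:
    """F': like F, but newly derived sentences whose negation is already
    present are suppressed, so a latent contradiction is never drawn out."""
    return frozenset(s for s in one_step(rules, sentences)
                     if s in sentences or negate(s) not in sentences)

# Toy deduction rule: modus ponens over string "sentences" of the form "A->B".
def modus_ponens(sentences: FrozenSet[Sentence]) -> Set[Sentence]:
    out: Set[Sentence] = set()
    for s in sentences:
        if "->" in s:
            a, b = s.split("->", 1)
            if a in sentences:
                out.add(b)
    return out

base = frozenset({"p", "p->q", "~q"})           # inconsistent once q is derived
print(one_step({modus_ponens}, base))           # contains both q and ~q
print(censored_one_step({modus_ponens}, base))  # q is censored out
```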

A new proposal for logical counterfactuals

2016-07-07T22:15:37.000Z · score: 3 (3 votes)
Comment by gallabytes on The art of grieving well · 2015-12-17T21:05:39.934Z · score: 2 (2 votes) · LW · GW

I feel like this comment belongs on the LessWrong 2.0 article (to the point that I assumed that's where it was when I was told about it), but it doesn't actually matter.

Comment by gallabytes on Open Thread, Jun. 1 - Jun. 7, 2015 · 2015-06-03T20:15:45.539Z · score: 1 (1 votes) · LW · GW

I'd be interested to read another take on it if there's some novel aspect to the explanation. Do you have a particular approach to explaining it that you think the world doesn't have enough of?

Comment by gallabytes on Meetup : San Francisco Meetup: Prisoner's Dilema Tournament · 2015-01-27T08:14:52.326Z · score: 0 (0 votes) · LW · GW

The tournament was definitely interesting, but more as a conversation starter than as a game in itself. I suspect part of the issue was how short this particular game was, but the constant cooperation got old pretty quickly. Some ideas for next time someone wants to run a dilemma tournament:

1) Bracket play - you play with another person until one of you defects while the other cooperates. Example round: both cooperate, both cooperate, both defect, both defect, player 1 defects and player 2 cooperates, player 1 moves on. This has a trivial vulnerability in that always defecting always wins or ties, so there'd have to be some other incentive to encourage cooperation.

2) Factions - each person is part of a faction, and your score includes some minor term for the total utility of your faction at the end of the game. You could go further with this and have a negative (even more minor) term for the total utility of other factions, or maybe add the option of giving points to other players. (Rough sketch of the scoring below.)
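Here's a rough sketch of how that faction scoring could work; the weights and names are hypothetical, just to show the shape of the incentive.

```python
from collections import defaultdict

# Hypothetical weights: small bonus for your own faction's total,
# an even smaller penalty for every other faction's total.
FACTION_BONUS = 0.10
RIVAL_PENALTY = 0.02

def final_scores(raw_points: dict, faction_of: dict) -> dict:
    """Adjust each player's raw PD points with the faction terms."""
    faction_totals = defaultdict(float)
    for player, pts in raw_points.items():
        faction_totals[faction_of[player]] += pts
    grand_total = sum(faction_totals.values())

    scores = {}
    for player, pts in raw_points.items():
        own = faction_totals[faction_of[player]]
        rivals = grand_total - own
        scores[player] = pts + FACTION_BONUS * own - RIVAL_PENALTY * rivals
    return scores

print(final_scores({"alice": 30, "bob": 22, "carol": 25, "dan": 18},
                   {"alice": "red", "bob": "red", "carol": "blue", "dan": "blue"}))
```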

Unrelated note: In the future there should probably be some coordination around walking to/from. While nothing happened this time, Civic Center did not feel like the safest place to be walking around alone late at night.

Comment by gallabytes on Superintelligence Reading Group 2: Forecasting AI · 2014-09-26T06:14:58.395Z · score: 3 (3 votes) · LW · GW

I doubt there is a sharp distinction between them

Actually, let's taboo weak and strong AI for a moment.

By weak AI I mean things like video game AI, self driving cars, WolframAlpha, etc.

By strong AI I think I mean something that can create weak AIs to solve problems. Something that does what I mean by this likely includes a general inference engine. While a self-driving car can use its navigation programs to figure out lots of interesting routes from A to B, if you tell it to go from California to Japan it won't start building a boat.

Comment by gallabytes on Superintelligence Reading Group 2: Forecasting AI · 2014-09-24T00:11:01.050Z · score: 4 (4 votes) · LW · GW

What's your reason for thinking weak AI leads to strong AI? Generally, weak AI seems to take the form of domain-specific creations, which provide only very weak general abstractions.

One example that people previously thought would lead to general AI was chess playing. And sure, the design of chess-playing AI forced some interesting work on efficiently traversing large search spaces, but as far as I can tell it has only done so in a very weak way, and hasn't contributed meaningfully to anything resembling the efficiency of human-style chunking.

Comment by gallabytes on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T23:58:43.168Z · score: 3 (3 votes) · LW · GW

AGI takeoff is an event we as a culture have never seen before, except in popular culture. So, with that in mind, reporters draw on the only good reference point the population has: sci-fi.

What would sane AI reporting look like? Is there a way to talk about AI to people who have only been exposed to the cultural background (if even that) in a way that doesn't either bore them or look at least as bad as this?

Comment by gallabytes on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T23:53:56.687Z · score: 1 (1 votes) · LW · GW

Perhaps people who are a step removed from the actual AI research process? When I say that, I'm thinking of people like Robin Hanson and Nick Bostrom, whose work depends on AI but isn't explicitly about it.

Comment by gallabytes on Superintelligence Reading Group 2: Forecasting AI · 2014-09-23T23:49:12.341Z · score: 2 (2 votes) · LW · GW

The answer to this question depends really heavily on my estimation of MIRI's capability as an organization and on how hard the control problem turns out to be. My current answer is "the moment the control problem is solved and not a moment sooner", but I don't have enough of a grip on the other difficulties involved to say more concretely when that would be.

Comment by gallabytes on A proof of Löb's theorem in Haskell · 2014-09-20T19:40:51.964Z · score: 2 (2 votes) · LW · GW

I'm not really seeing the importance of the separate construct Theorem (A → B) vs (A → B). It seems to get manipulated in the exact same way in the code above. Is there some extra capability of functions that Theorem + postulates doesn't include?

Also, I think you're misunderstanding what a function A → B means in Agda. A → □ A doesn't mean "all true statements are provable"; it means "all statements we can construct a proof term for are provable" - you don't get an element of type A to apply it to without a proof of A, so it's actually a pretty trivial implication. I'm tempted to say that the notion of provability is wholly unnecessary here because of the Curry-Howard correspondence, and that the real issue is the construction of the fixpoint.
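For reference, the standard statements being pointed at (textbook provability-logic facts, not anything specific to the code in the post) are the meta-level necessitation rule, Löb's theorem itself, and the fixpoint sentence its proof runs through:

```latex
\begin{align*}
  &\text{Necessitation (meta-level rule): from } \vdash A \text{ infer } \vdash \Box A\\
  &\text{L\"ob's theorem: } \vdash \Box(\Box A \to A) \to \Box A\\
  &\text{Fixpoint (L\"ob sentence): } \vdash \psi \leftrightarrow (\Box\psi \to A)
\end{align*}
```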

Comment by gallabytes on A proof of Löb's theorem in Haskell · 2014-09-20T08:07:10.692Z · score: 3 (3 votes) · LW · GW

Hmm... so, interestingly enough, the proof is actually much simpler than both yours and the one on Wikipedia when written up in Agda.

That said, just the notion of provability being equivalent to proof doesn't actually lead to an inconsistency in Agda - the positivity checker prevents the declaration of a Löb sentence. But if you admit the sentence, everything breaks in about the way you'd expect.

I've uploaded it as a gist here

EDIT 1: So I seem to have misunderstood Löb's theorem. Going to redo it and update the gist when I think it makes more sense.

EDIT 2: Updated the gist; it should actually contain a valid proof of Löb's theorem now.

Comment by gallabytes on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-17T05:02:04.090Z · score: 2 (2 votes) · LW · GW

True in the case of owls, though in the case of AI we have the luxury and challenge of making the thing from scratch. If all goes correctly, it'll be born tamed.

Comment by gallabytes on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-17T02:25:08.354Z · score: 2 (2 votes) · LW · GW

It just means that the intelligence gap was smaller, potentially much, much smaller, when humans first started developing a serious edge relative to apes. It's not evidence for accumulation per se, but it is evidence against us simply being that much smarter from the get-go, and after renormalizing it functions very much like evidence for accumulation.

Comment by gallabytes on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T21:11:56.125Z · score: 4 (4 votes) · LW · GW

Sure, but I still think that if you elevated the intelligence of a group of chimps to the top 5% of humanity without adding some better form of communication and idea accumulation, it wouldn't matter much.

If Newton were born in ancient Egypt, he might have made some serious progress, but he almost certainly wouldn't have discovered calculus and classical mechanics. Being able to stand on the shoulders of giants is really important.

Comment by gallabytes on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T21:07:54.818Z · score: 1 (1 votes) · LW · GW

Eh, not especially. IIRC, scores have also had to be renormalized on Stanford-Binet and Wechsler tests over the years. That said, I'd bet it has some effect, but I'd be much more willing to bet on less malnutrition, less beating / early head injury, and better public health allowing better development during childhood and adolescence.

I'm very interested in any data that points to other causes behind the Flynn Effect, though, so if you have any, don't hesitate to post it.

Comment by gallabytes on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T03:22:48.440Z · score: 16 (16 votes) · LW · GW

I would say one of the key strong points about the fable of the sparrows is that it provides a very clean intro to the idea of AI risk. Even someone who's never read a word on the subject, when given the title of the book and the story, gets a good idea of where the book is going to go. It doesn't communicate all the important insights, but it points in the right direction.

EDIT: So I actually went to the trouble of testing this by having a bunch of acquaintances read the fable, and, even given the title of the book, most of them didn't come anywhere near getting the intended message. They were much more likely to interpret it as about the "futility of subjugating nature to humanity's whims". This is worrying for our ability to make the case to laypeople.

Comment by gallabytes on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T03:16:46.862Z · score: 1 (1 votes) · LW · GW

Do you have any examples of approaches that are indefinitely extendable?

Comment by gallabytes on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T02:02:33.505Z · score: 4 (4 votes) · LW · GW

Interestingly enough, a team at MIT managed to make an AI that learned how to play from the manual and proceeded to win 80% of its games against the built-in AI, though I don't know which difficulty it was set to, or how the freeciv AI compares to the one in normal Civilization.

Comment by gallabytes on Superintelligence Reading Group - Section 1: Past Developments and Present Capabilities · 2014-09-16T01:28:04.537Z · score: 4 (4 votes) · LW · GW

I would bet heavily on the accumulation. National average IQ has been going up by about 3 points per decade for quite a few decades, so there have probably been times when Koko's score would have been above the human average of the day. Now, I'm more inclined to say that this doesn't mean great things for the IQ test overall, but I put enough trust in it to say that it's not differences in intelligence that prevented the gorillas from reaching the prominence of humans. It might have slowed them down, but given this data it shouldn't have kept them pre-Stone-Age.

Given that the most unique aspect of humans relative to other species seems to be the use of language to pass down knowledge, I don't know what else it really could be. What other major things do we have going for us that other animals don't?

Comment by gallabytes on Search Engines and Oracles · 2014-07-09T15:25:35.544Z · score: 1 (1 votes) · LW · GW

"Despite the theoretical availability to find out virtually anything from the Internet, we seem pretty far from any plausible approximation of this dream"

I'm not convinced this is as easy as you seem to think. One of the fundamental problems with all attempts at natural-language programming and/or queries is that natural languages have nondeterministic parsing. There are lots of ambiguities floating around in there, and a lot of social modeling is necessary to correctly parse most sentences.

To take your "first ruler of Russia" example, to infer the correct query, you'd need to know:

  • That they mean Russia the landmass, not Russia the nation-state
  • What they mean by "ruler of Russia" (for example, does Kievan Rus count as Russia?)