Comments

Comment by Greg C (greg-colbourn) on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-12-16T16:22:48.189Z · LW · GW

we probably won’t figure out how to make AIs that are as data-efficient as humans for a long time--decades at least. This is because 1. We’ve been trying to figure this out for decades and haven’t succeeded

EfficientZero seems to have put paid to this pretty fast. It's incredible that the algorithmic advances involved aren't even that complex. It kind of makes you think that people haven't really been trying all that hard over the last few decades. Worrying in terms of its implications for AGI timelines.

Comment by Greg C (greg-colbourn) on Biology-Inspired AGI Timelines: The Trick That Never Works · 2021-12-07T16:29:50.163Z · LW · GW

Ok, but Eliezer is saying BOTH that his timelines are short (significantly less than 30 years) AND that he thinks ML isn't likely to be the final paradigm (judging not just from this conversation, but from the other, real, ones in this sequence).

Comment by Greg C (greg-colbourn) on Biology-Inspired AGI Timelines: The Trick That Never Works · 2021-12-07T12:26:33.296Z · LW · GW

2 * 10^16 ops/sec*

(*) Two TPU v4 pods.

Shouldn’t this be 0.02 TPU v4 pods?
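
A rough back-of-the-envelope check (assuming, as a figure not stated in the post, that a TPU v4 pod is about 4096 chips at roughly 275 bf16 TFLOPS each, i.e. around 1.1 exaFLOPS per pod):

```python
# Sanity check: how many TPU v4 pods is 2e16 ops/sec?
# Assumed (not from the post): ~4096 chips/pod * ~275e12 ops/sec/chip ~ 1.1e18 ops/sec/pod.
pod_ops_per_sec = 4096 * 275e12
target_ops_per_sec = 2e16
print(target_ops_per_sec / pod_ops_per_sec)  # ~0.018, i.e. roughly 0.02 pods, not two pods
```

On that assumption the quoted figure does come out to roughly 0.02 pods rather than two.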

Comment by Greg C (greg-colbourn) on Biology-Inspired AGI Timelines: The Trick That Never Works · 2021-12-07T11:49:56.414Z · LW · GW

I note that mixture-of-experts is referred to as the kind of thing that in principle could shorten timelines, but in practice isn't likely to. Intuitively, and naively from neuroscience (different areas of the brain used for different things), it seems that mixture-of-experts should have a lot of potential, so I would like to see more detail on exactly why it isn't a threat.
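
For readers less familiar with the term, here is a minimal sketch of the gating idea behind mixture-of-experts: a learned gate routes each input to a small subset of expert sub-networks, so compute per input scales with the number of experts consulted rather than the total number of experts. All names, shapes, and the toy linear "experts" below are illustrative only, not taken from the post:

```python
import numpy as np

def moe_forward(x, experts, gate_weights, top_k=2):
    """Route input x to the top_k experts chosen by a softmax gate.

    x: (d,) input vector; experts: list of callables (d,) -> (d,);
    gate_weights: (num_experts, d) matrix producing one gating score per expert.
    """
    scores = gate_weights @ x                 # one score per expert
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()                      # softmax gate
    chosen = np.argsort(probs)[-top_k:]       # indices of the top_k experts
    # Only the chosen experts run, so compute scales with top_k, not num_experts.
    return sum(probs[i] * experts[i](x) for i in chosen)

# Illustrative usage with random linear "experts".
rng = np.random.default_rng(0)
d, num_experts = 8, 4
experts = [lambda x, W=rng.normal(size=(d, d)): W @ x for _ in range(num_experts)]
gate_weights = rng.normal(size=(num_experts, d))
print(moe_forward(rng.normal(size=d), experts, gate_weights))
```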

Comment by Greg C (greg-colbourn) on Biology-Inspired AGI Timelines: The Trick That Never Works · 2021-12-07T11:46:25.405Z · LW · GW

Eliezer has short timelines, yet thinks that the current ML paradigm isn't likely to be the final paradigm. Does this mean that he has some idea of a potential next paradigm? (Which he is, for obvious reasons, not talking about, but presumably expects other researchers to uncover soon, if they don't already have an idea.) Or is it that the recent surprising progress within the ML paradigm (AlphaGo, AlphaFold, GPT-3 etc) somehow makes it more likely that an even more algorithmically efficient paradigm will emerge soon? (If the latter, I don't see the connection.)

Comment by Greg C (greg-colbourn) on Thiel on secrets and indefiniteness · 2021-12-03T17:38:27.116Z · LW · GW

There’s an optimistic way to describe the result of these trends: today, you can’t start a cult. Forty years ago, people were more open to the idea that not all knowledge was widely known.

This doesn't seem to have aged well in light of the rampant spread of misinformation and conspiracy theories on social media (especially Facebook!)

Comment by Greg C (greg-colbourn) on LessWrong FAQ · 2021-12-03T12:32:01.809Z · LW · GW

Test spoiler:

Test

Comment by Greg C (greg-colbourn) on Christiano, Cotra, and Yudkowsky on AI progress · 2021-12-03T12:26:36.972Z · LW · GW

Curious about Eliezer's and Paul's takes on the Netflix series neXt as a plausible future scenario. My guess:

too Eliezer-ish for Paul; too Paul-ish for Eliezer.

Comment by Greg C (greg-colbourn) on Christiano, Cotra, and Yudkowsky on AI progress · 2021-12-03T12:18:31.365Z · LW · GW

There was still a big update from ~20%->90%, which is what is relevant for Eliezer's argument, even if he misremembered the timing. The fact that the update was from the Fan Hui match rather than the Lee Sedol match doesn't seem that important to the argument [for superforecasters being caught flatfooted by discontinuous AI-Go progress].

Comment by Greg C (greg-colbourn) on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-12T20:32:32.725Z · LW · GW

I think the people cold emailing Terry in this scenario should at least make sure they have the $10M ready!

Comment by Greg C (greg-colbourn) on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-12T12:58:00.461Z · LW · GW

Ha, the same point was made on the EA Forum! (What is the origin of the idea?)

I think we probably want to go about it in a way that maximises credibility - i.e. having it come from a respected academic institution, even if the money is from elsewhere (CHAI, FHI, CSER, FLI, BERI, SERI could help with this). It should also be open to all Fields Medalists / all Nobel Prize winners in Physics / the equivalent in Computer Science, or Philosophy(?) or Economics(?) / anyone a panel of top people in AGI Safety would have on their dream team (who otherwise would be unlikely to work on the problem).

Comment by Greg C (greg-colbourn) on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-12T12:45:15.076Z · LW · GW

Re Musk, his main goal is making a Mars colony (SpaceX), with lesser goals of reducing climate change (Tesla, SolarCity) and aligning AI (OpenAI, FLI). Making a trillion dollars seems more like a side effect of using engineering and capitalism as the methodology. Lots of his top-level goals also involve "making sure you do things right" (i.e. making sure the first SpaceX astronauts don't die). OpenAI was arguably a misstep though.

Comment by Greg C (greg-colbourn) on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-12T12:44:06.471Z · LW · GW

To bypass the argument over whether pure maths talent is what is needed, we should generalise "Terry Tao / the world's best mathematicians" to "anyone a panel of top people in AGI Safety would have on their dream team (who otherwise would be unlikely to work on the problem)".

Comment by Greg C (greg-colbourn) on The Craft & The Community - A Post-Mortem & Resurrection · 2018-05-01T03:26:38.888Z · LW · GW

The amounts are disputed

The first I heard of this was after I discovered that they had absconded whilst I was away. They had been racking up debts for months, during which time none of them disputed what they owed.* (To be absolutely clear: this is not flaking on future-dated rent; these are accumulated debts for past rent and bills, i.e. services that have already been used.)

*The one exception to this was Ben laying claim to a saving made by me on the purchase of my house. He had a chance encounter that led to a heads-up on potential issues with my house, and ultimately allowed me to negotiate £1,800 off the asking price. This info was given to me unconditionally at the time, as any friend would do. I don't think it's reasonable for him to backdate a claim to some of the savings and regard it as fungible with rent and bills owed. If he'd wanted to sell me the information at the time, fine. I might've given him £100 for it or something (although I'm sure a lot of people would just be like "WTF dude, just tell me! I thought you were a friend!"). But he didn't do this. I will admit that when I first heard this claim, the first-order consequentialist in me thought that there might be something to it. The second-order consequentialist in me ran away screaming, however (see the mention of "Kafkaesque hellscape" below). Anyway - and this is neither here nor there, but - had he paid his debts when asked, I probably would have given him something for it.

damages resulting from Greg's personal negligence

I assume Ben is referring to a hot water outage early in the tenancy, for much of which I was away. I contacted the landlord about it. I also suggested they phone the landlord, but they didn't want to (note that I was only the named tenant here, not the landlord; just because their names weren't on the rental contract, it doesn't mean that they weren't also tenants of the property, with every right to contact the landlord about things that needed fixing). Perhaps they might also add the leaky taps, which might end up costing us £10 each in wasted water at most (if a reading of the meter is ever taken). Anyway, I offered them concessions on these points (£250 off for the hot water, £10 off for the leaky taps), and also asked them how much they thought was reasonable. I got no response.

if all points in our counterclaim for damages hold water, you would actually be owing thousands to us

I also have a number of claims of indefinite monetary value against them, some of which are quite substantial, so I really doubt this.

you rebuffed all claims as trivial

No, my main point was - and still is - that their claims in no way amount to justification for breaking the norm that in shared houses fair shares of rent and bills are paid. Without this norm, shared houses are impossible, and by extension, so are physical rationalist communities. Claims of intangible or indefinite monetary value are not fungible with claims of tangible monetary value (i.e. rent and bills). This point is absolutely key, and should be regarded as the main takeaway from all of this. If claims of indefinite (or intangible) monetary value were always explicitly counted, and regarded as fungible with claims of definite (or tangible) monetary value, we'd end up in a Kafkaesque hellscape of a society pretty quickly.

36 hours to pay up or else

What I actually said -- after posting to a Kernel Project Facebook group chat and getting removed from the group immediately -- was: "Ok, I won't post to any other groups for at least 36 hours. You have a chance to redeem yourselves and not be part of the 'rationalist houses with people just fucking off owing money' statistic." In this case, I would have accepted an attempt to negotiate amounts, or an offer to pay in instalments - anything conciliatory, really. Instead I was met with silence, and the odd snipe.

Remember that this was after they'd spent months racking up debts, without disputing them whenever I periodically reminded them of what they owed. It was also clear to me that they had no intention of either paying or negotiating, given that they had revoked my access to the house finances spreadsheet on Google Sheets (I discovered this shortly after discovering the empty house).

Note that they probably thought they had concealed the evidence (and thereby got away with twisting the truth) by removing me from the Keybase house chat, so that I wouldn't have a record of it. Luckily I had managed to take pictures of the chat logs before they did this. After this they also went from disputing my figures - which are approximations, as I no longer have access to the spreadsheet - to accepting them.

taken this to every platform you could find

I have commented on two LessWrong posts, and posted to the “Coordination of Rationality/EA/SSC Housing Projects” Facebook group chat.

contacting one person's startup team members and potential seed accelerators

There was good reason for this: I am acting as a shareholder protecting my investment. (Yes, I am a shareholder in the company in question. In fact I provided seed funding for it.) The person in question clearly isn't fit to be CEO of the company, given their woeful grasp of tactics and strategy as illustrated by this case. I mean, risking their reputation over a debt of £2,000 when their company aspires to unicorn status? I frankly find it hard to believe that this is happening.

another person's immediate family in attempt to pressure them into compliance

I contacted their parents in an attempt to shame them (the housemates). To be fair, I realise now that perhaps I was typical-minding a bit here; they might not be capable of experiencing shame. Note that I just stated facts and said I would appreciate it if the parents would "have a word" with them. They are free to write to my parents; my conscience is clear regarding this matter.

please don't pretend to mourn something you actively opposed during the nine months you shared a house with us.

During the last nine months, I have:

- Helped get the physical manifestation of the project started by being the only one willing to put my name on legally binding documents (being the named tenant, having my name on all the bills);

- Provided white goods and appliances for house use;

- Attended meet-ups;

- Spent a week of my time reading (inc. linked material), and editing, drafts of this blog post;

- Offered my house for use by the project (with individual rental contracts for tenants);

- Regarded them as friends, despite our differences and disagreements. This was right up until last week when I discovered the rental house empty.

They displayed massive entitlement to my house. The plan was for us to all agree on an interior design, and Ben was going to do a lot of the work on the renovation. However, they weren't accepting of any of my input on the design, despite it being my house! The arguments over it were so bad, and Ben was so difficult to work with, that I took him off the job. This is perhaps the trigger for all this happening: some perceived slight - and a great deal of misplaced pride - on Ben's part. He was concerned that if the aesthetics of the house weren't worthy of his "objectively correct" vision, the Kernel Project would be hampered in its ability to attract prestigious people. I find it absolutely bizarre that he thinks this, and yet appears to have no concern for how what basically amounts to petty theft looks to said prestigious people!

—————————————————————————————————————————————————

Lastly, I will say that I am not motivated by money here, but rather by fairness. I have enough money; it makes no sense for me to attempt to cheat them out of what are, to me, relatively small amounts. To show that I am absolutely clear about this, I have offered that they donate what they owe me to EA charities of their choice in lieu of paying me back. We can then discuss our other grievances (i.e. those of indefinite monetary value), and whether they require any net financial transfers either way. For my part, this has been an attempt to uphold the golden norm that fair shares of rent and bills are paid in shared houses. Again: without this norm being very widely respected, physical rationalist communities aren't possible.

Comment by Greg C (greg-colbourn) on The Craft & The Community - A Post-Mortem & Resurrection · 2018-04-26T16:01:36.090Z · LW · GW

Update on this project: it has fallen into the classic irrational rationalist attractor of people leaving the group house unannounced owing money (approx. £3,000 in rent and bills between them, owed to me). The violation of this most basic of social norms leads me to conclude that the Manchester community (the Kernel Project) is now doomed. Now that it is regarded as acceptable, amongst those who remain, to screw people over on rent and bills, what's to stop it happening again and again, until no one will go near the project? Such a grand vision, trashed for such a small short-term gain. I'm finding it hard to believe, to be honest. Regardless of whether genuine friendships (as opposed to temporary alliances) are even possible in this community, there seems to be a blind spot when it comes to basic game-theoretic considerations involving trust, reputation and appropriate risks. Why are physical rationalist communities so prone to this? It's so irrational.

Comment by Greg C (greg-colbourn) on Notes from the Hufflepuff Unconference · 2018-04-26T14:57:05.821Z · LW · GW

Re: group houses and people leaving owing money. I've just fallen victim to another example of this - at the Kernel Project, of all places - https://www.lesswrong.com/posts/wmEcNP3KFEGPZaFJk/the-craft-and-the-community-a-post-mortem-and-resurrection - a community with the stated aim of setting down long-term, multigenerational roots (in Manchester, UK)! All 3 of the others left the group house unannounced at the end of the tenancy, whilst I was away, owing me approx. £3,000 between them. The violation of this most basic of social norms leads me to conclude that the Kernel Project (like others before it) is now doomed. Now that it is regarded as acceptable, amongst those who remain, to screw people over on rent and bills, what's to stop it happening again and again, until no one will go near the project? Such a grand vision, trashed for such a small short-term gain. I'm finding it hard to believe, to be honest. There's this blind spot when it comes to such basic game-theoretic considerations involving trust, reputation and appropriate risks. Why are physical rationalist communities so prone to it? It's so irrational.