The Maker of MIND 2021-11-20T16:28:56.327Z
On Raising Awareness 2021-11-17T17:12:36.843Z
The Best Software For Every Need 2021-09-10T02:40:13.731Z
(2009) Shane Legg - Funding safe AGI 2021-07-17T16:46:24.713Z
What would it look like if it looked like AGI was very near? 2021-07-12T15:22:35.321Z
Going Out With Dignity 2021-07-09T22:07:59.169Z
Irrational Modesty 2021-06-20T19:38:25.320Z
AI-Based Code Generation Using GPT-J-6B 2021-06-16T15:05:26.381Z
A Breakdown of AI Chip Companies 2021-06-14T19:25:46.720Z
Parameter vs Synapse? 2021-03-11T15:30:59.745Z
Thoughts on the Repugnant Conclusion 2021-03-07T19:00:37.056Z


Comment by Bjartur Tómas on Tears Must Flow · 2021-11-30T19:51:08.472Z · LW · GW

Marshalling yourself in this way reflects poorly on your movement. If I imagine myself as a member of this family, I would react poorly to the behaviour displayed, and would be repulsed by a movement which can make someone act in such a way. 

Now, I don't share your emotional reaction to animal cruelty, and plausibly I am less empathetic. But consider what your emotional reaction here is doing, and compare it to an abstract acknowledgement of the harms and a kind, thoughtful, but confident explanation of your veganism.

And as a matter of scope, your reaction here is incorrect. The terror you saw at that table is as nothing compared to the industrial farming conditions as a whole. Reacting to it as a synecdoche of the agricultural system does not seem useful. It seems paralyzing.  

Also, promoting norms of disassociation among vegans makes veganism even more unappealing than it already is. 

Once cruelty free meats are cheap, and veganism itself becomes a cheap signal, those looking back from the future at your pledge will admire this uncompromising stance. But in terms of actually doing the most good for animals, I suspect it is harmful. 

Comment by Bjartur Tómas on Which song do you think is perfect? Why? · 2021-11-27T15:36:26.111Z · LW · GW

Love of the Loveless by The Eels:

Comment by Bjartur Tómas on The Maker of MIND · 2021-11-24T16:09:36.909Z · LW · GW

Thanks. I've written enough aborted novels to know I don't like writing novels, but I will probably write a few more short stories at some point.

Comment by Bjartur Tómas on The Maker of MIND · 2021-11-24T01:54:12.658Z · LW · GW

This story doesn’t perfectly represent my opinions, and I actually have a lot of sympathy for “mundane utopias”.

Comment by Bjartur Tómas on Giving Up On T-Mobile · 2021-11-21T17:16:37.493Z · LW · GW

Really wish I could get GV in Canada.

Comment by Bjartur Tómas on The Maker of MIND · 2021-11-21T14:25:28.012Z · LW · GW

Thanks. Fixed... I mean, the secret meaning was too subtle so I removed it so as to not confuse people.

Comment by Bjartur Tómas on The Maker of MIND · 2021-11-20T20:29:37.101Z · LW · GW

Thanks! I felt weird using LW's copy-editing feature for fiction and didn't, so this is really helpful. The double spaces were not intentional, ditto with the unclosed quotations.

Comment by Bjartur Tómas on The Maker of MIND · 2021-11-20T17:15:55.782Z · LW · GW

Nothing conscious, but I have read The Metamorphosis of Prime Intellect.

Comment by Bjartur Tómas on The Maker of MIND · 2021-11-20T16:57:11.895Z · LW · GW

I started writing this for the EA Forum Creative Writing Contest but missed the deadline, so posting it here. 

Comment by Bjartur Tómas on On Raising Awareness · 2021-11-19T15:45:32.443Z · LW · GW

Yeah, any ideas on how to filter for this? It seems difficult not to have this effect on someone. One would hope the smarter people would get orthogonality, but empirically that does not seem to be the case. The brightest people in AI show insane naïveté about the likely results of AGI.

Comment by Bjartur Tómas on On Raising Awareness · 2021-11-18T23:20:41.479Z · LW · GW

Or those who might choose to become programmers.

Comment by Bjartur Tómas on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-16T04:56:57.603Z · LW · GW

Ha, I know. I was weighing in, in support, against this claim he was replying to:

>10 million dollars will probably have very small impact on Terry Tao's decision to work on the problem.

Comment by Bjartur Tómas on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-16T00:39:44.095Z · LW · GW

I'm probably too dumb to have an opinion on this matter, but the belief that all super-genius mathematicians care nothing about being fabulously wealthy strikes me as unlikely. 

Comment by Bjartur Tómas on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-12T14:50:16.392Z · LW · GW

The idea has been joked about for a while. I think it is probably worth trying, both in the literal case of offering Tao 10 million and in the generalized case of finding the highest-g people in the world and offering them salaries that seem truly outrageous. Here and on the EA Forum, many claim genius people would not care about 10 million dollars. I think this is, to put it generously, not at all obvious, and certainly something we should establish empirically. Though Eliezer is a genius, I do not think he is literally the smartest person on the planet. To the extent we can identify the smartest people on the planet, we would be a really pathetic civilization if we were not willing to offer them NBA-level salaries to work on alignment.

Comment by Bjartur Tómas on Discussion with Eliezer Yudkowsky on AGI interventions · 2021-11-11T06:29:44.364Z · LW · GW

I know we used to joke about this, but has anyone considered actually implementing the strategy of paying Terry Tao 10 million dollars to work on the problem for a year? 

Comment by Bjartur Tómas on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T20:04:37.832Z · LW · GW

LSD did not permanently lower my mathematical abilities, and if I suggested that I probably misspoke? I suspect it damaged my memory, though; my memory is worse now than before I took LSD. 

Thanks. Corrected; I probably conflated the two. But my feelings towards that change are the same, so the line otherwise remains unchanged. I should probably organize my opinions/feelings on this topic and write an effortpost or something rather than hash it out in the comments.

Comment by Bjartur Tómas on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T19:46:42.284Z · LW · GW

>And I get you might think I'm... brainwashed or something? by drugs?

I'm not sure what you find implausible about that. Drugs do not literally propagandize the user, but many of them can hijack the reward system, and in the case of psychedelics they seem to alter beliefs in reliable ways. Psychedelics are also taken in a memetic context with many crystallized notions about what the psychedelic experience is, what enlightenment is, and the idea that enlightenment itself is a mysterious but worthy pursuit.

The classic joke about psychedelics is they provide the feelings associated with profound insights without the actual profound insights. To the extent this is true, I feel this is pretty dangerous territory for a rationalist to tread.  

In your own case, unless I am misremembering, I believe on your blog you discuss LSD permanently lowering your mathematical abilities and degrading your memory. This seems really, really bad to me…

>Maybe this one is less concrete, but some part of me feels really deeply at peace, always, like it knows everything is going to be ok and I didn't have that before.

I’m glad your anxiety is gone, but I don't think everything is going to be alright by default. I would not like to modify myself to think that. It seems clearly untrue. 

Perhaps the masturbation line was going too far.  But the gloss of virtue that “seeking enlightenment” has strikes me as undeserved. 

Comment by Bjartur Tómas on My experience at and around MIRI and CFAR (inspired by Zoe Curzi's writeup of experiences at Leverage) · 2021-10-18T14:53:28.143Z · LW · GW

Even in the case of Sam Harris, who seems relatively normal, he lost a decade of his life pursuing “enlightenment” through meditation - also notable is that this was spurred on by psychedelic use. Though I am sure he would not agree with the frame that it was a waste, I read his *Waking Up* as a bit of a horror story. For someone without his high IQ and indulgent parents, you could imagine more horrible ends. 

I know of at least one person who was bright, had wild ambitious ideas, and now spends his time isolated from his family inwardly pursuing “enlightenment.” And this through the standard meditation + psychedelics combination. I find it hard to read this as anything other than wire-heading, and I think a good social norm would be one where we consider such behavior as about as virtuous as obsessive masturbation.

In general, for any drug that produces euphoria, especially spiritual euphoria, the user develops an almost romantic relationship with their drug, as the feelings it inspires are as intense as familial love, and sometimes more so. One should at least be slightly suspicious of the benefits propounded by their users, who in many cases literally worship their drugs of choice. 

Comment by Bjartur Tómas on Dominic Cummings : Regime Change #2: A plea to Silicon Valley · 2021-10-02T20:16:07.897Z · LW · GW

Someone's been reading Yarvin. 

Comment by Bjartur Tómas on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-26T16:23:43.814Z · LW · GW

Dead man's curl command?

Comment by Bjartur Tómas on Petrov Day 2021: Mutually Assured Destruction? · 2021-09-26T16:18:08.610Z · LW · GW

I suspect it was supposed to be a "false alarm".

Comment by Bjartur Tómas on [Book Review] Altered Traits · 2021-09-25T17:51:40.118Z · LW · GW

Reading the part about the default network, I found I could turn off my internal monologue. I cannot read or write in that state. It seemed a little strange, but neither pleasant nor unpleasant. 

Comment by Bjartur Tómas on [deleted post] 2021-09-25T16:06:46.518Z

Topic: AI

Question: Why can't we use MuZero on Core Wars to train models to code? MuZero uses self-play to master games, and Core Wars is a programming game amenable to self-play. Why has no one tried this? Or if they have, why does it not work?

Why my research failed: Googling this question, I get results that describe how MuZero works, DeepMind's blog post on the topic, and various Hacker News threads that do not address my question.

Comment by Bjartur Tómas on AI-Based Code Generation Using GPT-J-6B · 2021-09-25T15:10:31.382Z · LW · GW

This post is obsolete now with Codex, but it is interesting how (even knowing little about ML myself) just hanging around ML people on Discord let me get a better sense of the future than I would have otherwise. Perhaps a post like The Power of Lurking might be worthwhile. 

Comment by Bjartur Tómas on Sam Altman Q&A Notes - Aftermath · 2021-09-08T21:08:24.324Z · LW · GW

Though my preferred outcome would be you taking the post down without much of a fuss, I understand this is a pretty self-serving preference. I did like the compromise idea of making the post available only to members, but that does not appear to be an existing feature of the site. 

Taking it down helps remedy a failure of my own rather than yours, as we clearly should have been more explicit about this. 

You posting them initially is perfectly understandable. Though I disagreed with your desire to keep them up after I requested them down, I understand this is a matter of opinion.

"Defection" is a pretty loaded word and I should not have used it. 

In general, I think it is really great when people provide public goods like book reviews or highlights (I also think it is really rewarding and I have never regretted doing such things myself), so to the extent this has discouraged you from this path, I would like to point out that this is obviously a weird “scissor case” and similar efforts in the future will certainly be well received. 

Comment by Bjartur Tómas on [deleted post] 2021-09-07T21:39:30.766Z

I think it has mostly been pretty civil. I have nothing against the OP and don't think he is malicious; I just think the situation is unfortunate. I was not blameless. We should have explicitly mentioned in the emails not to publish notes, and I should have asked the OP flat out to remove it in my initial reply, rather than making my initial, slightly timid attempt. 

Most of our meetups are recorded and posted publicly, and obviously we are fine with summarization and notes, but about 1 in 10 guests prefer not to be recorded.

Comment by Bjartur Tómas on [deleted post] 2021-09-07T19:18:22.412Z

I would also like it to be removed. 

Comment by Bjartur Tómas on [deleted post] 2021-09-07T14:59:57.334Z

>If you don't want any notes to be published this post is a good incentive to make that explicit in the future or even retroactively. 

I consider this point to be slightly uncivil, but we will be explicit in the future. 

Comment by Bjartur Tómas on [deleted post] 2021-09-07T14:54:51.363Z

See, it is on the front page of Hacker News now, and all over Reddit. I'm the person who books guests for Joshua's meetups, and I feel like this is a sort of defection against Altman and future attendees of the meetup. As I said, I think notes are fine and sharing them privately is fine, but publishing on the open web vastly increases the probability of some journalist writing a click-bait story about your paraphrased take on what Altman said. 

Actually attending the meetups was a trivial inconvenience that reduced the probability of this occurring. Perhaps the damage is now done, but I really don't feel right about this.

I take some responsibility for not being explicit about not publishing notes on the web; for whatever reason this was not a problem last time.

Comment by Bjartur Tómas on [deleted post] 2021-09-06T20:06:53.596Z

If possible on this site, perhaps a good compromise would be to make it available to LessWrong members only.

Comment by Bjartur Tómas on [deleted post] 2021-09-06T18:27:13.648Z

I think it is fine to take notes, and fine to share them with friends. I'd prefer if this was not posted publicly on the web, as the reason he did not want to be recorded is it allowed him to speak more freely.

Comment by Bjartur Tómas on AI-Based Code Generation Using GPT-J-6B · 2021-08-10T17:57:08.741Z · LW · GW

Thanks! Any thoughts on Codex? Do you think insane progress in code generation will continue for at least a few years?

Comment by Bjartur Tómas on Open Philanthropy is seeking proposals for outreach projects · 2021-07-21T18:19:31.845Z · LW · GW

Regarding your podcast example, I have some thoughts:

Psychometrics is both correct and incredibly unpopular - this means there is possibly an arbitrage here for anyone willing to believe in it.

Very high IQ people are rare and often have hobbies that are considered low-status in the general population. Searching for low-status signals that are predictive of cognitive ability looks to be an efficient means of message targeting. 

It is interesting to note that Demis Hassabis's prodigious ability was obvious to anyone paying attention to board game competitions in the late 90s. It may have been high ROI to sponsor the Mind Sports Olympiad at that time just for a small shot at influencing someone like Demis. There are likely other low-status signals of cognitive ability that will allow us to find diamonds in the rough. 

Those who do well in strategic video games, board games, and challenging musical endeavors may be worth targeting. (Heavy metal for example - being very low-status and extremely technical musically - is a good candidate for being underpriced).

With this in mind, one obvious idea for messaging is to run ads. Unfortunately, high-impact people almost certainly have ad-blockers on their phones and computers. 

However, the podcast space offers a way around this. Most niche 3rd party apps allow podcasters to advertise their podcasts on the podcast search pages. On the iPhone, at least, these cannot be adblocked trivially.

As the average IQ of a third-party podcast app user is likely slightly higher than that of first-party podcast app users, the audience is plausibly already slightly enriched for high-impact people. By focusing ads on podcast categories that are both cheap and good proxies for listeners' IQs (especially of the low-status kind mentioned above), one may be able to do even better.

I have been doing this for the AXRP podcast on the Overcast podcast app, and it has worked out to about ~5 dollars per subscriber. I did this without asking the permission of the podcast's host.

Due to the recurring nature of podcasts and the parasocial relationship podcast listeners develop with their hosts, it is my opinion that their usefulness as a propaganda and inculcation tool is underappreciated at this time. It is very plausible to me that 5 dollars per subscriber may indeed be very cheap for the right podcast. 

Directly sponsoring niche podcasts with extremely high-IQ audiences may be even more promising. There are likely mathematics, music theory, games and puzzle podcasts that are small enough to have not attracted conventional advertisers but are enriched enough in intelligent listeners to be a gold mine from this perspective. 

I do not think I am a particularly good fit for this project. My only qualification is I am the only person I am aware of who is running such a project. Someone smarter with a better understanding of statistics would plausibly do far better. Perhaps if you have an application by a higher-quality person with a worse idea, you can give them my project. Then I can use my EA budget on something even crazier! 

Comment by Bjartur Tómas on The shoot-the-moon strategy · 2021-07-21T17:11:21.870Z · LW · GW

This is my favourite LW post in a long while. Trying to think what the shoot-the-moon strat would be for AI risk, ha.

Comment by Bjartur Tómas on Going Out With Dignity · 2021-07-10T05:39:43.578Z · LW · GW

Fair enough. "Silly" is out. 

Comment by Bjartur Tómas on Musing on the Many Worlds Hypothesis · 2021-07-06T14:53:49.160Z · LW · GW

On reading your words I start to see,

The sheer improbability of me,

I will remember this for if I don’t,

The me who recalls this moment won’t

Be the me who recalls this thought,

And instead will be one that has forgot,

That they are me not someone new,

This class of "mes" may as well be you!

Comment by Bjartur Tómas on Irrational Modesty · 2021-06-21T14:35:44.689Z · LW · GW

Another, though this time slightly tongue-in-cheek, motivational technique that may be helpful: if it feels to you like a "status overreach" to try to save the world, it may help to reframe it as merely saving yourself - with saving the world just a happy, incidental side effect.

Comment by Bjartur Tómas on AI-Based Code Generation Using GPT-J-6B · 2021-06-16T21:39:32.055Z · LW · GW

I don't know too much about it. But I do know it was used extensively by Shell; they credited it with allowing them to respond to the Oil Shock much quicker than their competitors. They had analyzed the symptoms of a similar scenario (which was considered extremely outlandish at the time of the scenario's creation) and began to notice eerie similarities between those symptoms and their present reality.

I see it as a sort of social technology that tries to assist an organization (and perhaps an individual) in resisting becoming the proverbial slowly-boiling frog. 

As to evidence of its efficacy, I am only aware of anecdotes. There appears to be an extensive Wikipedia page on the topic, but I have not read it - my knowledge comes mostly from hearing Vernor Vinge speak about the technique, as he assisted in scenario-creation for several companies. 

Ever since I heard Vinge speak about this, I have occasionally tried to think about the present as if it were a scenario I developed in the past: what sort of scenario would it be, how surprised would my past self be, and so on. Seeing how much The Pile improved GPT-J's performance on this task triggered such thoughts.

Comment by Bjartur Tómas on A Breakdown of AI Chip Companies · 2021-06-16T04:05:38.383Z · LW · GW

I did not write this post. Just thought it was interesting/relevant for LessWrong.

Comment by Bjartur Tómas on Are we in an AI overhang? · 2021-04-03T15:19:01.940Z · LW · GW

Just posting in case you did not get my PM. It has my email in it.

Comment by Bjartur Tómas on Logan Strohl on exercise norms · 2021-03-30T16:16:48.796Z · LW · GW

This is probably not a meta enough comment, but I have been using kettlebells since the pandemic and I think they are the highest ROI form of exercise I have ever tried. I do 5 minutes of kettlebell swings with a 60 pound bell 3 times a day: before work, on my lunch break, and after work. My strength has significantly increased and it feels like a good cardio workout too.

My big problem with exercise is not the discomfort but the monotony. Swings are much more exhausting than most exercises and are also a hybrid of lifting and cardio, making them very efficient.

Comment by Bjartur Tómas on Are we in an AI overhang? · 2021-03-11T15:27:20.186Z · LW · GW

Your estimates of hardware advancement seem higher than most people's. I've enjoyed your comments on such things and think there should be a high-level, full-length post on them, especially with widely respected posts claiming much longer times until human-level hardware. I would be willing to subsidize such a thing if you are interested: I would pay 500 USD to yourself or a charity of your choice for a post on the potential of ASICs, Moore's law, how quickly we can overcome memory bandwidth bottlenecks, and such things. I would also subsidize a post estimating an answer to this question:

Comment by Bjartur Tómas on Are we in an AI overhang? · 2020-07-27T14:53:33.704Z · LW · GW

One thing we have to account for is advances in architecture, even in a world where Moore's law is dead, and the extent to which memory bandwidth is a constraint on model size. You could rephrase this as asking how much of an "architecture overhang" exists. One frame to view this through: in the era of Moore's law, we banked a lot of parallel architectural advances because we lacked a good use case for them. We now have such a use case. So the question is how much performance is sitting in the bank, waiting to be pulled out in the next 5 years.

I don't know how seriously to take the AI ASIC people, but they are claiming very large increases in capability, on the order of 100-1000x in the next 10 years. If this is true, it is a multiplier on top of increased investment. See this response from a panel including big-wigs at NVIDIA, Google, and Cerebras about projected capabilities: On top of this, one has to account for algorithmic advancement:

Another thing to note: though by parameter count the largest modern models are 10000x smaller than the human brain, if one buys the parameter >= synapse idea (which most don't, but which is not entirely off the table), their temporal resolution is far higher. So once we get human-sized models, they may be trained almost comically faster than human minds are. Thus on top of an architecture overhang we may have a "temporal resolution overhang", where models as powerful as the human brain will almost certainly be trained much faster than humans learn. And on top of this there is an "inference overhang": because inference is much, much cheaper than training, once you are done training an economically useful model, you will almost tautologically have a lot of compute left over to exploit it with.
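A back-of-the-envelope sketch of that parameter-count gap (assumptions: ~1.75e11 parameters for the largest 2020-era model, and 1e14 to 1e15 synapses in the human brain; both figures are rough and contested, and are used only to get the order of magnitude):

```python
# Rough comparison of the largest 2020-era model's parameter count to
# estimates of the human brain's synapse count. All figures are coarse,
# contested estimates; only the order of magnitude matters here.

largest_model_params = 1.75e11  # GPT-3-scale model, 2020

synapse_estimates = {
    "low estimate": 1e14,   # ~100 trillion synapses
    "high estimate": 1e15,  # ~1 quadrillion synapses
}

for label, synapses in synapse_estimates.items():
    ratio = synapses / largest_model_params
    print(f"{label}: brain is ~{ratio:,.0f}x larger by this count")
```

On these numbers the gap comes out to a few hundred to a few thousand fold, i.e. within an order of magnitude of the 10000x figure above, depending on which synapse estimate one takes.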

Hopefully I am just being paranoid (I am definitely more of a squib than a wizard in these domains), but I am seeing overhangs everywhere!

Comment by Bjartur Tómas on "Should Blackmail Be Legal" Hanson/Zvi Debate (Sun July 26th, 3pm PDT) · 2020-07-21T14:34:30.119Z · LW · GW

I have created this Google Calendar link if anyone wants to quickly set up a reminder:

Comment by Bjartur Tómas on Open & Welcome Thread - June 2020 · 2020-06-05T14:26:16.056Z · LW · GW
>What would be a good exit plan? If you've thought about this, can you share your plan and/or discuss (privately) my specific situation?

+1 for this. Would love to talk to other people seriously considering exit. Maybe we could start a Telegram or something.

Comment by Bjartur Tómas on human psycholinguists: a critical appraisal · 2020-01-01T17:33:32.687Z · LW · GW

>They already assigned >90% probability that GPT-2 models something like how speech production works.

Is that truly the case? I recall reading Corey Washington, a former linguist (who left the field for neuroscience in frustration with its culture and methods), claim that when he was a linguist the general attitude was that there was no way in hell something like GPT-2 would ever work even close to the degree that it does.

Found it:

Steve: Corey’s background is in philosophy of language and linguistics, and also neuroscience, and I have always felt that he’s a little bit more pessimistic than I am about AGI. So I’m curious — and answer honestly, Corey, no revisionist thinking — before the results of this GPT-2 paper were available to you, would you not have bet very strongly against the procedure that they went through working?

Corey: Yes, I would’ve said no way in hell actually, to be honest with you.

Steve: Yes. So it’s an event that caused you to update your priors.

Corey: Absolutely. Just to be honest, when I was coming up, I was at MIT in the mid ’80s in linguistics, and there was this general talk about how machine translation just would never happen and how it was just lunacy, and maybe if they listened to us at MIT and took a little linguistics class they might actually figure out how to get this thing to work, but as it is they’re going off and doing this stuff which is just destined to fail. It’s a complete falsification of that basic outlook, which I think, — looking back, of course — had very little evidence — it had a lot of hubris behind it, but very little evidence behind it.

I was just recently reading a paper in Dutch, and I just simply… First of all, the OCR recognized the Dutch language and it gave me a little text version of the page. I simply copied the page, pasted it into Google Translate, and got a translation that allowed me to basically read this article without much difficulty. That would’ve been thought to be impossible 20, 30 years ago — and it’s not even close to predicting the next word, or writing in the style that is typical of the corpus.