Posts

wunan's Shortform 2021-04-14T01:27:45.895Z
OpenAI charter 2018-04-09T21:02:04.621Z

Comments

Comment by wunan on wunan's Shortform · 2021-09-12T14:38:12.978Z · LW · GW

If compute is the main bottleneck to AI progress, then one goalpost to watch for is when AI is able to significantly increase the pace of chip design and manufacturing. After writing the above, I searched for work being done in this area and found this article. If these approaches can actually speed up certain steps in this process from weeks to just a few days, will that increase the pace of Moore's law? Or is Moore's law mainly bottlenecked by problems that will be particularly hard to apply AI to?

Comment by wunan on alenglander's Shortform · 2021-09-06T21:06:25.790Z · LW · GW

Do you have some examples? I've noticed that rationalists tend to ascribe good faith to outside criticisms too often, to the extent that obviously bad-faith criticisms are treated as invitations for discussion. For example, there was an article about SSC in the New Yorker that came out after Scott deleted SSC but before the NYT article. Many rationalists failed to recognize the New Yorker article as a hit piece, which I believe it clearly was (and even more clearly now that the NYT article has come out).

Comment by wunan on [deleted post] 2021-09-06T20:51:30.738Z

Yeah, my main takeaway from that question was that a change in the slope of the abilities graph was what would convince him of an imminent fast takeoff. Presumably the x-axis of the graph is either time (i.e. the date) or compute, but I'm not sure what he'd put on the y-axis, and there wasn't enough time to ask a follow-up question.

Comment by wunan on Why the technological singularity by AGI may never happen · 2021-09-03T19:22:30.581Z · LW · GW

Even without having a higher IQ than a peak human, an AGI that merely ran 1000x faster would be transformative.
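
For a rough sense of scale, here is a back-of-the-envelope calculation (my own illustration, not from the original comment): at a 1000x speedup, every calendar day buys nearly three subjective years of thinking time.

```python
# Back-of-the-envelope: subjective thinking time gained from a 1000x speedup.
SPEEDUP = 1000
hours_per_day = 24
hours_per_year = 365 * 24  # 8760

subjective_hours_per_day = hours_per_day * SPEEDUP            # 24,000 hours
subjective_years_per_day = subjective_hours_per_day / hours_per_year
print(f"{subjective_years_per_day:.1f} subjective years per calendar day")  # ~2.7
```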

Comment by wunan on How to turn money into AI safety? · 2021-08-25T21:37:16.314Z · LW · GW

Of the bottlenecks I listed above, I am going to mostly ignore talent. IMO, talented people aren't the bottleneck right now, and the other problems we have are more interesting.

 

Can you clarify what you mean by this? I see two main possibilities:

  • There are many talented people who want to work on AI alignment, but are doing something else instead.
  • There are many talented people working on AI alignment, but they're not very productive.

If you mean the first one, I think it would be worth it to survey people who are interested in AI alignment but are currently doing something else -- ask each of them, why aren't they working on AI alignment? Have they ever applied for a grant or job in the area? If not, why not? Is money a big concern, such that if it were more freely available they'd start working on AI alignment independently? Or is it that they'd want to join an existing org, but open positions are too scarce?

Comment by wunan on ozziegooen's Shortform · 2021-08-08T03:17:05.425Z · LW · GW

The movie Downsizing is about this.

Comment by wunan on What weird treatments should people who lost their taste from COVID try? · 2021-07-30T03:00:37.121Z · LW · GW

Psychedelics, maybe.

Comment by wunan on A low-probability strategy to elminate suffering via hostile takeover of a publically traded corporation · 2021-07-11T21:55:00.030Z · LW · GW

What's the advantage of taking over an existing corporation rather than creating a new organization?

Comment by wunan on Sam Altman and Ezra Klein on the AI Revolution · 2021-06-27T14:30:37.624Z · LW · GW

What are some examples of makers who gained wealth/influence/status by having a huge negative impact on the world?

Comment by wunan on Willa's Shortform · 2021-06-17T18:58:32.005Z · LW · GW

What I mean is that they haven't really considered it. As I'm sure you're aware, your mind does not work like most people's. When most people consider the question of whether they'd like to die someday, they're not really thinking about it as if it were a real option. Even if they give detailed, logically coherent explanations for why they'd like to die someday, they haven't considered it in near mode.

 

I am very confident of this -- once they have the option, they will not choose to die. Right now they see it as just an abstract philosophical conversation, so they'll just say whatever sounds nice to them. For a variety of reasons, "I'd like to die someday; being immortal doesn't appeal to me" sounds wise to a lot of people. But once they have the actual option to not die? Who's going to choose to die? Only people who are suicidally depressed or who are in very extreme cults will choose that.

Comment by wunan on Willa's Shortform · 2021-06-16T13:07:19.107Z · LW · GW

They'll almost definitely change their minds once we have good treatments for aging.

Comment by wunan on Habryka's Shortform Feed · 2021-06-08T14:56:18.426Z · LW · GW

Those graphs all show the percentage share of the different variants, but more important would be the actual growth rate. Is the delta variant growing, or is it just shrinking less quickly than the others?

Comment by wunan on Covid 6/3: No News is Good News · 2021-06-04T23:27:21.191Z · LW · GW

Why is that?

Comment by wunan on Sexual Dimorphism in Yudkowsky's Sequences, in Relation to My Gender Problems · 2021-05-03T18:02:34.361Z · LW · GW

Can you taboo the words "weird" and "wrong" in that comment?

Comment by wunan on ozziegooen's Shortform · 2021-05-02T15:04:40.836Z · LW · GW

Precommitment for removal and optionality for adding.

Comment by wunan on hereisonehand's Shortform · 2021-04-21T00:52:47.035Z · LW · GW

There's a discord for Crypto+Rationalists you may be interested in if you're not already aware: https://discord.gg/3ZCxUt8qYw

Comment by wunan on [Letter] Advice for High School #1 · 2021-04-20T04:41:22.908Z · LW · GW

To any high schoolers reading this: If I could send just one of the items from the above list back to myself in high school, it would be "lift weights." Starting Strength is a good intro.

Comment by wunan on wunan's Shortform · 2021-04-14T01:27:46.174Z · LW · GW

I have a potential category of questions that could fit on Metaculus and work as an "AGI fire alarm." The questions are of the format "After an AI system achieves task x, how many years will it take for world output to double?"

Comment by wunan on supposedlyfun's Shortform · 2021-03-24T02:17:13.688Z · LW · GW

Yes, minimizing response time is a well-studied area of human-computer interaction: https://www.nngroup.com/articles/response-times-3-important-limits/

Comment by wunan on MikkW's Shortform · 2021-03-24T02:11:52.488Z · LW · GW

I'm curious what cards people have paid to put in your deck so far. Can you share, if the buyers don't mind?

Comment by wunan on Voting-like mechanisms which address size of preferences? · 2021-03-18T23:41:31.585Z · LW · GW

Ralph Merkle's DAO Democracy addresses size of preferences because constituents only "vote" by reporting their own overall happiness level. Everything else is handled by conditional prediction markets (like in futarchy) to maximize the future happiness of the constituents. This means that if some issue is very important to a voter, it will have a greater impact on their reported happiness, which in turn has a greater impact on which proposals get passed.
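
A toy sketch of that decision rule (my own illustration of the general futarchy-style mechanism, not Merkle's actual specification; the names are hypothetical): adopt a proposal only if the conditional markets predict higher average reported happiness with it than without it.

```python
# Toy futarchy-style decision rule, sketched for illustration only.
# predicted_happiness(proposal, adopted) -> the market's predicted mean
# self-reported happiness, conditional on the proposal being adopted or not.

def decide(proposals, predicted_happiness):
    adopted = []
    for proposal in proposals:
        if predicted_happiness(proposal, True) > predicted_happiness(proposal, False):
            adopted.append(proposal)
    return adopted
```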

Comment by wunan on Parameter vs Synapse? · 2021-03-11T19:05:31.989Z · LW · GW

For reference: section 40 of Reframing Superintelligence: Comprehensive AI Services as General Intelligence.

Comment by wunan on Why Productivity Systems Don't Stick · 2021-01-16T17:57:01.028Z · LW · GW

Has this new congruency-based approach led to less, the same, or more productivity than what you were doing before, and how long have you been doing it?

Comment by wunan on Matt Goldenberg's Short Form Feed · 2021-01-06T17:44:45.359Z · LW · GW

Is losing weight one of your goals with this?

 

Like you said, since it hasn't been studied you're not going to find anything conclusive about it, but it may be a good idea to skip the fast once a month (i.e. 3 weeks where you do 88-hour fasts, then 1 week where you don't fast at all).

Comment by wunan on I object (in theory) · 2020-12-30T22:48:43.007Z · LW · GW

I object to the demonstration because it's based on the false assumption that there's a fixed amount of value (candy, money) to be distributed and that by participating in capitalism, you're playing a zero-sum game. Most games played in capitalism are positive-sum -- you can make more candy.

Comment by wunan on Tweet markets for impersonal truth tracking? · 2020-11-10T16:07:51.696Z · LW · GW

Do you have a source for the 80% figure?

Comment by wunan on Seek Upside Risk · 2020-09-29T21:33:09.741Z · LW · GW

I agree that this is a really important concept. Two related ideas are asymmetric risk and Barbell strategies, both of which are things that Nassim Nicholas Taleb writes about a lot.

Comment by wunan on Where is human level on text prediction? (GPTs task) · 2020-09-21T14:21:39.370Z · LW · GW

What is that formula based on? I can't find anything from googling. I thought it might be from the OpenAI paper Scaling Laws for Neural Language Models, but I can't find it with ctrl+f.

Comment by wunan on Where is human level on text prediction? (GPTs task) · 2020-09-20T13:39:23.202Z · LW · GW

In Steve Omohundro's presentation on GPT-3, he compares the perplexity of some different approaches. GPT-2 scores 35.8, GPT-3 scores 20.5, and humans score 12. Sources are linked on slide 12.
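
For context on what those numbers mean (my own conversion, assuming they are per-word perplexities): perplexity is the exponentiated average log loss, so converting to bits per word puts the GPT-3-to-human gap at roughly 0.8 bits per word.

```python
import math

# Perplexity is exp(average per-word cross-entropy); bits per word makes the
# GPT-2 -> GPT-3 -> human gaps easier to compare.
for name, ppl in [("GPT-2", 35.8), ("GPT-3", 20.5), ("human", 12.0)]:
    print(f"{name}: perplexity {ppl} ~= {math.log2(ppl):.2f} bits/word")
# GPT-2 ~5.16, GPT-3 ~4.36, human ~3.58 bits/word
```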

Comment by wunan on Escalation Outside the System · 2020-09-09T10:38:24.221Z · LW · GW

People are literally looting businesses and NPR is publishing interviews supporting it. They're not just interviewing people who support it -- the interviewer also supports it. What makes you think these aren't actual policy proposals?

They may only propose it for deep social-signalling reasons as you say, but that doesn't mean it's not actually a proposal. Historically, we've seen that people are willing to go through with mass murders.

Comment by wunan on Are we in an AI overhang? · 2020-07-28T13:54:00.329Z · LW · GW

In the Gwern quote, what does "Even the dates are more or less correct!" refer to? Which dates were predicted for what?

Comment by wunan on Are we in an AI overhang? · 2020-07-27T15:36:46.786Z · LW · GW

This was mentioned in the "Other Constraints" section of the original post:

Inference costs. The GPT-3 paper (§6.3), gives .4kWh/100 pages of output, which works out to 500 pages/dollar from eyeballing hardware cost as 5x electricity. Scaling up 1000x and you're at $2/page, which is cheap compared to humans but no longer quite as easy to experiment with

I'm skeptical of this being a binding constraint too. $2/page is still very cheap.
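
The quoted figures are easy to re-derive (a rough sketch; the ~$0.10/kWh electricity price is my assumption, while the 0.4 kWh per 100 pages and the 5x hardware-over-electricity multiplier come from the quote above):

```python
# Rough re-derivation of the quoted inference-cost figures.
kwh_per_page = 0.4 / 100                              # 0.004 kWh per page of output
electricity_cost_per_page = kwh_per_page * 0.10       # assuming ~$0.10/kWh -> $0.0004
total_cost_per_page = electricity_cost_per_page * 5   # hardware ~5x electricity -> $0.002
pages_per_dollar = 1 / total_cost_per_page            # ~500 pages per dollar

scaled_cost_per_page = total_cost_per_page * 1000     # 1000x scale-up -> ~$2 per page
print(pages_per_dollar, scaled_cost_per_page)         # 500.0 2.0
```
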
Comment by wunan on [Resource Request] What are good resources for best practice for creating bureaucracy? · 2020-07-09T16:26:05.214Z · LW · GW

The field of mechanism design seems very relevant here.

Comment by wunan on My experience with the "rationalist uncanny valley" · 2020-04-22T22:47:08.477Z · LW · GW

It might help if you try to think less in terms of making rationality and EA part of your identity and instead just look at them as some things you're interested in. You could pursue the things you're interested in and become a more capable person even if you never read anything else from the rationality community again. Maybe reading stuff from people who have achieved great things and had great ideas and who have not been influenced by the rationality community (which, by the way, describes most people who have achieved great things and had great ideas) would help? E.g. Paul Graham's essays are good (he's kind of LW-adjacent, but was writing essays long before the rationality community was a thing): http://paulgraham.com/articles.html

I think the rationality community is great, it has hugely influenced me, and I'm glad I found it, but I'm pretty sure I'd be doing great stuff even if I never found it.

Comment by wunan on Life can be better than you think · 2019-01-21T17:50:05.514Z · LW · GW

I remember reading SquirrelInHell's posts earlier and I'm really sorry to hear that. Is there any more public information regarding the circumstances of the suicide? Couldn't find anything with google.

Comment by wunan on What is a reasonable outside view for the fate of social movements? · 2019-01-05T00:50:51.593Z · LW · GW

The podcast Rationally Speaking recently had an episode on the Mohists, a "strikingly modern group of Chinese philosophers active in 479–221 BCE." They discuss what caused the movement to die out and draw comparisons between it and the Effective Altruism movement.

Comment by wunan on One night, without sleep · 2018-08-24T14:28:52.300Z · LW · GW

Have you heard about the EA Hotel? Or considered moving to a country with a very low cost of living?

Comment by wunan on One night, without sleep · 2018-08-19T18:52:02.121Z · LW · GW

In my case, I think illness is very much just a symptom of the struggle to get on with things in an interfering environment.

Do you mean you think you have something like Mindbody syndrome/TMS? I thought I had it for a while, but now suspect the root causes are actually physiological, not psychological, for me.

Just to clarify, am I interpreting your post correctly in reading it as saying that the reason you're not operating at your full potential is a chronic illness which causes migraines and other symptoms? If so, this may be something that you've already thought of, but it's worth putting a lot of effort into tracking down the root cause of the illness and fixing it (assuming you don't already know the root cause and that there is a potential fix), even if it means temporarily working more slowly on the AI alignment problem. That's what I'm doing, at least.

Comment by wunan on One night, without sleep · 2018-08-17T03:44:10.940Z · LW · GW

I want to attempt with all my strength, to do the specific things that I see to be done.
And this is why the illness of bodily defeat is so bitter;
one's struggle, conducted alone, but in the hope of one day dragging treasures into daylight,
is felled by the weakness of one's own physical vehicle.

I also suffer from a chronic illness that keeps me from pursuing my goals (which I think are the same as your goals) at anything close to the speed at which I feel I should be able to. I don't know if my condition is better or worse than yours, but one thing that helps me is to think about how there are others out there who are a lot like me, but without these limits, and they seem to be doing what I wish I could do. Maybe they'll succeed even if I'm not able to help. And if they succeed, then so do I.

You're not as alone as you think.

Comment by wunan on A friendly reminder of the mission · 2018-06-05T02:52:50.420Z · LW · GW

Related: Nick Bostrom's Letter from Utopia

Comment by wunan on Questions about the Usefulness of Self-Importance · 2018-05-27T17:12:57.251Z · LW · GW

You might like the content on 80,000 Hours, which is pretty popular around here.

Comment by wunan on Double Cruxing the AI Foom debate · 2018-04-27T17:35:54.353Z · LW · GW

Recursive Self-Improvement

Comment by wunan on Rationality and Spirituality - Summary and Open Thread · 2018-04-21T15:39:13.937Z · LW · GW

Book recommendation: Waking Up: A Guide to Spirituality Without Religion by Sam Harris. Discusses enlightenment, meditation, and psychedelics.

Comment by wunan on [deleted post] 2018-03-03T19:42:24.807Z

It sounds to me like most of the negative experiences you described were a result of the pills and are not associated with enlightenment:

I go off citrulline malate for 48 hours.  And it hits me.  Lethargy gone.  Cloudy headed thinking gone.  Ability to be productive returns.  I spend 10 hours at my desk in a row.  I write several thousand words.  I send off 10 emails and clear my inbox.  I power through my to-do list.  I stick to my diet for the first time in months.  I send emails, I round up outstanding notes, reorganise myself.  Reset my GTD system and power through for a day.

I've never heard of lethargy, cloudy headed thinking, and an inability to be productive as side effects of enlightenment.

The other symptoms you described later in the post, like calmness even in the face of stressors, don't seem negative to me as long as you don't abuse this ability in order to ignore problems. Also, I think the calmness associated with enlightenment might feel significantly different than what you experienced. A lot of people talk about the importance of "responding skillfully" to different situations, meaning feeling anger when you should feel anger, sadness when you should feel sadness, etc, and then being able to let go of those states once they're no longer helpful. This seems different than the vasodilator-induced state of calm you described.

Comment by wunan on [deleted post] 2018-03-03T13:25:04.821Z

Also generally a warning that you might not like enlightenment if you find it.

I don't necessarily disagree (I'm still looking into this topic), but what are you basing this on?

Comment by wunan on [deleted post] 2018-03-02T18:23:20.392Z

It's analogous to a music teacher instructing you to just sit down and play some notes, any notes, for twenty minutes. It would be amazing if you made progress that way.

This is exactly how it felt for me -- I even remember thinking of this exact metaphor after practicing TMI and reflecting on the difference between it and my previous attempts.

Comment by wunan on [deleted post] 2018-03-02T18:13:00.869Z

What were you doing wrong?

Comment by wunan on [deleted post] 2018-03-02T17:36:37.082Z

No problem, I don't think the question is rude. No, I didn't view it as hokey. I was actually very enthusiastic about it right from the start, but never made any progress. TMI was valuable to me because it provided much more granular instructions.

I stopped to reassess about 2 months ago and have not been meditating in that time.

Comment by wunan on [deleted post] 2018-03-02T13:17:54.992Z

I don't think this guide goes into enough detail. I had read instructions many times that were essentially the same as this and attempted them consistently every day for weeks or months, and I made very little progress in terms of improving my attention. It was barely any different from following the instruction "just sit and relax for 15 minutes."

In my experience, the problem isn't that meditation is often treated as something that's too complex when actually it's very simple; it's that it's treated as something very simple when actually it's pretty complex.

What did help was reading "The Mind Illuminated," which breaks things down into much more detail. Attempting meditation with the instructions in that book was a very different experience from my earlier attempts. There was a very noticeable improvement in my ability to intentionally maintain my attention within the first few sessions. In fact I made such rapid progress that I stopped after a week because I wanted to take some time to reassess whether this was really a path I wanted to go down (the book provides instructions all the way to "awakening" or "enlightenment"). I'm currently still assessing.

Comment by wunan on A LessWrong Crypto Autopsy · 2018-01-29T19:40:09.610Z · LW · GW

Do we have any data on how well other people who had similar levels of interest in tech did?