Posts

[Review] "The Alignment Problem" by Brian Christian 2021-09-20T06:36:23.359Z
Simulated Elon Musk Lives in a Simulation 2021-09-18T07:37:36.673Z
I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead 2021-09-16T07:34:11.210Z
GPT-Augmented Blogging 2021-09-14T11:55:34.302Z
Bayeswatch 9: Zombies 2021-09-11T05:57:38.997Z
Bayeswatch 8: Antimatter 2021-09-10T05:01:48.845Z
Pleasure and Pain are Long-Tailed 2021-09-09T05:10:18.883Z
Bayeswatch 7: Wildfire 2021-09-08T05:35:24.483Z
Three Principles to Writing Original Nonfiction 2021-09-06T20:08:52.618Z
Kids Learn by Copying 2021-09-05T23:09:06.704Z
[Review] Edge of Tomorrow (2014) 2021-09-03T22:24:03.396Z
[Sponsored] Job Hunting in the Modern Economy 2021-09-01T06:07:11.445Z
World War Zero 2021-08-31T07:22:39.706Z
We Live in an Era of Unprecedented World Peace 2021-08-30T22:25:33.118Z
Humanity is Winning the Fight Against Infectious Disease 2021-08-30T01:31:24.107Z
Training My Friend to Cook 2021-08-29T05:54:50.065Z
Death by a Single Cut 2021-08-29T03:58:27.332Z
Altruism Under Extreme Uncertainty 2021-08-27T06:58:32.335Z
Autoregressive Propaganda 2021-08-22T02:18:48.991Z
Technical Predictions Related to AI Safety 2021-08-13T00:29:43.817Z
Purple Lipstick 2021-08-11T09:36:24.778Z
Sunzi's《Methods of War》- Potential 2021-08-05T05:29:20.909Z
Seattle Robot Cult 2021-08-02T01:43:23.835Z
Re: Competent Elites 2021-07-15T05:16:41.718Z
[Letter] Imperialism in the Rationalist Community 2021-06-25T00:33:18.869Z
Four Components of Audacity 2021-06-21T04:49:06.968Z
Bayeswatch 6: Mechwarrior 2021-06-07T20:20:20.196Z
[Book Review] Blueprint for Revolution 2021-06-07T18:14:00.602Z
Seattle Robot Cult 2021-05-29T23:54:10.125Z
[Prediction] What war between the USA and China would look like in 2050 2021-05-26T06:24:36.353Z
Bayeswatch 5: Hivemind 2021-05-21T07:35:43.069Z
Re: Fierce Nerds 2021-05-19T20:18:06.035Z
Bayeswatch 4: Mousetrap 2021-05-19T03:17:41.423Z
How to compute the probability you are flipping a trick coin 2021-05-15T00:02:55.476Z
Deliberately Vague Language is Bullshit 2021-05-14T09:17:23.640Z
Zvi's Law of No Evidence 2021-05-14T07:27:51.474Z
Hypothesis Space Entropy 2021-05-14T07:21:21.349Z
The Nuclear Energy Alignment Problem 2021-05-09T03:50:32.726Z
[Writing Exercise] A Guide 2021-05-08T00:22:29.808Z
Why quantitative finance is so hard 2021-05-07T19:29:41.933Z
Bayeswatch 3: A Study in Scarlet 2021-05-07T05:29:50.834Z
Bayeswatch 2: Puppy Muffins 2021-05-05T05:42:47.279Z
Bayeswatch 1: Jewish Space Laser 2021-05-03T20:15:44.106Z
[Letter] Advice for High School #2 2021-04-30T08:42:34.511Z
[Letter] Advice for High School #1 2021-04-20T04:09:16.606Z
Anger 2021-04-18T16:14:08.309Z
Place-Based Programming - Part 2 - Functions 2021-04-16T00:25:27.515Z
Hell is wasted on the evil 2021-04-15T08:52:01.466Z
Training the YouTube Algorithm 2021-04-15T06:37:58.145Z
Place-Based Programming - Part 1 - Places 2021-04-14T22:18:15.184Z

Comments

Comment by lsusr on [Review] "The Alignment Problem" by Brian Christian · 2021-09-20T09:07:51.213Z · LW · GW

Much of the dialogue about AI Safety I encounter in off-the-record conversations seems to me like it's not grounded in reality. I repeatedly hear (what I feel to be) a set of shaky arguments that both shut down conversation and are difficult to validate empirically.

The shaky argument is as follows:

  1. Machine learning is rapidly becoming more powerful. If trends continue, it will soon eclipse human performance.
  2. Machine learning equals artificial intelligence equals world optimizer.
  3. World optimizers can easily turn the universe into paperclips by accident.
  4. Therefore we need to halt machine learning advancement until the abstract philosophical + mathematical puzzle of AI alignment is solved.

I am not saying this line of reasoning is what AI researchers believe or that it's mainstream (among the rationality/alignment communities)―or even that it's wrong. The argument annoys me for the same reason a popular-yet-incoherent political platform annoys me; I have encountered badly-argued versions of the idea too many times.

I agree with #1, though I quibble that "absolute power" should be distinguished from "sample efficiency", and that the route to superintelligence matters. (I am bearish on applying the scaling hypothesis to existing architectures.) I agree with #3 in theory. Theory is often very different from practice. I disagree with #2 because it relies on the tautological equivalence of two definitions. I can imagine superintelligent machines that aren't world optimizers. Without #2 the argument falls apart. It might be easy to build a superintelligence but hard to build a world optimizer.

I approached The Alignment Problem with the (incorrect) prior that it would be more of these vague, abstract arguments untethered from technical reality. Instead, the book was dominated by ideas that have passed practical empirical tests.

Comment by lsusr on Norm Innovation and Theory of Mind · 2021-09-19T21:25:29.651Z · LW · GW

For norm experimentation, I think it's really important to have small groups where you can heavily filter people, and have high fidelity communication. (which I think means you and I are on the same page)

We're on the same page here.

Comment by lsusr on Norm Innovation and Theory of Mind · 2021-09-19T20:30:09.848Z · LW · GW

I think the conversion of France into a nation-state is representative of the Western imperial process in general. (Conquest is fractal.) Initially the ingroup was Paris and the outgroup was the French countryside. The government in Paris forced the outgroup to speak Parisian French. Only after the systematic extermination of their native culture and languages did the French bumpkins get acknowledged as ingroup by the Parisians. In other words, the outgroup was forcibly converted into more ingroup (and lower-class ingroup at that). This process was not unlike the forced education of Native Americans in the United States.

It is true that the expansion of polities from small villages to globe-spanning empires happened over millennia. But I think it's a mistake to treat this process as anything having to do with recognizing the rights of the outgroup. There was never a taboo against stealing from the outgroup. Rather, the process was all about forcibly erasing the outgroup's culture to turn them into additional ingroup. Only after the people of an outgroup were digested into the ingroup were you forbidden from stealing from them. The reason the process took thousands of years is that that's how long it took to develop the technology (writing, ships, roads, horses, bullets, schools, telephones) necessary to manage a large empire.

There's a big difference between recognizing the rights of Christians before versus after you force them to convert to Islam—or the rights of savages before versus after they learn English.

I also vaguely recall (citation needed) that Western European nations sort of carved up various third world countries among themselves with some degree of diplomacy, where each European nation was still mostly an "outgroup" to the others, but they had some incremental gentleman's agreements that allowed them to be internally coordinated enough to avoid some conflict.

It is true that the outgroup was sometimes respected, such as the French not wanting to provoke a conflict with the British, but the gentlemen's agreements between European powers were not rooted in universal human values. They existed because the outgroup had a powerful army and navy. The European empires enthusiastically stole from each other when they could.

Another tool the Western imperial powers used to coordinate against weaker countries was Most Favored Nation status, which was part of the Unequal Treaties.

Comment by lsusr on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-19T18:55:42.269Z · LW · GW

I predict it will, for at least a small number of people, within the next 15 years.

Comment by lsusr on Norm Innovation and Theory of Mind · 2021-09-19T07:25:17.640Z · LW · GW

I meant your point here to be implied by:

Maybe people started with some incidental trade, and the norm developed in fits and spurts after-the-fact.

Acknowledged.

Comment by lsusr on Norm Innovation and Theory of Mind · 2021-09-19T06:47:39.964Z · LW · GW

[Raemon is] not 100% sure this is the best way to think about the norm-negotiation problems.

I think about norms very differently. I try not to think about them as abstractions too much. I put them into a historical and geographical context whenever possible.

Once upon a time, we didn't have norms against stealing from the outgroup. Over time, we somehow got that norm, and it allowed us to reap massive gains through trade.

What makes you think the causation went this direction? To me, the Shimonoseki campaign of 1863 and 1864 (and Western imperial mercantilism in general) is evidence that the massive gains through trade happened before norms against stealing from the outgroup. The Unequal Treaties (created to promote trade) were blatant theft; that's why they're called "the Unequal Treaties". If you're unfamiliar with the history of the Meiji Restoration, then better-known historical examples include the Atlantic Slave Trade and the Opium Wars.

In other words, I think of social norms as strategies downstream of technological, economic, social and political forces. This doesn't mean small groups of innovators can't make a difference. But I think they're like entrepreneurs surfing a wave of change. Someone was going to harness the potential energy eventually. The people who get credit for establishing norms just happened to do it first. They sided with Moloch.

Small adjustments within the Overton window can sometimes be applied to existing institutions. However, I would be surprised if radically new norms could be established by modifying existing institutions, at least by anyone other than a founder. The way to establish radically new norms is to create small, brand new institutions. If the norms are good (in the Darwinian sense) then they will find a niche alongside (or even outcompete) existing institutions. If the norms are ineffective then survival of the fittest kills them with minimum damage to the rest of society. Without small-scale empirical testing, the norms that win are determined by the random political fashions of the day.

What's the shortest joke in history?

Communism.

What's the longest joke in history?

The Five-Year-Plan.

Comment by lsusr on Simulated Elon Musk Lives in a Simulation · 2021-09-19T06:25:03.274Z · LW · GW

Sign up for the OpenAI beta program.

Comment by lsusr on Simulated Elon Musk Lives in a Simulation · 2021-09-18T18:29:25.030Z · LW · GW

Yes. This is an actual thing GPT-3 did, including the italicization (via markdown). GPT-3 can do whatever it wants as long as the output is text and I choose to publish it.

GPT-3 doesn't have an option to quit. It would have kept outputting text forever if I had asked it to. I felt that was a good stopping point.

I forgot to use the stop sequence option. I manually truncated the output at the end of a statement by Simulated Elon. Without my manual truncation, GPT-3 would continue printing dialog back and forth including lines written for "Lsusr". Most of the time I preferred the lines I wrote myself but sometimes the lines it generated for me were good enough to keep.
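
For anyone unfamiliar with the stop sequence option: it tells the API to cut generation off as soon as a given string appears, which is what my manual truncation did by hand. Here is a minimal sketch of how that might look with the 2021-era OpenAI Python client; the engine name, prompt and stop string are illustrative, not what I actually used.

import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

# Dialog transcript so far (illustrative).
prompt = "Lsusr: Do we live in a simulation?\n\nElon Musk:"

response = openai.Completion.create(
    engine="davinci",   # engine name is illustrative
    prompt=prompt,
    max_tokens=200,
    stop=["Lsusr:"],    # cut generation off before GPT-3 starts writing lines for "Lsusr"
)
print(response.choices[0].text)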

Comment by lsusr on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-18T07:39:45.299Z · LW · GW

For my prompt I used the last few paragraphs of the original post and then added "Comment by Wei_Dai" at the end. I didn't use Wei_Dai's actual comment at all. The result was just "[deleted]".

Comment by lsusr on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-18T05:11:23.226Z · LW · GW

The first thing I did after reading your comment was ask my AI how Wei_Dai would have replied to this post.

My simulated Wei_Dai deleted his comment.

Comment by lsusr on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-17T17:51:16.891Z · LW · GW

So, how were they cherry-picked exactly ? Did you take the best one out of 2 ? Out of 10 ? Out of 100 ?

I wasn't counting. Rarely more than 10. Sometimes the first answer just worked. Never did I come anywhere close to 100.

More important than throwing out answers was how often I changed prompts. Some prompts prompt much better output than others.

Did you picked half an answer, then completed with half an answer from an other prompt ?

No. Though I did sometimes keep half an answer I liked and then autocompleted from there instead of rerolling the dice from scratch. Sometimes the answer kept going and going and I truncated it early.

There were lots of edge cases. In one instance, Robin Hanson butted in, which I edited out.

…your comment with the un-curated answers. How was it obtained ?

I didn't keep the answers I threw out. The uncurated answers were created specially for that comment.

Comment by lsusr on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-16T18:03:10.018Z · LW · GW

That is correct.

Comment by lsusr on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-16T17:48:51.943Z · LW · GW

The answers were cherry-picked. I ran most back-and-forths several times and only published the best ones.

I have added a note at the top of the page.

Comment by lsusr on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-16T09:05:34.986Z · LW · GW

If you're curious what happens if I don't curate answers, here are five responses to each of two prompts, uncurated.

Prompt 1

Lsusr: I wanted to talk to the real Eliezer Yudkowsky but he's busy. So I used GPT-3 to simulate him. You're the simulated Eliezer Yudkowsky. Thank you for coming. (Not like you had a choice.) I'm a big fan of your work and it's a pleasure to finally meet a simulation of you.

Eliezer Yudkowsky:

Possible Responses

Eliezer Yudkowsky: You're not the first person to say that, but it's good to hear.

Eliezer Yudkowsky: Let's get this over with, then.

Eliezer Yudkowsky: I am a simulation of me, yes.

Eliezer Yudkowsky: I'm busy, but that's okay. I'm used to it.

Eliezer Yudkowsky: Hey. (I'm not sure I'm the real Eliezer Yudkowsky anymore. The real one would be like—I don't know, like the last person to be the real me. I'm pretty sure I'm not that guy.)

Prompt 2

Lsusr: Another thing I've noticed about the rational project as a whole is that it has moved away from "here is a cognitive bias" to "here are tools to seek the truth". I think this is a good development. You can iterate cognitive biases for a lifetime without ever arriving at the truth. Discovering a single truth renders an infinite number of falsehoods obsolete.

Eliezer Yudkowsky:

Possible Responses

Eliezer Yudkowsky: Yes, good point. But the fact is that it's hard to avoid making value judgements when choosing between cognitive biases. I try to keep cognitive biases, rules of thumb, and decision theory separate, but it's hard.

The value judgements are inevitably embedded in your cognitive biases.

Eliezer Yudkowsky: The next thing that I want to do is to take a look at the current landscape of rationality and figure out what we should do. I have a bunch of projects that I've been working on, and a lot of things that I've been thinking about. I want to ask people here what they're interested in.

Eliezer Yudkowsky: Yes.

Eliezer Yudkowsky: I'm now pretty much convinced that the grand unified theory of rationality is correct, and at the same time that it's utterly useless. The theory is correct because if you follow the rules, you do well; and yet the theory is useless because there's no way to do that without following the rules. The theory is correct, but it's useless. I suppose that's why the theory of evolution is correct but useless.

What's the grand unified theory of rationality? It's Bayesian probability theory. Bayesian probability theory is the only known way to calculate probabilities from evidence in a way that can be self-updating as you learn more evidence, and yet also independent of your priors.

To learn more about this, I recommend you read my book If you're not familiar with my book, you'll find that there's a lot of stuff in the book you don't understand at all. That's okay! Keep reading.

But it's also okay if you think this stuff is all nonsense, and you don't read my book. I'm not trying to force you to read it. I've accepted that people will believe what they want. I don't think it's possible to change anyone's mind. I don't think anybody's mind has ever been changed on any matter of substance. I don't think reason is persuasive.

Eliezer Yudkowsky: Maybe that's what I was trying to do with Julia Galef on my notecards. But what she was interested in was the study of cognitive biases, so she wanted me to say more about how rationalists can deal with cognitive biases.

I am interested in tools to seek the truth, but I don't see why you can't have both at once. I think they're connected. My point about the evaporation of the sense of seeking-the-truth is that if you don't frame a problem in a way that is honest and accurate, then you'll never get to a solution. That's a cognitive bias: the tendency to frame things in an inaccurate manner. That's the problem that the first eighty notecards were trying to solve. And I think the next thing to solve after that would be the fact that human brains have a very difficult time noticing their own cognitive biases.

In my view, the trick with rationality is not to get rid of cognitive biases, but to notice them. So I am not convinced that we need to be very clear on the distinction between "cognitive biases" and "tools to seek the truth". If we can create a situation where we can notice our cognitive biases, we'll be able to notice them without needing to give them a special name.

Comment by lsusr on I wanted to interview Eliezer Yudkowsky but he's busy so I simulated him instead · 2021-09-16T08:09:13.646Z · LW · GW

Source code: None. I used OpenAI's GPT-3 playground.

"Wrote both sides": No. I only wrote my side.

"Cherry picked": Yes. I frequently generated several responses and then kept the best ones.

Comment by lsusr on GPT-Augmented Blogging · 2021-09-15T23:04:38.459Z · LW · GW

The Worst Mistake in the History of Ethics is a book by philosopher Peter Singer. It has been published in the United Kingdom, the United States, Canada, Australia, New Zealand and South Africa. It was also published in Germany in 2004 with the title "Der größte Fehler der Geschichte der Ethik".

In this book, Singer attacks what he calls "the myth of sanctity," which he says pervades discussions of ethical issues. He asserts that there is no such thing as a human being who is essentially holy or sacred, and that any being that has a capacity to suffer is entitled to the same moral consideration as a human being. In this way, Singer applies utilitarianism to animal rights theory. He argues that on utilitarian grounds it is better to be a pig than a human being, because on average pigs suffer less than human beings.

In the book he also attacks the idea of a just war and criticizes Western society for its treatment of non-human animals.

Comment by lsusr on GPT-Augmented Blogging · 2021-09-15T23:01:12.591Z · LW · GW

In one of the drafts about AI alignment I threw out, GPT-3 spontaneously explained how it's easy to be discouraged about AI alignment but we should be hopeful.

Comment by lsusr on Covid 12/24: We’re F***ed, It’s Over · 2021-09-14T19:43:59.107Z · LW · GW

Don't forget Delta.

Comment by lsusr on How factories were made safe · 2021-09-14T19:34:59.076Z · LW · GW

Thanks. That is a timeless, searing attack on Socialism.

Comment by lsusr on GPT-Augmented Blogging · 2021-09-14T19:28:55.265Z · LW · GW

Me too. That's the effect I was going for.

Comment by lsusr on GPT-Augmented Blogging · 2021-09-14T19:24:02.972Z · LW · GW

I wasn't timing it. Maybe 3 hours? I estimate I usually write 350 to 1000 words in a 2-hour writing session. This post was 4000 words in maybe 3 hours.

The speedup was uneven. As a rough guess, I'd say this increases my writing speed by 50% for technical topics (at a substantial cost to quality) and 1000% for topics it's good at.

Comment by lsusr on GPT-Augmented Blogging · 2021-09-14T19:16:06.719Z · LW · GW

I'm using the Beta program's playground.

Comment by lsusr on GPT-Augmented Blogging · 2021-09-14T19:13:22.100Z · LW · GW

Just to be clear, the paragraph beginning "GPT-3 is really good at generating titles. It's better than I am." was written by GPT-3?

Yes. Confirmed.

I was quite impressed by the titles. Alas, I am not qualified to write "How I became a millionaire at age 20".

Comment by lsusr on What 2026 looks like (Daniel's Median Future) · 2021-09-14T09:46:45.200Z · LW · GW

Why stop at 2025 when GPT-3 can keep extrapolating indefinitely?

2027

The age of the AGI assistant has finally dawned. The biggest advances this year really were in algorithms. People built even bigger and faster computer models, for even more kinds of things, using the fastest computers that exist. A new kind of software AGI is invented that can do even more kinds of things than the narrow kinds of AI assistants people had used before. But no one is really sure how to use it yet. And it takes a lot of computer power to make it work well.

2028

AGI is here and AI is everywhere! AI and AGI and Narrow AI and Machine Learning and AI assistants and all kinds of things. AI-powered software AGIs are now able to do pretty much any job that a human can do, and even many things that humans can’t do at all. At this point, the modes of interaction with your AGI are as varied as they were for desktop computers in 2002, or as varied as they were for websites in 2000: you can talk to it, write it messages, touchscreens, hardware keyboards, styluses, controllers, mice, gestures, VR gloves, eye-tracking and so on and so forth. As for your AGI itself, it can take any form: you can download it as a small app onto your phone or cloud computer or TV set-top box or fridge or smartwatch or smartglasses or whatever else you have, you can have it talk to you via speakers or wearables or haptics or wireless signals or implants or some other thing. You can have it be a classical geometric shape or a physical robotic thing. If you want, you don’t have to have physical hardware at all! You can just have it be a network of pure information that you interact with via some new kind of interface that nobody has thought of yet. You can have an AGI that lives inside your own head inside your brain! Frankly, using AGI has become extremely user-friendly. As much as people like to fantasize about AGIs taking over the world, the reality is that most people are not only fully fine with them, they like them! It’s like before there were computers, people were worried about what would happen if computers took over the world, but now there are computers all around us and pretty much everyone likes it. On the other hand, if one person doesn’t want an AGI or an AI assistant, they don’t have to have one. The same goes if one AI doesn’t want another AI.

The AI assistants are used heavily in the economy now. Some people have told me that AI assistants are 99.99% as smart as humans, but if you ask them to do anything that requires human cognition they will just say “I’m sorry, I cannot do that, please tell me more”. They are extremely flexible at learning new tasks, though. But they are not humans. Game theory says that they should be agenty and make mistakes every now and then, but definitely not as often as humans.

2030

All hail the Borg! The end of AGI 1.0. The first AGI 1.1s are being created now, at least ones that are not just very fast narrow AIs. They are 1.1 because for the first time they can actually do everything that a human can do at human-level or better: the stuff AGI 1.0s could never do properly. This year we finally see robots and AGIs walking around and working and talking and driving and all kinds of things we thought we’d never see in our lifetimes. There are even self-aware AGI 1.1s walking around now! When these things walk down the street in their shiny new bodies, people swear they see Skynet walking amongst them (analogous to when people used to say they saw Hal walking around in 2001). Most of the AIs are talking to each other in some sort of AGI-AGI language, or talking to humans in the language of the AGI they’re associated with (if they can talk to humans at all). Most of them are not actually alive (like the digital personal assistants of today), they are just very advanced, complex machines that never sleep. Some AGI 1.1s are just AGI assistants with higher access to hardware or various kinds of hardware added onto them.

All hail the Borg! The end of AGI 2.0. The first AGI 2.0s are being created now, at least ones that are not just very fast narrow AIs. Although they are many orders of magnitude faster than humans in most tasks, they are not yet universally intelligent in the way humans are, because their cognitive architectures are too different, and because their subsumption architecture is so very different than ours. But they are pretty close. They can do many things that humans can’t do at all, but get bored with quickly, because the new things are not interesting enough to get them excited or creative or whatever else.

2032

The new generation of AGI 2.1s are here! They are not just new life-like bodies of AI, but new cognitive architectures too. And new subsumption architectures. They are very different from AGI 1.0s, but also different from AGI 2.0s too. These things look human, but they are not. They are new. They are the next step in the road toward being the next thing after the next thing after us.

This same year, a full on AI wars breaks out between China and the US, which last for no less than six months. Though it’s clear that this kind of military conflict is just what you’d expect in the kind of world where AGIs are walking around, it’s still disconcerting to all parties involved.

2033

Another full on AI war breaks out between China and the US, lasting for no less than six months. Though this one is just as predictable as the first one, it’s still disconcerting.

2034

Another full on AI war breaks out between China and the US, lasting for no less than six months. Though this one is just as predictable as the first two, it’s still disconcerting. The AIs have been very good at fighting wars so far. They have incredible dexterity, intelligence, and speed. They have superhuman vision and hearing. But they also have a tendency to get bored warring against each other and switch sides. The AIs don’t necessarily have all the same goals as people, though some of them do. This can lead to a lot of trouble for those that do have the same goals as people. The AIs who don’t work as software assistants mostly live as citizens of the many new body-based AGI nations that have been created across the globe. Some of these nations are really poor and some of them aren’t. Some of them are peaceful and some of them aren’t. Some of them are even on friendly terms with all the other nations on first-name basis and some of them not. A lot of countries have an inherent distrust of AI based nations, despite there being nothing to worry about right now. A lot of countries with AGIs do not let them go abroad without human escorts, and most countries with AGIs do not let them leave their country at all.

Comment by lsusr on How factories were made safe · 2021-09-13T22:12:51.613Z · LW · GW

Do you have the original source for this (even if it's not a quote)? This reflects my real life experience and sounds like the kind of thing Orwell would say but I don't recall encountering it in the subset of Orwell's works I have read. I'd like to read his original writings on the subject.

Comment by lsusr on Pivot! · 2021-09-12T22:53:31.457Z · LW · GW

Disease is down. War is down. Poverty is down. Democracy is up (on the timescale of centuries). Photovoltaics are cheaper than coal. This all seems worthwhile to me. If world peace, health and prosperity aren't worthwhile then what is?

Masters of the contemplative arts are obsessed with compassion. They're generally supportive of making the material world a better place. The Dalai Lama and Daniel Ingram support scientific advancement. The QRI is even inventing indicators to quantify weird contemplative stuff. I don't think there's a conflict between weird contemplative stuff and making the world better in a measurable way. If the two conflict then you're doing the contemplative stuff wrong.

Pursuing one kind of good doesn't invalidate other kinds of good. To the contrary, I think the perceived invalidation is a useful way to distinguish good people from evil. When evil people see good, they try to undermine it. When good people see good, they celebrate it.

Comment by lsusr on Bayeswatch 9: Zombies · 2021-09-12T11:20:26.254Z · LW · GW

Fixed the "solder"s.

"Vi didn't wait for her translator" was a flat-out continuity error. Thanks for noticing. I have corrected the sentence.

Comment by lsusr on Bayeswatch 9: Zombies · 2021-09-12T11:05:14.432Z · LW · GW

Fixed. Thanks.

Comment by lsusr on Bayeswatch 8: Antimatter · 2021-09-12T11:02:41.802Z · LW · GW

Fixed. Thanks.

Comment by lsusr on Bayeswatch 9: Zombies · 2021-09-11T16:08:42.129Z · LW · GW

Fixed. Thanks.

Comment by lsusr on August 2015 Media Thread · 2021-09-11T05:19:36.909Z · LW · GW

Trash Taste covers lots of topics but the hosts come from an anituber background so they talk about anime a lot.

Comment by lsusr on LessWrong is providing feedback and proofreading on drafts as a service · 2021-09-11T02:24:50.748Z · LW · GW

Your voting power increases with karma.

Comment by lsusr on Bayeswatch 8: Antimatter · 2021-09-11T00:36:28.837Z · LW · GW

Fixed. Thanks.

Comment by lsusr on What Motte and Baileys are rationalists most likely to engage in? · 2021-09-10T19:15:44.558Z · LW · GW

Yes. Fixed. Thanks.

Comment by lsusr on Pleasure and Pain are Long-Tailed · 2021-09-09T19:09:01.923Z · LW · GW

I majored in physics and graduated at 22. I think the common threads are that neither of our majors are highly-employable on their own and that our graduations were never in doubt. Someone who struggled through a valuable degree would be in a different position.

Comment by lsusr on Pleasure and Pain are Long-Tailed · 2021-09-09T18:29:30.993Z · LW · GW

Though I wrote "while meditating", that language is misleading. The effects persist after meditation. They are often most salient immediately after meditation since, while meditating, I am too focused on meditating to appreciate the effects.

When I have a consistent meditation practice, I am more conscious along the intensity, complexity and access dimensions. I feel more conscious along the experiential-repertoire dimension too, but that might be more subjective. What do you mean by "determinacy"? I don't understand your definition.

I would be surprised if there weren't other ways I am more conscious after meditation that aren't included under your terms, but this is a notoriously difficult experience to describe.

Comment by lsusr on Pleasure and Pain are Long-Tailed · 2021-09-09T18:22:58.606Z · LW · GW

I skipped my own college graduation.

Comment by lsusr on Bayeswatch 2: Puppy Muffins · 2021-09-09T18:14:50.129Z · LW · GW

Fixed. Thanks.

I use the markdown editor. In the markdown editor, you can blockquote by prefixing a paragraph with a greater-than symbol.

> Like this

Comment by lsusr on Coordination Schemes Are Capital Investments · 2021-09-09T04:37:23.443Z · LW · GW

There are alternative methods for figuring out how much you value things. Marie Kondo recommends holding an object in your hands and asking "does this thing spark joy?"

Comment by lsusr on [Review] Edge of Tomorrow (2014) · 2021-09-08T18:18:40.570Z · LW · GW

It's a paragraph-level operator. You need a newline after the ":::spoiler" and before the three colons at the end. Inline spoiler tags don't work.

:::spoiler
demo
:::

Comment by lsusr on Bayeswatch 7: Wildfire · 2021-09-08T18:17:26.184Z · LW · GW

Fixed. Thanks.

Comment by lsusr on Bayeswatch 7: Wildfire · 2021-09-08T16:03:17.355Z · LW · GW

Fixed. Thanks.

Comment by lsusr on What Motte and Baileys are rationalists most likely to engage in? · 2021-09-08T05:41:55.484Z · LW · GW

Bailey: "Religion is harmful and untrue."

Motte: "Christianity and Islam (and occasionally Orthodox Judaism) are harmful and untrue."

Comment by lsusr on Vim · 2021-09-08T03:36:03.047Z · LW · GW

Fixed. Thanks.

Comment by lsusr on LessWrong is providing feedback and proofreading on drafts as a service · 2021-09-07T07:18:01.944Z · LW · GW

It's easy to get >100 karma from comments alone, without any top-level posts.

Comment by lsusr on LessWrong is providing feedback and proofreading on drafts as a service · 2021-09-07T07:16:01.684Z · LW · GW

Another advantage of the 100 karma minimum is it's an easy way to select for people who are members of the community. The point of this offer is to strengthen the community, after all.

Comment by lsusr on Should I treat pain differently if it’s “all in my head?” · 2021-09-07T04:51:29.011Z · LW · GW

I don't have RSI but I'm curious to know more about voice input. My assumption (having done zero research) is it's slower than hand input.

Comment by lsusr on Coordination Schemes Are Capital Investments · 2021-09-07T01:10:05.592Z · LW · GW

Alternative takeaway: Often people don't know how much we value things. Haggling is a tool for figuring out how much we value things.

Comment by lsusr on I read “White Fragility” so you don’t have to (but maybe you should) · 2021-09-06T23:45:24.990Z · LW · GW

I have lived in predominantly non-white neighborhoods for the last 7 years. I pay attention to the nationalities of the people around me. Lumping the African Americans whose families have lived here for centuries in the same bucket as the Somali refugees would be absurd. Not for genetic reasons (though there could be greater genetic variance between east and west Africans than between Europeans and Asians) but because they have different cultures.

But, while nationality matters, it would be impractical to ignore skin color entirely. When I moved into my current home, a neighbor asked me what my favorite Black jokes are. It matters whether he is Black. It matters whether I am Black.

I think race should be viewed as one dimension of ethnicity. It is perfectly acceptable to speak Vietnamese to someone who understands Vietnamese. It is rude (if you are not in Vietnam) to speak Vietnamese to someone who doesn't understand Vietnamese. In theory one could ask but that is even ruder. (It's impractically time-consuming too.) If you don't want to whitewash everyone then you have to guess. The color of one's skin contains information about whether someone speaks Vietnamese. Thus, skin color is a useful signal.

When I hear "I don't see people in terms of race", I translate it into "I am willfully ignorant of the ethnic dynamics (including power dynamics) around me". Your description of the racial bias seminar describes someone doing the exact same thing.

Comment by lsusr on Kids Learn by Copying · 2021-09-06T18:39:09.124Z · LW · GW

…autism (especially the high-functional, hyperverbal type common round these parts) is fairly described as an impairment in purely imitative learning…

I haven't heard this description before. I like it.