Posts

Can we teach Rationality at the University of Reddit? 2012-08-20T21:59:08.554Z · score: 23 (24 votes)
'Thinking, Fast and Slow' Chapter Summaries / Notes [link] 2012-04-15T09:14:14.280Z · score: 17 (18 votes)
TED-Ed Launch 2012-03-12T21:20:31.364Z · score: 8 (9 votes)
Weak supporting evidence can undermine belief 2011-09-29T10:11:49.168Z · score: 13 (13 votes)
Some ideas on communicating risks to the general public 2010-12-05T10:44:19.060Z · score: 2 (3 votes)

Comments

Comment by lightwave on Beta - First Impressions · 2017-09-21T10:02:18.929Z · score: 19 (12 votes) · LW · GW

The site is lacking breadcrumbs so it's hard to orient oneself. It's hard to follow what section of the website you're in as you dig deeper into the content. Any plans to add breadcrumbs (or some alternative)?

Comment by lightwave on Marginal Revolution Thoughts on Black Lives Matter Movement · 2017-01-19T09:52:53.282Z · score: 1 (1 votes) · LW · GW

A black is also more likely to commit a violent crime than a white person.

Isn't it more relevant whether a black person is more likely to commit a violent crime against a police officer (during a search, etc.)? After all, the argument is that the police are responding to some perceived threat. Typical violent crime, which is mostly black-on-black, isn't the most relevant statistic here. Where are the statistics about how blacks respond to the police?

Comment by lightwave on Why GiveWell can't recommend MIRI or anything like it · 2016-11-29T22:41:25.882Z · score: 11 (11 votes) · LW · GW

Funny you should mention that...

AI risk is one of the two main focus areas for The Open Philanthropy Project this year, which grew out of GiveWell. You can read Holden Karnofsky's Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity.

They consider AI risk to rank high enough on importance, neglectedness, and tractability (their three main criteria for choosing what to focus on) to be worth prioritizing.

Comment by lightwave on Crony Beliefs · 2016-11-20T11:21:58.509Z · score: 0 (0 votes) · LW · GW

things that I am very confident are false

Could you give an example?

Comment by lightwave on Barack Obama's opinions on near-future AI [Fixed] · 2016-10-12T16:48:07.506Z · score: 7 (7 votes) · LW · GW

This is also interesting: Barack Obama on Artificial Intelligence, Autonomous Cars, and the Future of Humanity

Comment by lightwave on Barack Obama's opinions on near-future AI · 2016-10-12T15:32:19.554Z · score: 1 (1 votes) · LW · GW

This is the article: Barack Obama on Artificial Intelligence, Autonomous Cars, and the Future of Humanity

Comment by lightwave on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-28T10:05:12.903Z · score: 2 (2 votes) · LW · GW

He's mentioned it on his podcast. It won't be out for another 1.5-2 years I think.

Also, Sam Harris recently did a TED talk on AI; it's now up.

Comment by lightwave on Open thread, Sep. 26 - Oct. 02, 2016 · 2016-09-27T09:03:08.436Z · score: 3 (3 votes) · LW · GW

He's writing an AI book together with Eliezer, so I assume he's on board with it.

Comment by lightwave on Turning the Technical Crank · 2016-04-06T07:48:51.055Z · score: 2 (2 votes) · LW · GW

Can't we just add a new 'link' post type to the current LW? Links and local posts would both have comment threads (here on LW); the only difference is that the title of a linked post would link to an outside website/resource.

Comment by lightwave on Open Thread April 4 - April 10, 2016 · 2016-04-05T12:34:06.282Z · score: 3 (3 votes) · LW · GW

Should we try to promote the most valuable/important (maybe older?) Less Wrong content on the front page? Currently the front page features a bunch of links and featured articles that don't seem to be organized in any systematic way. Maybe Less Wrong would be more attractive/useful to new people if they could access the best the site has to offer directly from the front page (or at least more of it, and in a systematic way)?

Comment by lightwave on Lesswrong Potential Changes · 2016-03-20T09:17:22.140Z · score: 0 (0 votes) · LW · GW

Target: a good post every day for a year.

Why specifically 1/day? It seems a bit too much. Why not e.g. ~3/week?

Comment by lightwave on Consciousness and Sleep · 2016-01-12T11:48:41.271Z · score: 0 (0 votes) · LW · GW

Your sensory system is still running

There are brain subsystems that are still running, but they are not necessarily ones "you" identify with. If you replaced the parts/networks of the brain that control your heart and lungs (through some molecular nanotechnology), would "you" still be you? My intuition says yes. The fact that "something is running" doesn't mean that something is you.

I know the computer metaphor doesn't work well for the brain, but imagine the system in the brain that wakes you up when you hear a sound is sort of like a sleeping computer that gets woken up by a signal over the network.

Also as others have mentioned, I'm pretty sure during anesthesia/coma there can be periods where you are completely lacking any experience.

Comment by lightwave on How to escape from your sandbox and from your hardware host · 2015-08-04T05:56:31.006Z · score: 0 (0 votes) · LW · GW

Would this help?

Comment by lightwave on (Rational) website design and cognitive aesthetics generally- why no uptake? · 2015-07-23T07:48:24.468Z · score: 4 (4 votes) · LW · GW

empirical literature on what makes websites effective (which we've done a lot of now)

Can you share some of your sources?

Comment by lightwave on In Defense of the Fundamental Attribution Error · 2015-06-04T08:03:17.860Z · score: 4 (6 votes) · LW · GW

It only takes a small extension of the logic to show that the Just World Hypothesis is a useful heuristic.

I don't see it, how is it useful?

Comment by lightwave on Meetup : London - Index Funds and Other Fun Stuff · 2014-10-24T07:38:49.130Z · score: 0 (0 votes) · LW · GW

Hey, is there a write-up of the UK-specific stuff for people who weren't able to attend?

Comment by lightwave on Confused as to usefulness of 'consciousness' as a concept · 2014-07-17T08:13:54.189Z · score: 2 (2 votes) · LW · GW

Sleep might be a Lovecraftian horror.

Going even further, some philosophers suggest that consciousness isn't even continuous, e.g. as you refocus your attention, as you blink, there are gaps that we don't notice. Just like how there are gaps in your vision when you move your eyes from one place to another, but to you it appears as a continuous experience.

Comment by lightwave on This is why we can't have social science · 2014-07-14T09:20:39.017Z · score: 3 (3 votes) · LW · GW

The error rate in replication experiments in the natural sciences is expected to be much, much lower than in the social sciences. Humans and human environments are noisy and complicated. Look at nutrition/medicine - it's taking us decades to figure out whether some substance/food is good or bad for you and under what circumstances. Why would you expect it to be easier to analyze human psychology and behavior?

Comment by lightwave on Open thread for December 24-31, 2013 · 2013-12-24T11:52:12.696Z · score: 10 (10 votes) · LW · GW

The trailer for the movie Transcendence is out.

Comment by lightwave on What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality? · 2013-09-30T07:28:01.619Z · score: 0 (0 votes) · LW · GW

to value breadth of perspective and flexibility of viewpoint significantly more than internal consistency

As humans we can't change/modify ourselves too much anyway, but what if we're able to in the future? What if you can pick and choose your values? It seems to me that, for such an entity, not valuing consistency is like not valuing logic. And then there's the argument that it leaves you open to Dutch booking / blackmail.

Comment by lightwave on What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality? · 2013-09-27T23:06:12.499Z · score: 0 (0 votes) · LW · GW

Well, whether it's a "real" change may be beside the point if you put it this way. Our situation and our knowledge are also changing, and maybe our behavior should also change. If personal identity and/or consciousness are not fundamental, how should we value those in a world where any mind-configurations can be created and copied at will?

Comment by lightwave on What makes us think _any_ of our terminal values aren't based on a misunderstanding of reality? · 2013-09-27T07:56:07.435Z · score: 0 (0 votes) · LW · GW

we value what we value, we don't value what we don't value, what more is there to say?

I'm confused about what you mean by this. If there weren't anything more to say, then nobody would/should ever change what they value? But people's values change over time, and that's a good thing. For example, in medieval/ancient times people didn't value animals' lives and well-being (as much) as we do today. If a medieval person tells you "well, we value what we value, I don't value animals, what more is there to say?", would you agree with him and let him go on burning cats for entertainment, or would you try to convince him that he should actually care about animals' well-being?

You are of course using some of your values to instruct other values. But they need to be at least consistent, and it's not really clear which are the "more terminal" ones. It seems to me byrnema is saying that privileging your own consciousness/identity above others is just not warranted, and if we could, we really should self-modify to not care more about one particular instance, but rather about how much well-being/eudaimonia (for example) there is in the world in general. It seems like this change would make your value system more consistent and less arbitrary, and I'm sympathetic to this view.

Comment by lightwave on More "Stupid" Questions · 2013-08-01T07:44:45.244Z · score: 0 (0 votes) · LW · GW

By the same logic, eating your favorite food because it tastes good is also wireheading.

Comment by lightwave on Instrumental rationality/self help resources · 2013-07-18T11:18:49.901Z · score: 0 (0 votes) · LW · GW

Better, instrumentally, to learn to handle the truth.

It really depends on your goals/goal system. I think the wiki definition is supposed to encompass possible non-human minds that may have some uncommon goals/drives, like a wireheaded clippy that produces virtual paperclips and doesn't care whether they are in the real or virtual world, so it doesn't want/need to distinguish between them.

Comment by lightwave on "Stupid" questions thread · 2013-07-15T09:54:06.464Z · score: 0 (0 votes) · LW · GW

You can use the "can be good at everything" definition to suggest quantification as well. For example, you could take these same agents and make them produce other things, not just paperclips, like microchips, or spaceships, or whatever, and then the agents that are better at making those are the more intelligent ones. So it's just using more technical terms to mean the same thing.

Comment by lightwave on Open Thread, June 16-30, 2013 · 2013-06-16T09:39:51.098Z · score: 0 (0 votes) · LW · GW

I looked through some of them; there's a lot of theory and discussion, but I'm mainly interested in a basic step-by-step guide on what to do.

Comment by lightwave on Open Thread, June 16-30, 2013 · 2013-06-16T08:59:53.039Z · score: 7 (7 votes) · LW · GW

So I'm interested in taking up meditation, but I don't know how/where to start. Is there a practical guide for beginners somewhere that you would recommend?

Comment by lightwave on Three more ways identity can be a curse · 2013-04-29T07:21:21.223Z · score: 1 (3 votes) · LW · GW

"Regression to the mean" as used above is basically using a technical term to call someone stupid.

Well I definitely wasn't implying that. I actually wanted to discuss the statistics.

Comment by lightwave on Three more ways identity can be a curse · 2013-04-29T07:19:02.247Z · score: 3 (3 votes) · LW · GW

Why? I couldn't think of a way to make this comment without it sounding somewhat negative towards the OP, so I added this as a disclaimer, meaning that I want to discuss the statistics, not to insult the poster.

Comment by lightwave on Three more ways identity can be a curse · 2013-04-28T08:38:50.842Z · score: -8 (20 votes) · LW · GW

I look forward to reading your future posts.

I hate to sound negative, but I wouldn't count on it.

Comment by lightwave on A thought-process testing opportunity · 2013-04-23T07:28:24.298Z · score: 6 (6 votes) · LW · GW

They probably would have flown off had he twisted it faster.

Comment by lightwave on An attempt to dissolve subjective expectation and personal identity · 2013-02-25T15:45:29.062Z · score: 0 (0 votes) · LW · GW

Or maybe you do, but it's not meaningfully different from deciding to care about some other person (or group of people) to the exclusion of yourself if you believe in personal identity

I think the point is actually similar to this discussion, which also somewhat confuses me.

Comment by lightwave on Discussion: Which futures are good enough? · 2013-02-24T10:59:25.692Z · score: 0 (0 votes) · LW · GW

figure out how to make everyone sitting around on a higher level credibly precommit to not messing with the power plug

That's MFAI's job. Living on the "highest level" also has the same problem, you have to protect your region of the universe from anything that could "de-optimize" it, and FAI will (attempt to) make sure this doesn't happen.

Comment by lightwave on Discussion: Which futures are good enough? · 2013-02-24T10:53:47.513Z · score: 4 (4 votes) · LW · GW

I, on the other hand, (suspect) I don't mind being simulated and living in a virtual environment. So can I get my MFAI before attempts to build true FAI kill the rest of you?

Comment by lightwave on An attempt to dissolve subjective expectation and personal identity · 2013-02-24T10:20:01.100Z · score: 0 (0 votes) · LW · GW

Not really. You can focus your utility function on one particular optimization process and its potential future execution, which may be appropriate given that the utility function defines the preference over outcomes of that optimization process.

Well you could focus your utility function on anything you like anyway, the question is why, under utilitarianism, would it be justified to value this particular optimization process? If personal identity was fundamental, then you'd have no choice, conscious existence would be tied to some particular identity. But if it's not fundamental, then why prefer this particular grouping of conscious-experience-moments, rather than any other? If I have the choice, I might as well choose some other set of these moments, because as you said, "why not"?

Comment by lightwave on Memetic Tribalism · 2013-02-15T14:47:48.016Z · score: 12 (12 votes) · LW · GW

Well, shit. Now I feel bad, I liked your recent posts.

Comment by lightwave on Memetic Tribalism · 2013-02-15T09:02:03.378Z · score: 7 (7 votes) · LW · GW

I'm now quite skeptical that my urge to correct reflects an actual opportunity to win by improving someone's thinking,

Shouldn't you be applying this logic to your own motivations to be a rationalist as well? "Oh, so you've found this blog on the internet and now you know the real truth? Now you can think better than other people?" You can see how it can look from the outside. What would the implication for yourself be?

Comment by lightwave on A Little Puzzle about Termination · 2013-02-04T09:44:55.910Z · score: 2 (2 votes) · LW · GW

On the other hand, given that humans (especially on LW) do analyze things on several meta levels, it seems possible to program an AI to do the same, and in fact many discussions of AI assume this (e.g. discussing whether the AI will suspect it's trapped in some simulation). It's an interesting question how intelligent an AI can get without having the need (or ability) to go meta.

Comment by lightwave on Ideal Advisor Theories and Personal CEV · 2012-12-26T11:49:12.859Z · score: 3 (3 votes) · LW · GW

Given that a parliament of humans (where they vote on values) is not accepted as a (final) solution to the interpersonal value / well-being comparison problem, why would a parliament be acceptable for intrapersonal comparisons?

Comment by lightwave on Open Thread, November 16–30, 2012 · 2012-11-20T09:18:39.871Z · score: 2 (2 votes) · LW · GW

It seems like people sort of turn into utility monsters - if people around you have a strong opinion on a certain topic, you better have a strong opinion too, or else it won't carry as much "force".

Comment by lightwave on Rationality versus Short Term Selves · 2012-10-28T08:32:57.754Z · score: 0 (0 votes) · LW · GW

What about "decided on"?

Comment by lightwave on Rationality versus Short Term Selves · 2012-10-25T10:44:24.401Z · score: 1 (1 votes) · LW · GW

With regard to the singularity, and given that we haven't solved 'morality' yet, one might just value "human well-being" or "human flourishing" without referring to a long-term self concept. I.e. you just might care about a future 'you', even if that person is actually a different person. As a side effect you might also equally care about everyone else in the future too.

Comment by lightwave on Open Thread, October 16-31, 2012 · 2012-10-17T14:27:23.594Z · score: 2 (2 votes) · LW · GW

Right, but I want to use a closer-to-real-life situation or example that reduces to the Wason selection task (one people fail at) and use that as the demonstration, so that people can see themselves fail in a real-life situation rather than in a logical puzzle. People already realize they might not be very good at generalized logic/math; I'm trying to demonstrate that the general logic applies to real life as well.

Comment by lightwave on Open Thread, October 16-31, 2012 · 2012-10-17T13:48:23.152Z · score: 2 (2 votes) · LW · GW

Well the thing is that people actually get this right in real life (e.g. with the rule 'to drink you must be over 18'). I need something that occurs in real life and people fail at it.

Comment by lightwave on Open Thread, October 16-31, 2012 · 2012-10-17T12:51:37.129Z · score: 1 (1 votes) · LW · GW

I'm planning on doing a presentation on cognitive biases and/or behavioral economics (Kahneman et al.) in front of a group of university students (20-30 people). I want to start with a short experiment / demonstration (or two) that will show the students that they are, in fact, subject to some bias or failure in decision making. I'm looking for suggestions on what experiment I can perform within 30 minutes (it can be longer if it's an interesting and engaging task, e.g. a game); the important thing is that whatever is demonstrated has to be relevant to most people's everyday lives. Any ideas?

I also want to mention that I can get assistants for the experiment if needed.

Edit: Has anyone at CFAR or at rationality minicamps done something similar? Who can I contact to inquire about this?

Comment by lightwave on Is xkcd "Think Logically" talking about this site? · 2012-09-24T19:51:47.839Z · score: 11 (11 votes) · LW · GW

I don't think this deserves its own top level discussion post and I suspect most of the downvotes are for this reason. Maybe use the open thread next time?

Comment by lightwave on New study on choice blindness in moral positions · 2012-09-21T19:34:56.913Z · score: 0 (0 votes) · LW · GW

Some of them were general moral principles, but some of them were specific statements.

Trolley problems are also very specific, but people have great trouble with them. Maybe I should have said "non-familiar" rather than just "general".

Comment by lightwave on New study on choice blindness in moral positions · 2012-09-20T21:41:05.580Z · score: 4 (4 votes) · LW · GW

One interpretation is that many people don't have strongly held or stable opinions on some moral questions and/or don't care. Doesn't sound very shocking to me.

Maybe morality is extremely context sensitive in many cases, thus polls on general moral questions are not all that useful.

Comment by lightwave on New study on choice blindness in moral positions · 2012-09-20T20:57:13.118Z · score: 16 (16 votes) · LW · GW

When reading old LW posts and comments and seeing I've upvoted some comment, I find myself thinking "Wait, why have I upvoted this comment?"

Comment by lightwave on New study on choice blindness in moral positions · 2012-09-20T20:48:21.106Z · score: 5 (5 votes) · LW · GW

Kid version of choice blindness. :D