Posts

Are most personality disorders really trust disorders? 2024-02-06T12:37:56.070Z
Inside View, Outside View... And Opposing View 2023-12-20T12:35:48.509Z
You should just smile at strangers a lot 2023-09-25T20:12:56.907Z
Why "AI alignment" would better be renamed into "Artificial Intention research" 2023-06-15T10:32:26.094Z
Let's build a fire alarm for AGI 2023-05-15T09:16:29.369Z
10 great reasons why Lex Fridman should invite Eliezer and Robin to re-do the FOOM debate on his podcast 2023-05-10T08:27:19.409Z
The biological function of love for non-kin is to gain the trust of people we cannot deceive 2022-11-07T20:26:29.876Z
I learn better when I frame learning as Vengeance for losses incurred through ignorance, and you might too 2022-10-15T12:41:52.504Z
AI safety: the ultimate trolley problem 2022-04-09T12:05:33.287Z
What cognitive biases feel like from the inside 2020-01-03T14:24:22.265Z
Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. 2019-11-04T14:04:10.851Z
Functional silence: communication that minimizes change of receiver's beliefs 2019-02-12T21:32:27.015Z
In favor of tabooing the word “values” and using only “priorities” instead 2018-10-25T22:28:34.920Z
An optimistic explanation of the outrage epidemic 2018-07-15T14:35:26.357Z
The Copenhagen Letter 2017-09-18T18:45:38.469Z
New business opportunities due to self-driving cars 2017-09-06T20:07:47.183Z
Prediction should be a sport 2017-08-10T07:55:44.313Z
Meetup : First LessWrong meetup in Leipzig! 2017-05-17T09:03:01.392Z
Elon Musk launches Neuralink, a venture to merge the human brain with AI 2017-03-28T10:49:44.376Z
Could utility functions be for narrow AI only, and downright antithetical to AGI? 2017-03-16T18:24:22.657Z
In-depth description of a quite strict, quite successful government program against teen substance abuse, spreading from Iceland 2017-01-19T12:04:48.693Z
A different argument against Universal Basic Income 2016-12-28T22:35:31.696Z
What degree of cousins are you and I? Estimates of Consanguinity to promote feelings of kinship and empathy 2015-05-20T17:10:37.941Z
Nick Bostrom's TED talk on Superintelligence is now online 2015-04-27T15:15:21.481Z
3-day Solstice in Leipzig, Germany: small, nice, very low cost, includes accommodation, 19th-21st Dec 2014-10-09T16:38:06.739Z
In order to greatly reduce X-risk, design self-replicating spacecraft without AGI 2014-09-20T20:25:36.802Z
Talking to yourself: A useful thinking tool that seems understudied and underdiscussed 2014-09-09T16:56:38.149Z
[link] The ethics of genetically enhanced monkey slaves 2014-02-20T09:40:56.517Z
A big Singularity-themed Hollywood movie out in April offers many opportunities to talk about AI risk 2014-01-07T17:48:11.363Z
Measuring lethality in reduced expected heartbeats 2014-01-03T14:14:40.145Z
Meetup : Secular Solstice Celebration! (And the Inauguration of the LW Leipzig Community) 2013-11-30T12:42:31.770Z
[Link] Cognitive biases about violence as a negotiating tactic 2013-10-25T11:43:09.909Z
Teaching rationality to kids? 2013-10-16T12:38:25.199Z
Techniques to consciously activate a rationalist self-image 2013-08-30T00:01:14.770Z
The Fermi paradox as evidence against the likelyhood of unfriendly AI 2013-08-01T18:46:53.630Z
Business Insider: "They Finally Tested The 'Prisoner's Dilemma' On Actual Prisoners — And The Results Were Not What You Would Expect" 2013-07-24T12:44:05.763Z
Can we make Drake-like Fermi estimates of expected distance to the next planet with primitive, sentient or self-improving life? 2013-07-10T01:34:36.397Z
Google's Executive Chairman Eric Schmidt: apparently a transhumanist 2013-04-25T00:36:41.935Z
Anybody want to meet in Leipzig, Germany? 2013-04-03T22:53:50.516Z
Caelum est Conterrens: I frankly don't see how this is a horror story 2013-03-06T10:31:48.001Z
My simple hack for increased alertness and improved cognitive functioning: very bright light 2013-01-18T13:43:01.031Z
Replaceability as a virtue 2012-12-12T07:53:48.868Z

Comments

Comment by chaosmage on Are most personality disorders really trust disorders? · 2024-02-11T21:47:30.664Z · LW · GW

"The Social Leap" by William von Hippel. He basically says we diverged from chimps when we left the forests for the savannah not only by becoming more cooperative (standard example: sclera that make our focus of attention common knowledge) but also by becoming much more flexible in our social behaviors, cooperating or competing much more dependent on context, over the last six million years.

I have tried and failed to find a short quote in it to paste here. It's a long and occasionally meandering book, much more like the anthropological literature than the rationalist literature.

Comment by chaosmage on Would you have a baby in 2024? · 2023-12-27T07:46:48.942Z · LW · GW

I didn't say the risk was "very high" (which would indeed be nonsense), I said it is non-zero. And I personally know two men who were tricked into becoming fathers.

And the thing about at-least-average intelligence is that only 50% of the population has it. For both partners to have it has to be (slightly) less likely than that.
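
The intuition above can be sketched numerically. This is my own toy simulation, not something from the original comment: it assumes partner IQs are bivariate normal, and the correlation value used for assortative mating is an illustrative guess, not a measured figure.

```python
# Toy simulation (my illustration): chance that BOTH partners are at or
# above the population median, for independent vs. correlated partner IQs.
# The correlation 0.4 for assortative mating is an assumed, illustrative value.
import random

random.seed(0)

def simulate(rho, trials=100_000):
    """Fraction of couples where both standardized IQ draws land above 0,
    with the second draw correlated to the first at level rho."""
    both = 0
    for _ in range(trials):
        z1 = random.gauss(0, 1)
        # Correlated second draw: z2 = rho*z1 + sqrt(1 - rho^2) * noise
        z2 = rho * z1 + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
        if z1 > 0 and z2 > 0:
            both += 1
    return both / trials

print(simulate(0.0))  # independent partners: about 0.25
print(simulate(0.4))  # assortative mating: higher, but still below 0.5
```

With fully independent partners the joint probability is about 0.25; assortative mating pulls it back up toward, but never past, 0.50, which is the "(slightly) less likely than that" point.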

Comment by chaosmage on Would you have a baby in 2024? · 2023-12-25T12:10:55.419Z · LW · GW

PSA: you have less control over whether you have kids, or how many you have, than people generally believe. There are biological problems you might not know you have, there are women who lie about contraception, there are hormonal pressures you won't feel till you reach a certain age, there are twins and stillbirths, and most of all there are super horny split-second decisions in the literal heat of the moment that your system 2 is too slow to stop.

I understand this doesn't answer the question, I just took the opportunity to share a piece of information that I consider not well-understood enough. Please have a plan for the scenario where your reproductive strategy doesn't work out.

Comment by chaosmage on I learn better when I frame learning as Vengeance for losses incurred through ignorance, and you might too · 2023-12-21T13:05:26.125Z · LW · GW

This is the other, more in-depth post I was preparing.

https://www.lesswrong.com/posts/5SbwfQgHCoGRG9LQ9/inside-view-outside-view-and-opposing-view

Comment by chaosmage on I learn better when I frame learning as Vengeance for losses incurred through ignorance, and you might too · 2023-12-20T07:53:46.391Z · LW · GW

I continue to stand by this post.

I believe that in our studies of human cognition, we have relatively neglected the aggressive parts of it. We understand they're there, but they're kind of yucky and unpleasant, so they get relatively little attention. We can and should go into more detail, try to understand, harness and optimize aggression, because it is part of the brains that we're trying to run rationality on.

I am preparing another post to do this in more depth.

Comment by chaosmage on You should just smile at strangers a lot · 2023-09-26T04:52:10.198Z · LW · GW
  1. None
  2. Leipzig, Germany
Comment by chaosmage on Guide to rationalist interior decorating · 2023-07-06T10:38:44.305Z · LW · GW

I'd like to complain that the original post popularizing really bright lights was mine in 2013: "My simple hack for increased alertness and improved cognitive functioning: very bright light". This was immediately adopted at MIRI and (I think obviously) led to the Lumenator described by Eliezer three years later.

Comment by chaosmage on What is the foundation of me experiencing the present moment being right now and not at some other point in time? · 2023-06-18T09:11:51.656Z · LW · GW

I suspect it is the creation of memories. You don't experience time when you're not creating memories. Memories are some kind of very subtle object that lasts from one moment to (at least) the next, so they leave a very subtle trace in causality, and the input that goes into them is correlated in time, because it is (some small selection from) the perceptions and representations you had simultaneously when you formed the memory.

I even believe you experience a present moment particularly intensely when you're creating a long-term memory - I use this to consciously choose to create long-term memories, and it subjectively seems to work.

Comment by chaosmage on Why "AI alignment" would better be renamed into "Artificial Intention research" · 2023-06-15T16:26:32.978Z · LW · GW

I fail to see how that's a problem.

Comment by chaosmage on What cognitive biases feel like from the inside · 2023-01-03T17:09:32.857Z · LW · GW

That's exactly right. It would be much better to know a simple method for distinguishing overconfidence from actually being right, without a lot of work. In the absence of that, maybe tables like this can help people choose more epistemic humility.

Comment by chaosmage on The biological function of love for non-kin is to gain the trust of people we cannot deceive · 2022-11-12T09:06:08.549Z · LW · GW

Well of course there are no true non-relatives; even the sabertooth and the antelopes are distant cousins. The question is how much you're willing to give up for how distant a cousin. Here I think the mechanism I describe changes the calculus.

I don't think we know enough about the lifestyles of cultures/tribes in the ancestral environment, except we can be pretty sure they were extremely diverse. And all cultures we've ever found have some kind of incest taboo that promotes mating between members of different groups.

Comment by chaosmage on Why I think strong general AI is coming soon · 2022-10-16T06:07:05.914Z · LW · GW

I am utterly in awe. This kind of content is why I keep coming back to LessWrong. Going to spend a couple of days or weeks digesting this...

Comment by chaosmage on What cognitive biases feel like from the inside · 2022-10-15T15:18:17.161Z · LW · GW

Welcome. You're making good points. I intend to make versions of this geared to various audiences but haven't gotten around to it.

Comment by chaosmage on Leipzig, Germany – ACX Meetups Everywhere 2022 · 2022-08-26T15:30:44.105Z · LW · GW

I will attempt to attend this.

Comment by chaosmage on Security Mindset: Lessons from 20+ years of Software Security Failures Relevant to AGI Alignment · 2022-07-10T16:56:58.658Z · LW · GW

A big bounty creates perverse incentives where one guy builds a dangerous AI in a jurisdiction where that isn't a crime yet, and his friend reports him so they can share the bounty.

Comment by chaosmage on What cognitive biases feel like from the inside · 2022-06-17T11:49:57.593Z · LW · GW

I did not know this, and I like it. Thank you!

Comment by chaosmage on AI safety: the ultimate trolley problem · 2022-04-20T15:05:06.324Z · LW · GW

No it doesn't mean you shouldn't be consequentialist. I'm challenging people to point out the flaw in the argument.

If you find the argument persuasive, and think the ability to "push the fat man" (without getting LW tangled up in the investigation) might be a resource worth keeping, the correct action to take is not to comment, and perhaps to downvote.

Comment by chaosmage on Mental nonsense: my anti-insomnia trick · 2022-04-09T12:16:48.898Z · LW · GW

I find it too hard to keep things unrelated over time, so I prefer to keep thinking up new objects at what passes for random to my sleepy mind.

Comment by chaosmage on Mental nonsense: my anti-insomnia trick · 2022-03-29T07:23:36.765Z · LW · GW

Yes, my method is to visualize a large collection of many small things that have no relation to each other, like a big shelf of random stuff. Sometimes I throw them in all directions. This is the best method I have found.

Comment by chaosmage on Irrational Modesty · 2021-07-12T19:03:40.080Z · LW · GW

I think seeking status and pointing out you already have some are two different things. When writing an analysis, it is quite relevant to mention what expertise or qualifications you have concerning the subject matter.

Comment by chaosmage on Irrational Modesty · 2021-06-21T12:28:24.548Z · LW · GW

I'd go as far as to say justified pride and status-seeking is actually a virtue and a moral duty!

Why? Because status is a signal: high-status people are worth imitating. That isn't all status is, but it is a very central benefit that justifies its existence. If you are really successful and you're hiding that, you're refusing to share valuable information: others might want to check what you're doing right and imitate it, hopefully becoming more successful themselves.

And why would you refuse to seek justified status? I see only three reasons.

  • Fear of embarrassment when you reach too high.
  • Deliberate deception in order to benefit from asymmetric information.
  • Outmoded cultural traditions from back when it was hard to check someone else's work and see whether it is actually that good.

I don't think any of these are good reasons.

Comment by chaosmage on Thoughts on the Repugnant Conclusion · 2021-03-08T13:45:41.336Z · LW · GW

I will reluctantly concede this is logical. If you want to optimize for maximal happiness, find out what the minimal physical correlate of happiness is, and build tiny replicators that do nothing but have a great time. Drown the planet in them. You can probably justify the expense of building ships and ship builders with a promise of more maximized happiness on other planets.

But this is basically a Grey Goo scenario. Happy Goo.

Yes it's a logical conclusion, yes it is repugnant, and I think it's a reductio ad absurdum of the whole idea of optimizing for conscious states. An even more dramatic one than wild animal suffering.

Comment by chaosmage on What cognitive biases feel like from the inside · 2021-02-27T16:14:46.576Z · LW · GW

I think this is off topic here, except it does sort of the same thing by breaking principles down into concrete statements. That said, I think that site is exceptionally well-written and designed. I wish other persuasion projects adopted that kind of approach.

Comment by chaosmage on What's your best alternate history utopia? · 2021-02-22T09:42:59.136Z · LW · GW

Oh I know how!

When Einstein figured out spacetime, we rethought not only physics, but also other faulty conclusions from our false assumption that reality is three-dimensional. Everything is moving through four dimensions, including us, and that means we're four-dimensional too, although our consciousness is limited to three-dimensional moments.

We started to see ourselves as growing through time like four-dimensional snakes. Or branches, really, since we all branched off our four-dimensional mothers when we were born. And by simple recursion we realized that in four dimensions, we are all branched off common ancestors, way back to the origin of life, and all other life-forms are merely separate-seeming branches of the only life on Earth, the evolutionary tree of life. All of our bodies and minds are extensions of the same thing, just like our fingers are extensions of the same hand.

Lots of religious and mystically inclined people got very excited about this and wanted to believe this is something like proof of God, or all life is conscious, or there's some grand plan, but we insisted on plain physics: nothing about causality has changed, life isn't smart or intentional or conscious, but life is us, and that merely means our self-image was as mistaken as our image of physics. We had to stop identifying with consciousness, which made a lot of problems with consciousness more tractable, and started to identify with the single process that produces all our separate consciousnesses.

That necessitated a lot of re-thinking of ethics, because consciousness wasn't so fundamental anymore and suffering of conscious beings started to look more incidental. We decided that our minds were created by us/life to serve its/our purpose, and life's/our purpose, while not conscious, looked from revealed preferences like survival, dissemination and diversification. So that became our yardstick for ethical behavior: good is what helps life to survive, spread, diversify and, somewhere among the stars, maybe meet another one.

Comment by chaosmage on AR Glasses: Much more than you wanted to know · 2021-01-16T08:09:08.814Z · LW · GW

Awesome article, I would only add another huge AR-enabled transformation that you missed.

AR lets you stream your field of view to someone and hear their comments. I hear this is already being used in airplane inspection: a low-level technician at some airfield can look at an engine and stream their camera to a faraway specialist for that particular engine, and get their feedback on whether it is fine, or instructions on what to do for diagnostics and repair. The same kind of thing is apparently being explored for remote repairs of things like oil pipelines, where quick repair is very valuable but the site of the damage can be quite remote. I think it also makes a lot of sense for spaceflight, where an astronaut could run an experiment while streaming to, and being instructed by, the scientists who designed it. As the tech becomes cheaper and more mature, less extreme use cases begin to make economic sense.

I imagine this leads into a new type of job that I guess could be called an avatar: someone who has AR glasses and a couple of knowledgeable people they can impersonate. This lets the specialist stay at home and lend their knowledge to lots of avatars and complete more tasks than they could have done in person. Throw in a market for avatars and specialists to find each other and you can give a lot of fit but unskilled youngsters and skilled but slow seniors new jobs.

And this makes literal hands-on training much cheaper. You can put on AR glasses and connect to an instructor who will instruct you to try out stuff, explain what is going on and give you the most valuable kind of training. This already exists for desk jobs but now you can do it with gardening or cooking or whatever.

Comment by chaosmage on What cognitive biases feel like from the inside · 2021-01-08T06:47:06.030Z · LW · GW

Did he read it later?

Comment by chaosmage on Covid 1/7: The Fire of a Thousand Suns · 2021-01-08T06:45:12.106Z · LW · GW

South Africa, and Brazil where the South Africa strain is apparently spreading, are in summer right now. How are temperatures going to save us from that one?

Comment by chaosmage on What cognitive biases feel like from the inside · 2020-08-26T17:35:09.622Z · LW · GW

Did you share it with your son, and if so what was the result?

Comment by chaosmage on What cognitive biases feel like from the inside · 2020-03-13T14:08:51.965Z · LW · GW

Awesome! Thanks a lot!

Comment by chaosmage on What cognitive biases feel like from the inside · 2020-01-04T12:53:17.177Z · LW · GW

I'm fantasizing about infographics with multiple examples of the same bias, an explanation how they're all biased the same way, and very brief talking points like "we're all biased, try to avoid this mistake, forgive others if they make it, learn more at LessWrong.com".

They could be mass produced with different examples. Like one with a proponent of Minimum Wage and an opponent of it, arguing under intense confirmation bias as described in the table above, with a headline like "Why discussions about Minimum Wage often fail". Another one "Why discussions of Veganism often fail", another one "Why discussions of Gun Control often fail" etc. Each posted to the appropriate subreddits etc. Then evolve new versions based on what got the most upvotes.

But I am completely clueless about how to do infographics. I'd love for someone to grab the idea and run with it. But realistically I should probably try to half-ass something and hope it shows enough potential for someone with the skills to take pity.

Or at least get more eyes on it to further improve the concept. Getting feedback from fellow LessWrongers was extremely helpful for development thus far.

Comment by chaosmage on What cognitive biases feel like from the inside · 2020-01-03T14:33:02.270Z · LW · GW

I'm using pictures because I couldn't get either editor to accept a proper table.

Comment by chaosmage on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-12-13T14:58:11.410Z · LW · GW

In a car park? But they will be way more densely packed than cars in car parks, because no humans need access. The cabins get placed there and retrieved from there by autonomous engines.

Comment by chaosmage on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-06T10:04:43.039Z · LW · GW

Here are more use cases.

  • A specialized cabin for your kid to drive to/from school alone, or for your toddler to drive to/from kindergarten alone. Robotaxis will definitely be used for this because it is super valuable to parents. But a small specialized cabin would be more economical than a standard (typical car size) cabin fitted with child seats.
  • Visiting dialysis station.
  • Specialized delivery cabins for particular types of cargo: refrigerated, extra suspension, stuff for transporting animals. We do this with trucks, but trucks are big because they're optimized to need fewer human drivers per mass of cargo, and once that restriction is gone the disadvantages of big trucks should incentivize a move to smaller cargo vehicles.

I think these three are major enough that even if we stay with single vehicles, these use cases would merit development of specialized robotaxis to cover them, sooner or later. But a tug and cabin system gets there sooner.

Comment by chaosmage on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-05T09:25:09.331Z · LW · GW

I think I made a mistake using the word "accommodation". (English isn't my first language.) What I meant is basically "where the people and cargo are stored safely and comfortably". That can be something big to live in, but it could also be a single seat cabin for a commute.

The point is you can have several different types for different purposes, because you don't need to buy an expensive motor and computer with each of them.

Comment by chaosmage on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-05T09:15:51.791Z · LW · GW

Good points.

Agree about the battery swaps, but swapping a tug would be easier.

Cargo containers are definitely like this, but they're big because it is more economical to spread the cost of the driver over a large amount of cargo. Cargo wagons/modules could be in a wide range of sizes, including small/fast ones that are more like courier service than like bulk transport.

Comment by chaosmage on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-04T15:23:08.116Z · LW · GW

You don't need a parking spot - the system can still be used as a robotaxi, it just has additional uses.

You don't need to be where your wagon is, you can send it places. Because of that, you could even rent out your wagon (say you offer a rental sound system or a mobile massage parlor).

Comment by chaosmage on Elon Musk is wrong: Robotaxis are stupid. We need standardized rented autonomous tugs to move customized owned unpowered wagons. · 2019-11-04T14:52:11.624Z · LW · GW

If you're a first world citizen and able to spend $35k+ on a car, sure. Most of the cars that need replacing are way cheaper, and their replacement needs to be way cheaper too.

Comment by chaosmage on Winter Solstice 2018 Roundup · 2018-11-28T09:04:26.007Z · LW · GW

There is a Secular Solstice in Berlin, Germany, but it happens in a small apartment so it has to be invitation only and is already full AFAIK.

Frankfurt, Germany might again be doing one but I do not know particulars.

Leipzig, Germany is not having one this year due to the place where the last couple of Solstices happened being currently infested with toddlers.

Comment by chaosmage on Embedded Agents · 2018-11-01T08:47:51.237Z · LW · GW

The text is beautifully condensed. And the handwritten style does help it look casual/inviting.

But the whole thing loaded significantly slower than I read. How many megabytes is this post? I haven't waited this long for a website to load for years.

Comment by chaosmage on On Doing the Improbable · 2018-10-29T13:11:25.300Z · LW · GW

What really helps is mortality and our inborn need to leave a legacy. It is better to pick a project with a low probability of success than none at all. That can help you stick with something you only estimate to have a low chance of success, at least long enough for sunk costs to kick in. Does for me anyway.

This mechanism may only work for one-man projects, or in tight-knit groups like bands of musicians. Your contribution to a big project doesn't feel like a legacy to the same degree.

Comment by chaosmage on [deleted post] 2018-10-28T17:30:20.522Z

That sounds a *lot* like http://slatestarcodex.com/2018/04/01/the-hour-i-first-believed/ .

It does not sound a lot like any existing variant of Panpsychism. Since the word isn't doing any work here, I suggest you do without it.

Comment by chaosmage on An optimistic explanation of the outrage epidemic · 2018-07-15T20:12:20.418Z · LW · GW

No, the degree of outrage also depends on closeness to the victim. In this case Jews will feel closer to Israelis (the victims of Palestinians), and Muslims will feel closer to Palestinians (the victims of Israelis) so that's what they're outraged about. Closeness to the perpetrator is a factor I think, but I don't expect it is stronger than closeness to the victim.

Comment by chaosmage on Understanding is translation · 2018-06-01T15:35:17.596Z · LW · GW

Yes! Thank you!

I've had similar ideas for a long time. I've translated three books and find that I think of many acts of communication as translations. In particular, I find it useful to think of misunderstandings as mistranslations.

To think of thinking/speaking styles as languages just plain makes sense, and I feel that when people "are on the same wavelength" what is really happening is that they're (somewhat unusually) actually speaking the same language.

I don't use this concept for processes inside a single mind, though. Might be worth thinking about, but a term that denotes work with explicit communications does not seem like a good fit for processes that are almost entirely implicit.

Comment by chaosmage on Eight political demands that I hope we can agree on · 2018-05-01T23:04:20.364Z · LW · GW

#6 is really "we want legal euthanasia" right? Might as well say it like it is.

I think legal prostitution belongs on the list as well.

And maybe an end to tax advantages for churches? Because that's direct state funding for irrationality.

Comment by chaosmage on Mythic Mode · 2018-02-24T10:43:05.182Z · LW · GW

This fake frameworks thing looks quite clearly like Chaos Magic, and the reference to the Book of the Law quote "wine and strange drugs" is a dog whistle to that effect.

Some chaos magicians like to use drug experiences as ready-made containers for what Val calls the Mythic Mode. Some drugs can both increase the ability to suspend disbelief while inside the experience and make it easier to distance oneself from it when outside of it. A good description of techniques for this, with all non-scientific woo-woo strictly optional, is Julian Vayne's "Getting Higher - The Manual of Psychedelic Ceremony".

There's more where that came from. I like to recommend Philip Farber's "Meta-Magick - The Book of Atem" with a bunch of visualization-focused techniques very similar to the ones Kaj Sotala has described and demonstrated to great effect. Julian Vayne's and Nikki Wyrd's "The Book of Baphomet" is perhaps the best example of an artificial myth created with great artistic and poetic skill and then inhabited with significant personal results.

Comment by chaosmage on An Equilibrium of No Free Energy · 2017-11-05T18:57:56.484Z · LW · GW

I posted the idea of installing very bright lights on LW five years ago, and Eliezer commented there, so I give myself credit for at least making that spontaneous idea more likely. And as it happens, I've been thinking about the failings of light boxes for SAD in the meantime.

What happened is that a few people experimented with light therapy, got success with 2,500 lux for two hours, decided two hours per day was infeasible outside the lab, found that they could get the same result by dividing the time but multiplying the light intensity, and then... just... stopped. They did studies with 10,000 lux boxes, and that's a relatively expensive study, so you'd better cooperate with a producer of such boxes. So you get some type of kickback, and suddenly nobody's interested in studying whether stronger, cheaper lights are even better. Light boxes became a medical device and magically became just as expensive as medical insurers would tolerate. That they don't work for everyone was expected, because no depression treatment works for everyone (maybe except electroconvulsive therapy, and ketamine, but that came later). And LEDs only became cheap enough recently (five years ago they still weren't clearly the cheapest option), so going much beyond 10,000 lux presented enough of a technical challenge to make further trials pretty expensive, until recently.

Right now, you could probably do a study with SAD sufferers who have tried light boxes and found them insufficient. Give them a setup that produces something like 40,000 lux and fits in a normal ceiling fixture, so they can have it running while they do things rather than have to make time to sit in front of it. For a double-blind control design, maybe give one group twice the brightness of the other? Have your participants log every day how much time they spent in the room with the lamp running, and how much time they spent outside. Don't give them money, but let them keep the lamp if they continue mailing in their filled-out questionnaires. Should be doable at a hundred dollars per participant, and without ever physically meeting them. You still need six figures to run that study at a large enough size, and no light box maker is going to fund you.
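
For scale, the "large enough size" point can be checked with a standard two-sample back-of-envelope sample-size formula. This sketch is mine, not the comment's; the standardized effect size d is an illustrative assumption, since nobody knows how big the dose-response difference between the two brightness arms would be.

```python
# Back-of-envelope sample size for a two-arm light study (my own sketch;
# the effect size d is an illustrative assumption, not a known quantity).
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    """Two-sample normal approximation: participants per arm needed to
    detect a standardized mean difference d on a depression score."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_b = NormalDist().inv_cdf(power)          # desired power
    return ceil(2 * ((z_a + z_b) / d) ** 2)

# A subtle difference between the two brightness arms (small effect, d = 0.2):
print(n_per_group(0.2))  # 393 per arm, so roughly 800 participants total
```

At a hundred dollars per participant, roughly 800 participants plus lamps and overhead lands the budget in the low six figures, which matches the estimate above.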

Comment by chaosmage on New business opportunities due to self-driving cars · 2017-09-13T15:41:58.446Z · LW · GW

Yes. I wonder how hard it'll be to sleep in the things. I find sleeper trains generally a bad place to sleep, but that's mostly because of the other passengers.

Comment by chaosmage on New business opportunities due to self-driving cars · 2017-09-12T18:50:28.428Z · LW · GW

I should be disappointed, but disappointment requires surprise.

Comment by chaosmage on Inconsistent Beliefs and Charitable Giving · 2017-09-12T13:12:00.548Z · LW · GW

Don't worry, you didn't actually come across that way, Lumifer is just being a jerk again. You're fairly new here, so you don't yet know Lumifer prefers that kind of comment. Sorry about him, and about LW not having a mute button.

Comment by chaosmage on New business opportunities due to self-driving cars · 2017-09-12T13:06:38.855Z · LW · GW

I completely agree with everything you said here.