Posts

AI Safety Chatbot 2023-12-21T14:06:48.981Z
Holly Elmore and Rob Miles dialogue on AI Safety Advocacy 2023-10-20T21:04:32.645Z
Stampy's AI Safety Info soft launch 2023-10-05T22:13:04.632Z
AI Safety Info Distillation Fellowship 2023-02-17T16:16:45.732Z
All AGI Safety questions welcome (especially basic ones) [~monthly thread] 2023-01-26T21:01:57.920Z
All AGI Safety questions welcome (especially basic ones) [~monthly thread] 2022-11-01T23:23:04.146Z
All AGI safety questions welcome (especially basic ones) [July 2022] 2022-07-16T12:57:44.157Z
Implications of automated ontology identification 2022-02-18T03:30:53.795Z
Robert Miles's Shortform 2022-02-15T03:21:30.586Z

Comments

Comment by Robert Miles (robert-miles) on Holly Elmore and Rob Miles dialogue on AI Safety Advocacy · 2023-10-29T00:46:02.339Z · LW · GW

Covid was a big learning experience for me, but I'd like to think about more than one example. Covid is interesting because, compared to my examples of birth control and animal-free meat, it seems like with covid humanity smashed the technical problem out of the park, but still overall failed by my lights because of the political situation.

How likely does it seem that we could get full marks on solving alignment but still fail due to politics? I tend to think of building a properly aligned AGI as a straightforward win condition, but that's not a very deeply considered view. I guess we could solve it on a whiteboard somewhere but for political reasons it doesn't get implemented in time?

Comment by Robert Miles (robert-miles) on (Confusion Phrases) AKA: Things You Might Say or Think When You're Confused to Use as Triggers for Internal TAPs · 2023-09-17T10:57:16.944Z · LW · GW

I think almost all of these are things that I'd only think after I'd already noticed confusion, and most are things I'd never say in my head anyway. A little way into the list I thought, "Wait, did he just ask ChatGPT for different ways to say 'I'm confused'?"

I expect there are things that pop up in my inner monologue when I'm confused about something, that I wouldn't notice, and it would be very useful to have a list of such phrases, but your list contains ~none of them.

Edit: Actually the last three are reasonable. Are they human written?

Comment by Robert Miles (robert-miles) on Probabilistic argument relationships and an invitation to the argument mapping community · 2023-09-10T16:32:59.267Z · LW · GW

One way of framing the difficulty with the lanternflies thing is that the question straddles the is-ought gap. It decomposes pretty cleanly into two questions: "What states of the universe are likely to result from me killing vs not killing lanternflies" (about which Bayes Rule fully applies and is enormously useful), and "Which states of the universe do I prefer?", where the only evidence you have will come from things like introspection about your own moral intuitions and values. Your values are also a fact about the universe, because you are part of the universe, so Bayes still applies I guess, but it's quite a different question to think about.
If you have well-defined values, for example some function from states (or histories) of the universe to real numbers, such that larger numbers represent universe states that you would always prefer over smaller numbers, then every "should I do X or Y" question has an answer in terms of those values. In practice we'll never have that, but it's still worth thinking separately about "What are the expected consequences of the proposed policy?" and "What consequences do I want?", which a 'should' question implicitly mixes together.
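Concretely, the decomposition looks something like this. A toy sketch; the actions, probabilities, and utility numbers are all invented for illustration:

```python
# Separating "what will happen?" from "what do I want?".

actions = ["kill_lanternflies", "leave_them"]

# P(outcome | action): the factual part, where Bayes fully applies.
# (Made-up numbers.)
world_model = {
    "kill_lanternflies": {"trees_survive": 0.8, "trees_damaged": 0.2},
    "leave_them":        {"trees_survive": 0.3, "trees_damaged": 0.7},
}

# Utility over universe states: the values part, which introspection has
# to supply; it can't be read off the world model.
utility = {"trees_survive": 10.0, "trees_damaged": -5.0}

def expected_utility(action):
    return sum(p * utility[outcome]
               for outcome, p in world_model[action].items())

# The "should" question is just an argmax once both parts are explicit.
best = max(actions, key=expected_utility)
```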

Comment by Robert Miles (robert-miles) on Solomonoff induction still works if the universe is uncomputable, and its usefulness doesn't require knowing Occam's razor · 2023-06-18T23:12:55.850Z · LW · GW

I've always thought of it like, it doesn't rely on the universe being computable, just on the universe having a computable approximation. So if the universe is computable, SI does perfectly, if it's not, SI does as well as any algorithm could hope to.
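To gesture at why, in the standard notation (a sketch, assuming the usual setup with a universal prefix machine U):

```latex
% Solomonoff prior: weight each program p by its length, summing over
% programs whose output begins with the observed string x:
M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-|p|}

% Dominance: for any computable measure \mu (e.g. a computable
% approximation of our universe), with K(\mu) its description length:
M(x) \;\geq\; 2^{-K(\mu)}\,\mu(x)

% So M's cumulative prediction error is within a constant (roughly
% K(\mu) ln 2) of \mu's: if the universe is computable, SI nails it;
% if not, SI tracks the best computable approximation.
```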

Comment by Robert Miles (robert-miles) on Why libertarians are advocating for regulation on AI · 2023-06-15T23:30:37.838Z · LW · GW

A slightly surreal experience to read a post saying something I was just tweeting about, written by a username that could plausibly be mine.

Comment by Robert Miles (robert-miles) on The Sharp Right Turn: sudden deceptive alignment as a convergent goal · 2023-06-07T10:35:50.941Z · LW · GW

Do we even need a whole new term for this? Why not "Sudden Deceptive Alignment"?

Comment by Robert Miles (robert-miles) on Meta-conversation shouldn't be taboo · 2023-06-05T09:15:35.696Z · LW · GW

I think in some significant subset of such situations, almost everyone present is aware of the problem, so you don't always have to describe the problem yourself or explicitly propose solutions (which can seem weird from a power dynamics perspective). Sometimes just drawing the group's attention to the meta level at all, initiating a meta-discussion, is sufficient to allow the group to fix the problem.

Comment by Robert Miles (robert-miles) on Adumbrations on AGI from an outsider · 2023-05-25T13:17:06.751Z · LW · GW

This is good and interesting. Various things to address, but I only have time for a couple at random.

I disagree with the idea that true things necessarily have explanations that are both convincing and short. In my experience you can give a short explanation that doesn't address everyone's reasonable objections, or a very long one that does, or something in between. If you understand some specific point about cutting edge research, you should be able to properly explain it to a lay person, but by the time you're done they won't be a lay person any more! If you restrict your explanation to "things you can cover before the person you're explaining to decides this isn't worth their time and goes away", many concepts simply cannot ever be explained to most people, because they don't really want to know.

So the core challenge is staying interesting enough for long enough to actually get across all of the required concepts. On that point, have you seen any of my videos, and do you have thoughts on them? You can search "AI Safety" on YouTube.

Similarly, do you have thoughts on AISafety.info?

Comment by Robert Miles (robert-miles) on Proposal: we should start referring to the risk from unaligned AI as a type of *accident risk* · 2023-05-16T21:34:53.122Z · LW · GW

Are we not already doing this? I thought we were already doing this. See, for example, this talk I gave in 2018:

https://youtu.be/pYXy-A4siMw?t=35

I guess we can't be doing it very well, though.

Comment by Robert Miles (robert-miles) on Better debates · 2023-05-15T12:35:52.418Z · LW · GW

Structured time boxes seem very suboptimal; steamrollering is easy enough for a moderator to deal with: "Ok, let's pause there for X to respond to that point."

Comment by Robert Miles (robert-miles) on Better debates · 2023-05-10T21:59:32.347Z · LW · GW

This would make a great YouTube series

Edit: I think I'm going to make this a YouTube series

Comment by Robert Miles (robert-miles) on Is GPT-N bounded by human capabilities? No. · 2023-05-08T12:17:25.597Z · LW · GW

Other tokens that require modelling more than a human:

  • The results sections of scientific papers - requires modelling whatever the experiment was about. If humans could do this, they wouldn't have needed to run the experiment.
  • Records of stock price movements - in principle, getting zero loss on this requires insanely high levels of capability.

Comment by Robert Miles (robert-miles) on Explaining “Hell is Game Theory Folk Theorems” · 2023-05-06T00:20:29.156Z · LW · GW

Compare with this from Meditations on Moloch:

Imagine a country with two rules: first, every person must spend eight hours a day giving themselves strong electric shocks. Second, if anyone fails to follow a rule (including this one), or speaks out against it, or fails to enforce it, all citizens must unite to kill that person. Suppose these rules were well-enough established by tradition that everyone expected them to be enforced. So you shock yourself for eight hours a day, because you know if you don’t everyone else will kill you, because if they don’t, everyone else will kill them, and so on.

Seems to me a key component here, which flows naturally from "punish any deviation from the profile" is this pattern of 'punishment of non-punishers'.

Comment by Robert Miles (robert-miles) on Formalizing the "AI x-risk is unlikely because it is ridiculous" argument · 2023-05-04T08:23:57.546Z · LW · GW

The historical trends thing is prone to standard reference class tennis. Arguments like "Every civilisation has collapsed, why would ours be special? Something will destroy civilisation, how likely is it that it's AI?". Or "almost every species has gone extinct. Something will wipe us out, could it be AI?". Or even "Every species in the genus homo has been wiped out, and the overwhelmingly most common cause is 'another species in the genus homo', so probably we'll do it to ourselves. What methods do we have available?".

These don't point to AI particularly; they remove the unusual-seemingness of doom in general.

Comment by Robert Miles (robert-miles) on Does descaling a kettle help? Theory and practice · 2023-05-03T23:41:23.156Z · LW · GW

Oh, I missed that! Thanks. I'll delete it, I guess.

Comment by Robert Miles (robert-miles) on Does descaling a kettle help? Theory and practice · 2023-05-03T20:18:31.965Z · LW · GW

Comment by Robert Miles (robert-miles) on In favor of steelmanning · 2023-05-01T23:44:40.706Z · LW · GW

I think there's also a third thing that I would call steelmanning, which is a rhetorical technique I sometimes use when faced with particularly bad arguments. If strawmanning introduces new weaknesses to an argument and then knocks it down, steelmanning fixes weaknesses in an argument and then knocks it down anyway. It looks like "this argument doesn't work because X assumption isn't true, but you could actually fix that like this so you don't need that assumption. But it still doesn't work because of Y, and even if you fix that by such and such, it all still fails because of Z". You're kind of skipping ahead in the debate, doing your opponent's job of fixing up their argument as it's attacked, and showing that the argument is too broken to fix up. This is not a very nice way to act, it's not truth seeking, and you'd better be damn sure that you're right, and make sure to actually repair the argument well rather than just putting on a show of it. But done right, in a situation that calls for it, it can produce a very powerful effect. This should probably have a different name, but I still think of it as making and then knocking down a steel man.

Comment by Robert Miles (robert-miles) on Bing Chat is blatantly, aggressively misaligned · 2023-02-16T11:57:15.026Z · LW · GW

The main reason I find this kind of thing concerning is that I expect this kind of model to be used as part of a larger system, for example the descendants of systems like SayCan. In that case you have the LLM generate plans in response to situations, break the plans down into smaller steps, and eventually pass the steps to a separate system that translates them to motor actions. When you're doing chain-of-thought reasoning and explicit planning, some simulacrum layers are collapsed - having the model generate the string "kill this person" can in fact lead to it killing the person.

This would be extremely undignified of course, since the system is plotting to kill you in plain-text natural language. It's very easy to catch such things with something as simple as an LLM that's prompted to look at the ongoing chain of thought and check if it's planning to do anything bad. But you can see how unreliable that is at higher capability levels. And we may even be that undignified in practice, since running a second model on all the outputs ~doubles the compute costs.
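To be concrete about what such a monitor looks like, here's a minimal sketch; `llm` stands in for a hypothetical text-in, text-out call to whatever model API you're using:

```python
# Sketch of a chain-of-thought monitor: a second model reads the
# planner's chain of thought and flags anything that looks dangerous.

MONITOR_PROMPT = """You are a safety monitor. Below is the chain of thought
of another AI system. Reply HARMFUL if it is planning anything dangerous
or deceptive, otherwise reply OK.

Chain of thought:
{cot}
"""

def chain_of_thought_ok(cot: str, llm) -> bool:
    verdict = llm(MONITOR_PROMPT.format(cot=cot))
    return "HARMFUL" not in verdict.upper()

# Every step of the planner's output gets a second model call, which is
# where the ~2x compute overhead mentioned above comes from.
```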

Comment by robert-miles on [deleted post] 2023-01-08T00:56:27.090Z

Makes sense. I guess the thing to do is bring it to some bio-risk people in a less public way.

Comment by robert-miles on [deleted post] 2023-01-06T11:52:02.893Z

It's an interesting question, but I would suggest that when you come up with an idea like this, you weigh up the possible benefits of posting it on the public internet with the possible risks/costs. I don't think this one comes up as positive on balance.

I don't think it's a big deal in this case, but something to think about.

Comment by Robert Miles (robert-miles) on The No Free Lunch theorem for dummies · 2022-12-06T23:50:17.696Z · LW · GW

It's impossible to create a fully general intelligence, i.e. one that acts intelligently in all possible universes. But we only have to make one that works in this universe, so that's not an issue.
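For reference, the formal statement I have in mind (Wolpert and Macready's version, roughly):

```latex
% No Free Lunch: for any two optimisation algorithms a_1, a_2, summed
% over all possible objective functions f on a finite domain, the
% distribution over length-m evaluation histories d is identical:
\sum_{f} P(d \mid f, m, a_1) \;=\; \sum_{f} P(d \mid f, m, a_2)

% Averaged over *all* possible environments, no algorithm beats any
% other; but we only need one that works in the environment we have.
```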

Comment by Robert Miles (robert-miles) on Using GPT-Eliezer against ChatGPT Jailbreaking · 2022-12-06T22:21:49.858Z · LW · GW

> Please answer with yes or no, then explain your thinking step by step.

Wait, why give the answer before the reasoning? You'd probably get better performance if it thinks step by step first and only gives the decision at the end.
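Something like this instead (a hypothetical rewording, not from the original post):

```python
# Reordered prompt: reasoning first, verdict on the final line, so the
# yes/no token is generated conditioned on the reasoning.

PROMPT_SUFFIX = ("Explain your thinking step by step, then on the final "
                 "line answer with only 'yes' or 'no'.")

def extract_verdict(completion: str) -> str:
    # Parse the decision off the last line, after the reasoning.
    return completion.strip().splitlines()[-1].strip().lower()
```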

Comment by Robert Miles (robert-miles) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-17T22:34:35.435Z · LW · GW

Not a very helpful answer, but: If you don't also require computational efficiency, we can do some of those. Like, you can make AIXI variants. Is the question "Can we do this with deep learning?", or "Can we do this with deep learning or something competitive with it?"

Comment by Robert Miles (robert-miles) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-17T22:26:01.904Z · LW · GW

> I think they're more saying "these hypothetical scenarios are popular because they make good science fiction, not because they're likely." And I have yet to find a strong argument against the latter form of that point.

Yeah I imagine that's hard to argue against, because it's basically correct, but importantly it's also not a criticism of the ideas. If someone makes the argument "These ideas are popular, and therefore probably true", then it's a very sound criticism to point out that they may be popular for reasons other than being true. But if the argument is "These ideas are true because of <various technical and philosophical arguments about the ideas themselves>", then pointing out a reason that the ideas might be popular is just not relevant to the question of their truth.
Like, cancer is very scary and people are very eager to believe that there's something that can be done to help, and, perhaps partly as a consequence, many come to believe that chemotherapy can be effective. This fact does not constitute a substantive criticism of the research on the effectiveness of chemotherapy.

Comment by Robert Miles (robert-miles) on All AGI safety questions welcome (especially basic ones) [July 2022] · 2022-07-17T22:08:37.698Z · LW · GW

The approach I often take here is to ask the person how they would persuade an amateur chess player who believes they can beat Magnus Carlsen because they've discovered a particularly good opening with which they've won every amateur game they've tried it in so far.

Them: Magnus Carlsen will still beat you, with near certainty

Me: But what is he going to do? This opening is unbeatable!

Them: He's much better at chess than you, he'll figure something out

Me: But what though? I can't think of any strategy that beats this

Them: I don't know, maybe he'll find a way to do <some chess thing X>

Me: If he does X I can just counter it by doing Y!

Them: Ok if X is that easily countered with Y then he won't do X, he'll do some Z that's like X but that you don't know how to counter

Me: Oh, but you conveniently can't tell me what this Z is

Them: Right! I'm not as good at chess as he is and neither are you. I can be confident he'll beat you even without knowing your opener. You cannot expect to win against someone who outclasses you.

Comment by Robert Miles (robert-miles) on What Are You Tracking In Your Head? · 2022-07-01T19:43:20.425Z · LW · GW

I was thinking you had all of mine already, since they're mostly about explaining and coding. But there's a big one: When using tools, I'm tracking something like "what if the knife slips?". When I introspect, it's represented internally as a kind of cloud-like spatial 3D (4D?) probability distribution over knife locations, roughly co-extensional with "if the material suddenly gave or the knife suddenly slipped at this exact moment, what's the space of locations the blade could get to before my body noticed and brought it to a stop?". As I apply more force this cloud extends out, and I notice when it intersects with something I don't want to get cut. (Mutatis mutandis for other tools of course. I bet people experienced with firearms are always tracking a kind of "if this gun goes off at this moment, where does the bullet go" spatial mental object)

I notice I'm tracking this mostly because I also track it for other people and I sometimes notice them not tracking it. But that doesn't feel like "Hey you're using bad technique", it feels like "Whoah your knife probability cloud is clean through your hand and out the other side!"

Comment by Robert Miles (robert-miles) on Slack gives you space to notice/reflect on subtle things · 2022-04-28T10:21:45.467Z · LW · GW

This is actually a lot of what I get out of meditation. I'm not really able to actually stop myself from thinking, and I'm not very diligent at noticing that I'm thinking and returning to the breath or whatever, but since I'm in this frame of "I'm not supposed to be thinking right now but it's ok if I do", the thoughts I do have tend to have this reflective/subtle nature to them. It's a lot like 'shower thoughts' - having unstructured time where you're not doing anything, and you're not supposed to be doing anything, and you're also not supposed to be doing nothing, is valuable for the mind. So I guess meditation is like scheduled slack for me.

Comment by Robert Miles (robert-miles) on Rationalists Should Learn Lock Picking · 2022-04-25T11:24:12.508Z · LW · GW

I also like the way it changes how you look at the world a little bit, in a 'life has a surprising amount of detail', 'abstractions are leaky' kind of way. To go from a model of locks that's just "you cannot open this without the right key", to seeing how and why and when that model doesn't work, can be interesting. Other problems in life sometimes have this property, where you've made a simplifying assumption about what can't be done, and actually if you look more closely that thing in fact can sometimes be done, and doing it would solve the problem.

Comment by Robert Miles (robert-miles) on How to Lumenate (UK Edition) · 2022-04-25T11:10:08.967Z · LW · GW

> it turns out that the Litake brand which I bought first doesn't quite reach long enough into the socket to get the threads to meet, and so I had to return them to get the LOHAS brand.

I came across a problem like this before, and it was kind of a manufacturing/assembly defect. The contact at the bottom of the socket is meant to be bent up to give a bit of spring tension to connect to the bulb, but mine were basically flat. You can take a tool (what worked best for me was a multitool's can opener) and bend the tab up more so it can contact bulbs that don't screw in far enough. UNPLUG IT FIRST though

Comment by Robert Miles (robert-miles) on Robert Miles's Shortform · 2022-02-15T03:21:30.922Z · LW · GW

Learning Extensible Human Concepts Requires Human Values

[Based on conversations with Alex Flint, and also John Wentworth and Adam Shimi]

One of the design goals of the ELK proposal is to sidestep the problem of learning human values, and settle instead for learning human concepts. A system that can answer questions about human concepts allows for schemes that let humans learn all the relevant information about proposed plans and decide about them ourselves, using our values.

So, we have some process in which we consider lots of possible scenarios and collect a dataset of questions about those scenarios, along with the true answers to those questions. Importantly these are all 'objective' or 'value-neutral' questions - things like "Is the diamond on the pedestal?" and not like "Should we go ahead with this plan?". This hopefully allows the system to pin down our concepts, and thereby truthfully answer our objective questions about prospective plans, without considering our values.

One potential difficulty is that the plans may be arbitrarily complex, and may ask us to consider very strange situations in which our ontology breaks down. In the worst case, we have to deal with wacky science fiction scenarios in which our fundamental concepts are called into question.

We claim that, using a dataset of only objective questions, it is not possible to extrapolate our ontology out to situations far from the range of scenarios in the dataset. 

An argument for this is that humans, when presented with sufficiently novel scenarios, will update their ontology, and *the process by which these updates happen depends on human values*, which are (by design) not represented in the dataset. Accurately learning the current human concepts is not sufficient to predict how those concepts will be updated or extended to novel situations, because the update process is value-dependent.

Alex Flint is working on a post that will move towards proving some related claims.

Comment by Robert Miles (robert-miles) on LW Open Source – Overview of the Codebase · 2021-10-27T10:43:44.293Z · LW · GW

Ah ok, thanks! My main concern with that is that it goes to "https://z0gr6exqhd-dsn.algolia.net", which feels like it could be a dynamically allocated address that might change under me?
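For anyone else trying this, a minimal sketch of querying it directly via Algolia's standard REST query endpoint; the index name and search-only API key below are placeholders I don't know, they'd have to be read out of the site's network requests:

```python
import requests

APP_ID = "z0gr6exqhd"
SEARCH_KEY = "<search-only-api-key>"  # placeholder
INDEX = "<lesswrong-index-name>"      # placeholder

def search_lw(query: str, hits: int = 5):
    # Algolia's query endpoint: POST /1/indexes/{index}/query
    resp = requests.post(
        f"https://{APP_ID}-dsn.algolia.net/1/indexes/{INDEX}/query",
        headers={
            "X-Algolia-Application-Id": APP_ID,
            "X-Algolia-API-Key": SEARCH_KEY,
        },
        json={"params": f"query={query}&hitsPerPage={hits}"},
    )
    return resp.json().get("hits", [])
```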

Comment by Robert Miles (robert-miles) on LW Open Source – Overview of the Codebase · 2021-10-26T20:15:01.938Z · LW · GW

Is there a public-facing API endpoint for the Algolia search system? I'd love to be able to say to my discord bot "Hey wasn't there a lesswrong post about xyz?" and have him post a few links

Comment by Robert Miles (robert-miles) on In Wikipedia — reading about Roko's basilisk causing "nervous breakdowns" ... · 2021-10-26T11:24:09.523Z · LW · GW

Agreed. On priors I would expect above-baseline rates of mental health issues in the community even in the total absence of any causal arrow from the community to mental health issues (and in fact even in the presence of fairly strong mental health benefits from participation in the community), simply through selection effects. Which people are going to get super interested in how minds work and how to get theirs to work better? Who's going to want to spend large amounts of time interacting with internet strangers instead of the people around them? Who's going to be strongly interested in new or obscure ideas even if it makes the people around them think they're kind of weird? I think people in this community are both more likely to have some pre-existing mental health issues, and more likely to recognise and acknowledge the issues they have.

Comment by Robert Miles (robert-miles) on The Best Software For Every Need · 2021-09-20T14:47:04.698Z · LW · GW

Holy wow excalidraw is good, thank you! I've spent a long time being frustrated that I know exactly what I want from this kind of application and nothing does even half of it. But excalidraw is exactly the ideal program I was imagining. Several times when trying it out I thought "Ok in my ideal program, if I hit A it will switch to the arrow tool." and then it did. "Cool, I wonder what other shortcuts there are" so I hit "?" and hey a nice cheat sheet pops up. Infinite canvas, navigated how I would expect. Instant multiplayer, with visible cursors so you can gesture at things. Even a dark mode. Perfect.

Comment by Robert Miles (robert-miles) on The Best Software For Every Need · 2021-09-20T14:21:16.253Z · LW · GW

This is the factor that persuaded me to try Obsidian in the first place. It's maintained by a company, so perhaps more polish than some FOSS projects, but the notes are all stored purely as simple markdown files on your hard disk, so if the company goes under, the worst that happens is there are no more updates and I just keep using whatever the last version was.

Comment by Robert Miles (robert-miles) on Nonspecific discomfort · 2021-09-17T11:30:12.966Z · LW · GW

I suppose it makes sense that if you've done a lot of introspection, the main problems you'll have will be the kind that are very resistant to that approach, which makes this post good advice for you and people like you. But I don't think the generalisable lesson is "introspection doesn't work, do these other things" so much as "there comes a point where introspection runs out, and when you hit that, here are some ways you can continue to make progress".

Or maybe it's like a person with a persistent disease who's tried every antibiotic without much effect, and then says "antibiotics suck, don't bother with them, but here are the ways I've found to treat my symptoms and live a good life even with the disease". It's good advice but only once you're sure the infection doesn't respond to antibiotics.

Could it be that most people do so little introspection because they're bad at it and it would only lead them astray anyway? Possibly, but the advice I'd give would still be to train the skill rather than to give up on understanding your problems.

That said, I think all of the things you suggest are a good idea in their own right, and the best strategy will be a combination. Do the things that help with problems-in-general while also trying to understand and fix the problem itself.

Comment by Robert Miles (robert-miles) on Nonspecific discomfort · 2021-09-14T11:46:10.213Z · LW · GW

I think this overestimates the level of introspection most people have in their lives, and therefore underestimates the effectiveness of introspection. I think for most people, most of the time, this 'nonspecific discomfort' is almost entirely composed of specific and easily understood problems that just make the slightest effort to hide themselves, by being uncomfortable to think about.

For example, maybe you don't like your job, and that's the problem. But, you have some combination of factors like

  • I dreamed of doing job X for years, so of course I like doing X
  • I spent so long training and working hard to be allowed to do job X, I can't quit
  • I'm an X-er, that's who I am, what would I even be if I stopped doing job X?

These kinds of things prevent you from ever thinking the thought "I have a problem which is that I don't like doing X and maybe want to do something else". So you have this general feeling of dissatisfaction which resists being pinned on the actual source of the problem, and may pin itself to other things. "Maybe if I get that new, better X-ing equipment", "Maybe if I get promoted to Senior X-er".

Probably doing exercise and socialising and cooking will help you feel better about a life doing a job you don't like, but ten minutes of honest focused introspection would let you see the problem and start to actually deal with it.

It seems plausible to me that there are also problems that are really deeply defended and will resist introspection very effectively, but I think most people haven't spent ten minutes by the clock just really trying to be honest with themselves and stare at the uncomfortable things, and until you've at least done that it's too soon to conclude that your problem can't be understood and dealt with. Certainly introspection can give you wrong answers, but usually the problem is just that people have barely tried introspection at all.

Comment by Robert Miles (robert-miles) on Lifeism in the midst of death · 2021-06-25T22:49:22.289Z · LW · GW

At my grandmother's funeral I read Dirge Without Music by Edna St. Vincent Millay, which captured my feelings at the time fairly well. I think you can say things while reading a poem that you couldn't just say as yourself.

Comment by Robert Miles (robert-miles) on What will 2040 probably look like assuming no singularity? · 2021-05-19T11:49:05.911Z · LW · GW

On point 12, Drone delivery: If the FAA is the reason, we should expect to see this already happening in China?

My hypothesis is that the problem is noise. Even small drones are very loud, and ones large enough to lift the larger packages would be deafening. This is something that's very hard to engineer away, since transferring large amounts of energy into the air is an unavoidable feature of a drone's mode of flight. Aircraft deal with this by being very high up, but drones have to come to your doorstep. I don't see people being ok with that level of noise on a constant, unpredictable basis.

Comment by Robert Miles (robert-miles) on Rationalism before the Sequences · 2021-04-08T11:07:45.063Z · LW · GW

It would certainly be a mistake to interpret your martial art's principle of "A warrior should be able to fight well even in unfavourable combat situations" as "A warrior should always immediately charge into combat, even when that would lead to an unfavourable situation", or "There's no point in trying to manoeuvre into a favourable situation"

Comment by Robert Miles (robert-miles) on A Medical Mystery: Thyroid Hormones, Chronic Fatigue and Fibromyalgia · 2021-04-06T13:14:39.002Z · LW · GW

I just stumbled across this and see it is in fact 5 years later! Have you seen anything interesting from GWAS so far?

Comment by Robert Miles (robert-miles) on Disentangling Corrigibility: 2015-2021 · 2021-04-01T10:45:22.897Z · LW · GW

Note that the way Paul phrases it in that post is much clearer and more accurate:

> "I believe this concept was introduced in the context of AI by Eliezer and named by Robert Miles"

Comment by Robert Miles (robert-miles) on Disentangling Corrigibility: 2015-2021 · 2021-04-01T10:43:06.902Z · LW · GW

Yeah, I definitely wouldn't say I 'coined' it, I just suggested the name.

Comment by Robert Miles (robert-miles) on If you've learned from the best, you're doing it wrong · 2021-03-19T10:44:49.862Z · LW · GW

Worth noting that the 'corrupt polymaths' problem only happens in areas that aren't too easy to measure (which is most areas). But like, the famous best 100m sprinter actually is just the best, he didn't need to do any politics to be recognised.

Comment by Robert Miles (robert-miles) on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-03-15T12:14:41.760Z · LW · GW

Yeah, the mechanics of helicopter rotors are pretty complex and a bit counter-intuitive; Smarter Every Day has a series on it.

Comment by Robert Miles (robert-miles) on Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain · 2021-03-04T10:32:18.658Z · LW · GW

I came here to say this :)

If you do the stabilisation with the rotors in the usual helicopter way, you basically have a Chinook (though you don't need the extra steering propeller because you can control the rotors well enough)

Comment by Robert Miles (robert-miles) on Why are young, healthy people eager to take the Covid-19 vaccine? · 2020-12-04T10:38:55.349Z · LW · GW

A neglected motivation: If I'm vaccinated, and my friends are vaccinated, I can hang out with my friends again.

Comment by Robert Miles (robert-miles) on I made an N95-level mask at home, and you can too · 2020-11-26T11:34:25.712Z · LW · GW

My understanding is the CO2/O2 thing is almost completely a red herring/non-issue. Firstly of course any mask or filter is going to let through O2 and CO2 molecules completely indiscriminately since they're far too small to be affected. And secondly you always breathe in some of the air you breathed out, since it's still in your airways. In the worst case, adding a mask would increase this re-inhaled amount by the volume of the space between the mask and the face, which is pretty small. So breathing through a mask is like breathing through a tube with the same inner volume as the inside-mask space - a regular swimming snorkel results in much more re-breathing, and is also not a problem. It wouldn't surprise me if some people are re-breathing more without a mask than others do with a mask, just because they have a longer neck or larger airways.
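Rough numbers to make that concrete. Tidal volume and anatomical dead space are textbook physiology figures; the mask and snorkel volumes are my guesses:

```python
# Back-of-envelope comparison of re-breathed air per breath.
tidal_volume = 500           # mL per breath at rest (textbook figure)
anatomical_dead_space = 150  # mL re-breathed with no mask (textbook figure)
mask_dead_space = 100        # mL between mask and face (guess)
snorkel_volume = 180         # mL inside a typical snorkel tube (guess)

for label, extra in [("no mask", 0),
                     ("mask", mask_dead_space),
                     ("snorkel", snorkel_volume)]:
    rebreathed = anatomical_dead_space + extra
    print(f"{label}: ~{rebreathed / tidal_volume:.0%} of each breath re-breathed")
```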

Comment by Robert Miles (robert-miles) on Any work on honeypots (to detect treacherous turn attempts)? · 2020-11-12T21:18:55.303Z · LW · GW

A related keyword to search for is 'tripwires', which might be thought of as honeypots that are connected to an automatic shutdown.

Comment by Robert Miles (robert-miles) on Why indoor lighting is hard to get right and how to fix it · 2020-11-01T19:50:33.223Z · LW · GW

What are your thoughts on DIYPerks' recent artificial sunlight project?